MCNPX Estimation of Photoneutron Dose to Eye Voxel Anthropomorphic Phantom From 18 MV Linear Accelerator
The dose due to photoneutron contamination outside the field of irradiation can be significant when using high-energy linear accelerators. The eye is a radiation-sensitive organ, and this risk increases when high linear energy transfer neutron radiation is involved. This study aimed to provide a fast method to estimate the photoneutron dose to the eye during radiotherapy. A typical high-energy linear accelerator operating at 18 MV was simulated using the Monte Carlo N-Particle Transport Code System extended version (MCNPX 2.5.0). The latest International Atomic Energy Agency photonuclear data library release was integrated into the code, accounting for most of the known elements and isotopes used in typical linear accelerator construction. The photoneutron flux from a 5 × 5 cm² field size was scored at the treatment table plane and used as a new source for estimating the absorbed dose in a high-resolution eye voxel anthropomorphic phantom. In addition, common shielding media were tested for their ability to reduce the photoneutron dose to the eye. Introducing a 2 cm thick layer of a common neutron shielding medium reduced the total dose received in the eye voxel anthropomorphic phantom by 54%. In conclusion, individualized treatment based on photoneutron dose assessment is essential to better estimate the secondary dose inside or outside the field of irradiation.
Introduction
The use of high-energy photon beams for deep-seated tumors has several advantages. However, assessment of the associated secondary photoneutron contamination is also essential for complete patient dose profiling. The neutron fluence behavior and energy spectra are difficult to measure directly. Therefore, recommendations 1-3 covering photonuclear cross-section data and the best methods for neutron measurements in a high-energy radiotherapy suite have been issued. Several important parameters must be considered for photoneutron dose investigation during radiotherapy, including complete specifications for the linear accelerator and full knowledge of the elemental compositions of all materials inside a typical radiotherapy treatment room. 2,3 However, there are few studies on short-lived 4 or long-lived isotopes produced by photonuclear or photoneutron products. 5 Monte Carlo (MC) simulation is one of the most effective tools for assessing photoneutron doses. A total of 71% of relevant studies have utilized MC methods, and all experimental works are accompanied by analytical or MC calculations. In addition, MC is the preferred choice given the improved calculation times (owing to increased computational capacity) and the availability of photonuclear cross-section data. 6 The majority of photoneutrons are produced by heavy elements (tungsten (W), lead (Pb), copper (Cu), and iron (Fe)) in the target and gantry, including the beam-conforming and shielding compartments. 7 Multiple bremsstrahlung photons are typically produced by incident electrons. At this stage, there is a negligible probability that neutrons will be produced directly by electron interactions. However, emission of multiple photoneutrons is possible, depending on the incident photon energy. Different (γ, xn) interactions have different thresholds; in this study, only the (γ, n) interactions are relevant. In most cases, short-lived proton-rich isotopes are produced, particularly from organic elements such as carbon (C), oxygen (O), and nitrogen (N). These isotopes then decay via positron emission and subsequent gamma emission. Meanwhile, neutron production follows a typical fission-like neutron spectrum with an asymmetrical Gaussian distribution. Some neutrons are attenuated in the shielding compartments, while others travel isotropically around the treatment room and are scattered, thermalized, and finally absorbed. Various short- and long-lived isotopes can be produced, depending on the absorbing material and neutron energy. 8 The eye is a high-risk organ during radiation exposure, 9 and this risk increases when high linear energy transfer (LET) neutron radiation is involved. A previous study investigated the correlation between eye complications and the dose received during head and neck radiotherapy in cases where the eye was at the beam entrance or exit and not the main target of the radiotherapy treatment. 10 The photoneutron dose to the eyes outside the irradiation field during radiotherapy treatment was also estimated for a mathematical medical internal radiation dosimetry (MIRD) anthropomorphic phantom 11 and for a voxel anthropomorphic phantom. 12 Kim and Lee reported variations in photoneutron production with irradiation field size, indicating that a 20 × 20 cm² irradiation field contributed a higher photoneutron dose than other field sizes. 13 Meanwhile, Dowlatabadi et al. reported a lower photoneutron dose for 20 × 20 cm² and 5 × 5 cm² than for 10 × 10 cm² irradiation field sizes.
14 Taylor and Kron reported fluctuating uncertainties associated with secondary photoneutron dose assessment for photon energies as low as 6 MV, with a comparable risk of secondary cancer when using intensity-modulated radiation therapy (IMRT) at 18 MV photon energy. 6 Therefore, radiation-sensitive organs outside the field of irradiation can be exposed to additional photoneutron doses. 15 This study aimed to provide a fast method to estimate the photoneutron dose to the eye outside the field of irradiation during radiotherapy treatment. Toward this goal, a simulation of an 18 MV high-energy linear accelerator and a high-resolution eye voxel anthropomorphic phantom was carried out using MCNPX 2.5.0 16 to estimate the photoneutron dose to the eye at a peripheral position (X = 0, Y = 20, Z = −100 cm) from a 5 × 5 cm² irradiation field at the isocenter.
Materials and Methods
The simulation scenario accounted for most of the major structures found in a typical medical linear accelerator operating at 18 MV energy with a 5 × 5 cm² irradiation field at the isocenter. The dimensions of the linear accelerator were determined as previously described. 17 In addition, primary shielding and iron shielding of the multileaf collimator (MLC) were included. The simulation consisted of 2 stages. First, photoneutrons were tracked from their origin in the target, flattening filter, collimation structures, and MLC to a thin disc (r = 2.5 cm) located on the treatment table plane. The disc was located in air at an approximate position relative to the irradiation field of the beam (X = 0, Y = 20, and Z = −100 cm). This closely resembled a normal position where the patient was lying supine, with the disc positioned on top of the open-eye voxel anthropomorphic phantom. The volume-based F4 tally was applied to the entire disc, and the F1 surface tally was applied to the upper and lower surfaces of the disc to compare and cross-check the scored photoneutron spectra.
The tallied flux energy bins were used to describe a disc of the same size as a new photoneutron source: the photoneutron spectra tallied across the disc served as the photoneutron source in the second stage of the simulation. The new source was positioned on the eye anthropomorphic voxel phantom to simulate photoneutron dose deposition. In addition, the photoneutron flux was tallied independently across the table plane (−100 to 100 cm) using the F5 detector tally.
Most eye dosimetry models have been employed for radiation treatment planning and dose conversion factor calculation. 15,18 The voxel eye anthropomorphic phantom 19 employed in this study was based on 81 slices obtained from the female data of the Visible Human Project. Semi-automatic segmentation was carried out by color-labeling each pixel. Consequently, 15 identified structures were assigned 15 identification (ID) numbers, and the entire phantom was adapted into MCNPX using a lattice card. 18 The data were presented as a 256 × 256 × 81 array with a voxel size of 0.33 mm³. Each slice was segmented with different color intensities to contrast the various organs/tissues in the original images. Figure 1 shows part of the geometry of the simulated linear accelerator treatment head, specifying a 5 × 5 cm² irradiation field size, and the geometry section of the open-eye voxel anthropomorphic phantom using MCNPX plotting, featuring color intensities for the various tissue volumes identified in the phantom. The shielding thickness suggested for the photoneutrons is shown at the top of the phantom and is explained further in the Results section.
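To illustrate how a segmented voxel array can be adapted into an MCNPX lattice card, the sketch below converts a 3D array of organ ID numbers into a FILL entry list using MCNP's repeat (nR) compression. This is an illustrative Python helper, not the tool used by the authors; the array shape, universe numbering, and card formatting are assumptions to be adapted to the actual input deck.

```python
import numpy as np

def fill_card(ids: np.ndarray, per_line: int = 8) -> str:
    """Emit an MCNPX-style FILL entry list for a voxel lattice.

    `ids` is a 3D array of integer universe numbers (one per voxel),
    e.g. a 256 x 256 x 81 organ-ID array.  Runs of identical IDs are
    compressed with MCNP's `nR` repeat syntax.
    """
    nx, ny, nz = ids.shape
    flat = ids.ravel(order="F")  # x varies fastest, as the lattice expects
    tokens, i = [], 0
    while i < len(flat):
        j = i
        while j + 1 < len(flat) and flat[j + 1] == flat[i]:
            j += 1
        run = j - i + 1
        tokens.append(str(flat[i]) if run == 1 else f"{flat[i]} {run - 1}R")
        i = j + 1
    lines = [f"FILL=0:{nx-1} 0:{ny-1} 0:{nz-1}"]
    for k in range(0, len(tokens), per_line):
        lines.append("      " + " ".join(tokens[k:k + per_line]))
    return "\n".join(lines)

# Tiny demo: a 4 x 4 x 1 block of tissue 2 with a single voxel of tissue 5
demo = np.full((4, 4, 1), 2, dtype=int)
demo[1, 2, 0] = 5
print(fill_card(demo))
```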
The latest photonuclear data libraries were downloaded from the IAEA portal. 1 The data included details of cross-sections and the accompanying emission spectra for 209 isotopes in the evaluated nuclear data file format suitable for MCNPX processing. The integration was completed by inserting all element array file headings into the MCNPX cross-section directory (xdir) array list and the full library into the MCNPX library directory. The photonuclear library was then invoked by appending the photonuclear designation extension (0000.12u) to each element isotope on the material card in the MCNPX input file.
To increase the precision and efficiency of the MC calculations, a group of techniques referred to as variance reduction methods was adopted. A mesh-superimposed importance weight-window generator was used for variance reduction in the simulation. A rectangular mesh was tested, and the best combination was found by setting the window boundaries along the Z-plane of the beam with 2 large bins (X- and Y-planes) covering the entire geometry. The window boundaries along the Z-direction were varied and increased in the region of scoring interest parallel to the assumed plane of the treatment table, where the F5 detector tally and F4 flux tally were located.
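The core weight-window games are simple to state: particles heavier than the window are split into copies, and particles lighter than it play Russian roulette, with the expected total weight preserved either way. The minimal Python sketch below shows that logic only; the window bounds, survival weight, and split cap are illustrative assumptions, not the mesh-generated values used in the study.

```python
import random

def apply_weight_window(weight: float, w_low: float, w_high: float,
                        survival_weight: float):
    """Return the list of particle weights after weight-window checks.

    Splitting: a particle heavier than the window is divided into
    roughly equal-weight copies inside the window.  Russian roulette:
    a particle lighter than the window survives with probability
    weight/survival_weight (promoted to survival_weight) or is killed.
    Both games preserve the expected total weight.
    """
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 10)  # cap the number of splits
        return [weight / n] * n
    if weight < w_low:
        if random.random() < weight / survival_weight:
            return [survival_weight]
        return []                              # killed by roulette
    return [weight]                            # inside the window

random.seed(1)
print(apply_weight_window(5.0, 0.5, 2.0, 1.0))  # split into copies
print(apply_weight_window(0.1, 0.5, 2.0, 1.0))  # roulette outcome
```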
The total energy deposition tally (+F6) was used to score the dose in each tissue identified in the eye voxel anthropomorphic phantom. Additional mesh tallies were set independently of the previous mesh to investigate the efficiency problems of the source and the photoneutron flux across the geometry. Several cards necessary for the run were activated in the MCNPX file. The bremsstrahlung biasing card was used to improve bremsstrahlung production. Physics cards for electrons, photons, and neutrons were used to control the upper- and lower-energy limits. A photonuclear material card and forced-collision cards for photons and neutrons were used to define materials for photonuclear table interaction and to force neutron or photon collisions in each cell.
The compartment structures and elemental compositions employed in the simulations of the gantry head of the linear accelerator are listed in Table 1. The amount of each isotope was included as the natural abundance fraction of the element when available in the new photonuclear library. The tissues identified in the voxel eye anthropomorphic phantom and their elemental compositions were adopted from the original phantom description. 19 The computer used for the simulation was a Hewlett-Packard Pavilion laptop with an AMD Ryzen 7 processor at 1801 MHz.
Results
The weight-window boundaries were fitted to avoid overlapping cell boundary planes. Figure 2 illustrates the tracked photoneutrons across the problem geometry in a thin slice of fine mesh of 1 × 1000 × 1000 bins along the x-, y-, and z-planes. The image illustrates the effect of using a mesh-based weight-window generator. The number of generated neutrons was checked in the MCNPX print table for neutron particle creation against the lost neutron particles for boundary optimization. The number of neutrons and the efficiency (number of particles/computer time) of the source were 10× higher with the weight-window method than without variance reduction. The design of the mesh was oriented toward the scoring position, resulting in more particles being generated toward the region and tally of interest. Detailed discussions of weight-window settings and their options are available elsewhere. 20,21 The simulation files were run for, on average, 3 × 10⁸ particle histories until satisfactory statistical and MCNPX code limit checks were achieved, with a maximum reported relative error of less than 1%. The computer time varied with the tally type, with a maximum of 260 min to score the average photoneutron spectrum using the F4 tally. The F5 detector diagnostic tally was defined as a detector sphere of selected size (r = 5 cm in this study) with defined coordinates along the plane of the table (from −100 cm to 100 cm along the Y-plane relative to the isocenter; Figure 3). As the name implies, F5 provided diagnostic information regarding the contribution of photoneutrons from different parts of the geometry. This information was printed as a table along with the results of the tally. It showed the photoneutron generation in each cell defined in the geometry that contributed to the F5 detection spheres. The analysis of these results indicated that the contribution of the major structures varied with the location of the detection sphere. In general, the main contributions were from the primary collimator, flattening filter, target, secondary collimator, upper and lower jaws, and MLC. The photoneutron flux inside the beam was slightly lower than that at 20 cm outside the field of irradiation and then dropped gradually with increasing distance from the isocenter.
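A standard way to quantify the efficiency gain described above is the Monte Carlo figure of merit, FOM = 1/(R²T), which is roughly constant for a well-behaved tally. The numbers below are illustrative only; the no-variance-reduction runtime is a hypothetical value, not one reported in the paper.

```python
def figure_of_merit(rel_error: float, minutes: float) -> float:
    """Monte Carlo figure of merit, FOM = 1 / (R^2 * T).

    R is the tally relative error and T the computer time; a higher
    FOM means a more efficient calculation, so the ~10x gain in source
    efficiency reported above should appear as a proportionally larger FOM.
    """
    return 1.0 / (rel_error ** 2 * minutes)

# Same 1% relative error reached in 260 min with weight windows
# vs. a hypothetical 2600 min without them.
print(figure_of_merit(0.01, 260.0))   # ~38.5
print(figure_of_merit(0.01, 2600.0))  # ~3.85
```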
The F4 tally of the photoneutron flux from the irradiation field is shown in Figure 4. Ideally, a source surface file could have been created; however, given the distal position of the tallied surface, accumulating sufficient particles to create a source file would have exceeded the capacity of the computer used in this study. Thus, a simpler source was assumed by retaining as much information as possible (i.e., energy and spectral direction). The energy and direction of the new source were defined using an SDEF source card. The new source direction assumed equal emission probability and was perpendicular to the open-eye voxel anthropomorphic phantom, with energy bin intensities provided by the F4 tally.
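As a sketch of how tallied energy bins can be turned into an SDEF energy distribution, the helper below writes SI/SP histogram cards from bin edges and scored fluxes. The card numbers, bin structure, and flux values are hypothetical, not taken from this study; MCNPX renormalises the SP probabilities, so relative intensities suffice.

```python
def sdef_energy_cards(bin_edges, fluxes):
    """Build SDEF energy-distribution cards (SI1/SP1) from a binned spectrum.

    `bin_edges` are the tally energy-bin upper edges in MeV and `fluxes`
    the corresponding scored fluxes.  Following the usual SDEF histogram
    convention, SI lists the bin bounds (with a leading lower bound of 0)
    and SP lists a leading 0 followed by one probability per bin.
    """
    si = "SI1  H  0 " + " ".join(f"{e:.4g}" for e in bin_edges)
    sp = "SP1  D  0 " + " ".join(f"{f:.4g}" for f in fluxes)
    return si + "\n" + sp

edges = [1e-8, 1e-6, 0.1, 1.0, 10.0]             # MeV, illustrative bins
flux = [2.1e-9, 8.0e-9, 1.5e-8, 3.0e-8, 4.0e-9]  # illustrative tally values
print(sdef_energy_cards(edges, flux))
```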
The results of the dose calculations in the major tissues and organs identified in the eye voxel anthropomorphic phantom are listed in Table 2. The results from the F6 tally were obtained in MeV/g and then converted to absorbed dose (rad) using a tally multiplier with an appropriate unit conversion factor (C = 2.6 × 10⁻⁸). The total dose to the eye was reported in Sv/h per source particle using a built-in dose function in conjunction with the +F6 tally. 16
Discussion
Radiation-sensitive organs outside the field of irradiation, such as the eyes, can be exposed to additional photoneutron doses. In this study, the total absorbed dose for all the tissues and organs identified in the eye anthropomorphic voxel phantom was 0.00473 μGy/source particle/MU, and the total equivalent dose was 4.443 × 10⁻⁹ Sv/h per source particle. Studies such as those of Martinez-Ovalle et al. and Chegeni et al. 11,12 on the photoneutron dose to the eye outside the field of irradiation for a typical 18 MV photon beam radiotherapy treatment reported larger doses. In a voxel anthropomorphic model for adult patients, the total absorbed dose was 1.9 μGy in the eye lens, eye gel, and optic nerve from an anterior-posterior pelvic treatment with a 24.6 × 17.7 cm² irradiation field. The eye lens in a voxel anthropomorphic model for an adult patient recorded an absorbed dose of 1.05 μGy, higher than that in other eye tissues. In the study by Martinez-Ovalle et al., 11 the status of the segmented eyelid was unclear for the voxel anthropomorphic phantom version (i.e., how much of the lid covered the eye). This may suggest that photoneutrons were absorbed in the eyelid, which would explain the lower dose to the eye gel, where most of the photoneutron dose is expected. A study using a mathematical MIRD anthropomorphic phantom 12 reported the highest total absorbed dose of 0.0531 mGy and an equivalent dose of 0.983 mSv, estimated for a 10 × 10 cm² irradiation field with an anterior-posterior beam in the abdominal region. The dose reported to the eyes in these studies referred to both eyes. Meanwhile, in the study by Chegeni et al., 12 the eyelid was not included in the MIRD anthropomorphic phantom, whereas it was segmented out for the open-eye model in the current study. The discrepancies between the results can be attributed to differences in eye models, field sizes, scoring positions, and linear accelerator specifications. Despite these differences, the overall results indicate the significance of the secondary photoneutron dose to peripheral sensitive organs, such as the eye, regardless of field size. Dose deposition is greatest in the eye gel because it constitutes the largest tissue in the eye and provides an ideal watery medium for scattering and absorbing photoneutrons. Table 2 provides a comparison of the dose calculated in the major tissues along with the effect of introducing 2 cm thick neutron shielding media (polycarbonate 22 and water). Introducing a material with a higher concentration of hydrogen (H) attenuates a large portion of the photoneutron fission spectrum. Water reduces the total dose received by the sensitive eye tissues by 43%, while polycarbonate reduces the dose by an average of 50%. This ratio can be increased by properly refining the protective thickness. New materials with suitable shielding properties for both fast and thermal neutrons are under investigation, 23,24 with initial reports indicating more than 70% efficiency. Applying the necessary protection when the oncology team decides on the course of treatment is routine practice in radiation therapy. However, few innovative practices have been suggested 25,26 for cases in which photoneutron dose assessment is fully conducted.
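A first-order way to reason about such hydrogenous slabs is the fast-neutron removal cross-section model, D ≈ D₀ exp(−Σᵣt). The sketch below uses rough, literature-style Σᵣ values chosen purely for illustration; it addresses the fast component alone, so it will not reproduce the total-dose reductions in Table 2.

```python
import math

def transmitted_fraction(sigma_r_per_cm: float, thickness_cm: float) -> float:
    """First-order fast-neutron attenuation, exp(-Sigma_r * t), where
    Sigma_r is a macroscopic removal cross-section in 1/cm."""
    return math.exp(-sigma_r_per_cm * thickness_cm)

# Illustrative removal cross-sections, not the values behind Table 2
for name, sigma_r in [("water", 0.10), ("polycarbonate-like plastic", 0.12)]:
    frac = transmitted_fraction(sigma_r, 2.0)  # 2 cm slab, as in the study
    print(f"{name}: ~{(1 - frac) * 100:.0f}% of the fast flux removed")
```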
Many studies have discussed inherent limitations in photoneutron dose estimation, [27][28][29][30] including the underestimation of photoneutron spectra below 1 MeV in MC simulations and other factors affecting analytical calculations, MC calculations, and experimental measurements under optimal conditions for photoneutron dose assessment. The simplified photoneutron source used in this study may lead to underestimation of the photoneutron dose. Photoneutron components originating from within the patient or from other materials not included in the simulation affect the amount and position of photoneutron production and, eventually, the expected photoneutron dose. A total of 14 major structures were simulated in this study, covering the major structures from the point where the electron beam interacts with the target onward to the isocenter. Other structures require detailed manufacturer specifications, which normally include the geometrical layout and, importantly, the elemental composition of each material. Photoneutron production depends on the energy and the elemental photonuclear cross-section; therefore, full knowledge of these details is required for better estimation of the photoneutron dose.
Conclusion
The eye voxel anthropomorphic phantom provides a fast method for photoneutron dose assessment in specific cases. The general principle of radiation protection requires that unwanted radiation should be diminished and that patients should receive minimal unnecessary extra doses. Therefore, individual photoneutron dose assessment is essential to better approximate the secondary dose for complete patient dose profiling. A small eyelid thickness may contribute significantly to the dose received in the eye gel. Suitable shielding with moderate protection reduces the received dose to the peripheral radiation-sensitive organs.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Data Availability
The data for the voxel eye anthropomorphic phantom used in this study are available from the corresponding author on reasonable request.
Temporal genetic diversity of Schistosoma japonicum in two endemic sites in China revealed by microsatellite markers
Schistosomiasis is one of the neglected tropical diseases. The causative agent of schistosomiasis in China, Schistosoma japonicum, has long been a major public health problem. An understanding of fundamental evolutionary and genetic processes in this species has major implications for its control and elimination. Intensive control efforts have greatly reduced the incidence of schistosomiasis in China, but little is known about the genetic consequences of these efforts. To investigate this, we sampled twice (years 2003 and 2011) from two endemic regions where populations of S. japonicum had persisted despite control efforts and genotyped these samples using ten microsatellite markers. Our main hypothesis was that parasite genetic diversity would be greatly reduced across this time period. There was no apparent reduction in allelic diversity, and a non-significant reduction in clonal diversity in these parasite populations between 2003 and 2011. We did, however, detect temporal genetic differentiation among the samples. Such a significant temporal genetic variation of S. japonicum populations has not been reported before.
Introduction
Schistosomiasis is a serious parasitic disease, infecting over 200 million people and threatening the health of about 779 million people around the world [1]. Schistosomiasis due to infection with Schistosoma japonicum has long been a major public health problem in China [2]. In the last decade, the disease has been intensively controlled in most endemic regions in China [3]. Specifically, in 2004, a national program for schistosomiasis control was launched, aiming to reduce the transmission of S. japonicum from cattle and humans to snails. Effective interventions were implemented from 2005 to 2007, including keeping cattle away from snail-infested grasslands, providing farmers with mechanised farming tools and improving public environmental sanitation by supplying tap water and building lavatories. Annual synchronous rounds of chemotherapy are also routinely used [4]. It has been proposed that evolutionary theory should have an important role in the design, application and interpretation of such programs [5]. Control interventions are expected to reduce the genetic diversity within the parasite population. Associated with this, intensive selection is likely, driving rapid evolutionary changes in natural parasite populations [5]. For example, under strong drug selective pressure, resistance could emerge and become fixed, since drug-resistant phenotypes and alleles have an advantage over the wild type in these adverse circumstances [5]. Although we cannot test for such selection here, we can look for evidence of reduced genetic diversity in parasite populations following intervention.
Genetic diversity is of great importance for any organism to adapt in a changing environment and, in the case of parasites, to respond to the pressure of intervention programs [6]. Different types of molecular markers, for instance, mitochondrial DNAs [7] and microsatellites [8], have been applied to evaluate genetic variation in S. japonicum populations among different geographical regions [9], as well as the genetic differentiation among host species [10] or host individuals [11]. Some recent studies have reported spatial genetic variation [10,12,13] among natural S. japonicum populations. There have also been reports on spatio-temporal modeling for the prevalence of S. japonicum infection [14,15]. However, to date, no study has investigated temporal changes in genetic structure of S. japonicum populations.
In this study, we sampled again in 2011 from seven endemic regions where S. japonicum populations were present in 2003. In 2011, the parasite was detected in only two of the seven locations, suggesting a high degree of intervention success at several locations. For the two locations at which parasites had persisted, we hypothesised that a considerable reduction in genetic diversity (allelic richness and clonal diversity) would have occurred between these time points as a consequence of the interventions. In addition, we applied a range of population-genetic analyses under the hypothesis that there should be no temporal changes between the time points.
Ethics statement
All procedures involving animals were carried out according to the guidelines of the Association for Assessment and Accreditation of Laboratory Animal Care International. Our protocol followed institutional ethical guidelines that were approved by the ethics committee at the National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (NIPD, China CDC; Permit No: IPD2008-4).
Sample collections
In 2011, we re-sampled seven endemic sites in China that had been investigated in 2003 for our previous project [16]. In the interim, the National Program for Schistosomiasis Control had reduced the transmission of S. japonicum [17] to such an extent that infected snails were detected in only two out of seven locations: Yueyang city in Hunan Province and Shashi city in Hubei Province (Table 1). These two sites are both close to the Yangtze River and about 129 km apart.
Ten laboratory-raised Kunming mice were each infected with about 1,000 S. japonicum cercariae isolated from infected snails (Oncomelania hupensis) collected from two locations (i.e. Yueyang and Shashi) in 2011. For each location, we screened 5,000 snails in 2011 and detected three infected snails from Shashi and 25 infected snails from Yueyang. After 45 days, adult worms were obtained by perfusing the hepatic portal system and mesenteric veins of each infected mouse. All worms were washed at least three times with normal saline (0.9 % NaCl) to remove the host tissues before being stored in 95 % ethanol at 4°C. The gender of each worm was noted.
DNA extraction, PCR amplification and microsatellite genotyping
In this study, we randomly selected 44 worms per location from the year 2011 samples for the genetic analyses. Genomic DNA was extracted individually from adult schistosomes using a DNeasy Blood & Tissue Kit (QIAGEN, Germany), following the manufacturer's instructions, and then stored at -20°C until use.
The DNA of each individual S. japonicum was genotyped at ten microsatellite loci (i.e. Sjp1, 4, 5, 6, 8, 9, 10, 14, 15 and 17) from our previous work [16]. The PCRs were performed in a total volume of 12 μl following the protocol in [16]. Then the PCR products were diluted to appropriate concentrations and analysed on an ABI 3730 capillary automated sequencer using a LIZ 500-labeled size standard. Allele sizes were read using GeneMapper software (version 4.0) and checked carefully.
To compare the genetic variation of S. japonicum populations between 2003 and 2011, we included microsatellite data from 51 worms from Yueyang and 23 worms from Shashi, which were collected in 2003 and published previously [16]. This yielded four population categories for analysis (two per time point and per location). For some analyses, these categories were further subdivided by sex (termed subpopulations here). We excluded all individuals for which data were missing.
Microsatellite analyses
Clonal diversity (R) was calculated manually as R = (G − 1)/(N − 1) [18], where G is the number of genotypes and N represents the sample size. Clonal diversity can vary from 0 (all individuals belong to one clone) to 1 (each individual is unique). All 162 individuals were included in this analysis. Principal coordinates analyses (PCoA) were then performed to display the relatedness of all unique multilocus genotypes (MLGs) in GenAlEx 6.5 [19]. Somatic mutations occurring during the proliferation of cercariae from a single miracidium may produce multiple near-identical MLGs [16]. Inclusion of clusters of near-identical siblings may bias results. We therefore removed near-identical MLGs in the subsequent analyses, retaining only a single representative of each cluster [20]. PCoAs were then performed to show the relatedness of the 82 retained MLGs. All subsequent analyses were based on these 82 MLGs. Genetic diversity indices (allelic richness, Ar, corrected for sample size) were estimated using FSTAT V2.9.3.2 [21]. As the sample size was low, males and females were not treated separately in the calculations of allelic richness. Genetic differentiation between pairs of samples was calculated as Wright's FST in Arlequin 3.11 [22], and the significance of the FST value was tested using 10⁴ permutations. Hierarchical analysis of molecular variance (AMOVA) was applied to partition genetic variance into spatial, temporal and within-sample components in Arlequin 3.11.
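As a minimal sketch, the clonal diversity index defined above can be computed directly from a list of multilocus genotypes; the toy genotypes below are hypothetical, not data from this study.

```python
def clonal_diversity(genotypes) -> float:
    """Clonal diversity R = (G - 1) / (N - 1), where G is the number of
    distinct multilocus genotypes and N the sample size.  R = 0 when all
    individuals belong to one clone; R = 1 when every individual is unique.
    """
    n = len(genotypes)
    g = len(set(genotypes))
    return (g - 1) / (n - 1)

# Hypothetical toy sample: 6 worms carrying 4 distinct multilocus genotypes
mlgs = ["A", "A", "B", "C", "C", "D"]
print(clonal_diversity(mlgs))  # (4 - 1) / (6 - 1) = 0.6
```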
Results
In total, 138 unique MLGs were detected among 162 S. japonicum individuals based on ten microsatellite loci. No shared MLG was found among the eight subpopulations (location, year and gender). The PCoA revealed that the 138 unique MLGs fell into a main cluster (Cluster 1) with small outlying clusters (Fig. 1a). Most (6 of 8) MLGs in the male subpopulation from Yueyang in 2011 (YY11M) were assigned to a small cluster (Cluster 2), whereas the other two were in Cluster 1 (Fig. 1a). Additionally, some (8 of 23) unique MLGs in the female subpopulation from Yueyang in 2011 (YY11F) formed another small cluster (Cluster 3) while the remainder fell into Cluster 1 (Fig. 1a). Overall, 56 individuals exhibited near-identical MLGs (Table 1), indicating that they might be siblings. When only a single representative of each near-identical MLG group was retained, no visually obvious temporal or spatial structuring was apparent among the remaining 82 MLGs (Fig. 1b).
Importantly, allelic richness among the 82 retained MLGs, corrected for sample size, did not change dramatically at either site between 2003 and 2011 (Table 1). Indeed, allelic richness at Shashi was greater in 2011 than in 2003. The clonal diversity (R) of the subpopulations ranged from 0.35 to 1.00 (average R = 0.85); the value of 1.00 (detected in three subpopulations; Table 1) indicates that every individual in those subpopulations carried a unique genotype. Genetic differentiation was significant for all tested pairs of the four S. japonicum samples, and the FST values ranged from 0.009 to 0.046 (averaged over all loci; Table 2). AMOVA analyses indicated that within-population variance explained most of the observed genetic variation (97.45%). The remaining genetic variation was due to additional variance across space (0.02%) and across time within space (2.53%; Table 3). The temporal component was significant but the spatial one was not (Table 3).
Discussion
Using microsatellite markers, we noted a non-significant decrease in clonal diversity between the two time periods, but no obvious change in allelic richness. Measures of gene flow and population structure indicated substantial temporal genetic change (between 2003 and 2011) according to the population differentiation tests (FST) and the AMOVA. Spatial structure (between Yueyang and Shashi) was also indicated by the FST values, but not by the AMOVA results.
Clonal diversities were in agreement with previous studies using microsatellites [8,20]. A high clonal diversity (average R = 0.94) was detected in S. japonicum populations from the year 2003. However, by 2011, this had decreased but not significantly. The clonal diversity in subpopulation YY11M was very low (0.35), perhaps indicating that the sample contained many sibling cercariae from a single male miracidium. Such a temporal change in clonal diversity of S. japonicum populations has not been observed previously.
The maintenance of high allelic richness across time was not expected. Infected snails were more difficult to find in 2011 than in 2003 (and indeed were not found at all in five of seven locations in 2011). This suggests a reduction in overall population size of S. japonicum in the region, but apparently not to an extent sufficient to impact markedly on allelic richness. Similarly, the extent of temporal population-genetic structure was unexpected. Such temporal changes are not previously known for S. japonicum, although structure at small spatial scales has been reported [8,11,16]. Little is known about the metapopulation dynamics of S. japonicum over the vast region of the mid-Yangtze basin, from which our samples came. Local extinction followed by repopulation from elsewhere in the region might partly explain the extent of genetic differentiation observed. Given the known fluxes of water in the region [23], especially in connection with annual floods, there may be a large regional population of S. japonicum subject to substantial dispersal and gene flow.
At our study locations, intensive control measures (annual chemotherapy) had been implemented during the investigation period (spanning 9 years), which might have acted as a strong selective pressure and been responsible for the temporal changes observed. There were also marked ecological changes during this period, in particular the completion of the Three Gorges Dam (TGD), which significantly reduced the amount of water reaching our sampling locations (Dongting and Poyang lakes) and has moderated the annual floods. Given the lack of baseline data from prior to the construction of the TGD, and the lack of understanding of metapopulation dynamics in S. japonicum, few strong conclusions can be drawn.
There are several limitations to our study. One is the use of different mammal hosts in 2003 (rabbits) and 2011 (mice). LoVerde et al. [24] noted that the use of rodents (as opposed to primates) as experimental hosts led to an increased reduction in genetic diversity over several generations in S. mansoni. However, we used mammal hosts only for a single passage (field-collected snails were used to infect laboratory hosts from which adult worms were harvested) and not serial passages such as those used by LoVerde et al. [24]. In effect, our worms are analogous to those subjected to electrophoresis from the M0 generation of laboratory hosts, in which very little deviation from the parental population was observed [24]. Another limitation relates to the small number of infected snails (three) found in Shashi in 2011. Despite this, the clonal and allelic diversities within the parasite population there were high. The inference is that at least one of the infected snails had been invaded by multiple unrelated miracidia. A further point to consider is that adult S. japonicum can be long-lived in the mammal host, 47 years being reported in one human case [25]. Long-lived individuals can carry substantial genetic diversity through a bottleneck event. The interventions have taken place over a relative handful of years, perhaps insufficient time for any genetic bottleneck to have caused genetic erosion. Similarly, the effects of regional ecological change, such as that caused by the TGD, may take more time to become apparent in the genetic structure of schistosome populations.
In conclusion, using ten microsatellite loci, we rejected the hypothesis that genetic diversity in two S. japonicum populations would be reduced following control programs between the years of 2003 and 2011. We did, however, find significant genetic differentiation across time in these populations. Overall, our findings provide a deeper understanding of molecular epidemiology and population genetics of S. japonicum, which may be of value for effective control and elimination of schistosomiasis.
Incorporating Floodplain Inundation as a Strategy in Flood Mitigation Plan
This paper promotes the awareness that nature and engineering structures can co-exist. Natural floodplain inundation is usually restrained to separate floodplain lands for human uses. In contrast to conventional flood control systems, a vision of restoring floodplain inundation in the Kuching Bypass Floodway is presented as a flood mitigation plan. Modelling of the approach indicates a reduction of flooded areas of up to 61%. By means of modelling, portions of floodplains are virtually preserved in their natural states and functions, a role that has often been undervalued. The floodplain permits storage and conveyance of floodwaters. At the same time, it provides replenishment of the adjoining wetlands. The strategy proves beneficial to both human and natural systems. It also calls for a systemic change in flood management: we can live with natural forces instead of forbidding them.
BACKGROUND
A floodplain is the area adjoining a river that is naturally covered by seasonal floodwater. A floodway is a channel and the parts of the floodplain connected to a river that are reasonably required to efficiently carry the floodwater of the river. Conventionally, flood control systems favour a compounded river and floodway, prohibiting spills over the river bank to protect heavy human settlements in the floodplain. Even so, floods remain common in every corner of the world.
Take the example of the Red River Floodway in Canada: the 48 km channel is recognized as one of the 16 engineering achievements that shaped the world since biblical times [1]. The floodway, first used in 1969, takes part of the Red River flow around the city of Winnipeg, Manitoba to the east and discharges it back into the Red River below the Lockport Dam (see Figure 1b). Used over 20 times in the 37 years from its completion to 2006, the floodway has saved an estimated $10 billion (CAD) in flood damages. The 1997 Red River flood, termed the "Flood of the Century", produced a volume of water that exceeded the safe capacity of the floodway, and water lapped within inches of the city's dikes. The city of Winnipeg suffered little flood damage, primarily because the floodway was capable of keeping the floodwater from entering the city (see Figure 1c) [2]. The upstream Red River cities of Fargo and Grand Forks, North Dakota, US, suffered lengthy recovery processes from the disastrous flood. However, the maximum spring discharges of the Red River have shown a rising trend, indicating that the flood hazard is becoming more severe than was initially assumed [3]. If this trend continues, future benefits of the floodway will continue to exceed expectations. The increasing vulnerability of the floodplain inhabitants poses new challenges. The flood diversion may influence flood levels in areas which are not normally flood-prone [4].
Another example is the Manggahan Floodway in Manila, Philippines. The metropolis of Manila covers formerly tidal flats along Manila Bay. During flood time, excess floodwater has been diverted from the Marikina River to the sea via the 9 km long Manggahan Floodway since 1984 (see Figure 2a). Discharge exceeding 600 m³/s inundates the low-lying areas of Manila, aggravated by the tidal fluctuation in Manila Bay. The greatest challenge in Manila is its rapid urbanization, which encourages massive movement of rural dwellers to urban centres. It has resulted in overcrowding of urban poor settlements in the metropolis-floodplain. The scarcity of affordable buildable lands and living spaces has led squatter communities to encroach on the floodway canals (see Figure 2b). An estimated 25,000 squatter houses are situated along the floodway, blocking access for dredging activities while solid wastes accumulate in the waterways. Originally 260 m wide, the floodway has been narrowed down to only 220 m. With less space in the floodway, water quickly breached its banks [6].
It is clear that standard engineering design intends to isolate the floodplain from the river channel to make space for human activities. Manmade structures like dikes are erected to protect floodplain communities. Therefore, once overflow happens, it causes damages and losses to many, as illustrated in the examples above.
II. MOTIVATION
Recent guidelines [7], however, have encouraged portions of floodplains to be reclaimed to bring back flooding during wet seasons. This allows the natural function of the floodplain to reoccur as communities realize that we can live with natural forces instead of forbidding them. This paper explores a nature-sensitive approach for the Kuching bypass floodway that incorporates floodplain inundation as part of the local flood mitigation plan. Reserving valuable floodplain lands for seasonal flooding is a luxury and usually not an option for decision makers; take the Manggahan Floodway, for instance. In order to counter changing climates, they tend to build bigger and deeper floodways, or add higher dikes to existing floodways, which are costly and unsustainable for long-term well-being. Alternatively, allowing natural flooding to take place could avoid these choices.
In the case of the city of Kuching, Malaysia, an 8 km manmade floodway is to be built across a broad plain of deep peat swamp, diverting floodwater from the Sarawak River to the Salak River (see Figure 3). The former is a freshwater system, while the latter is a coastal river lined with mangroves. The structure is expected to be in full operation by 2015. Technically, the floodway is capable of alleviating the flooding of the Kuching city centre [8]. The lowland peat swamp is unsuitable for agriculture and physical infrastructure development. Management of the peat swamp in its natural condition and conservation of its biodiversity is the best land use choice from a long-term perspective [9]. We argue here that floodplain inundation would be a major process for replenishing the wetland ecosystems [10]. On the other hand, incorporating floodplain inundation in the Kuching bypass floodway would ease the pressure of floodwater flushing and reduce the flooding vulnerability of the urban floodplain along the Sarawak River. An alternative approach is depicted in Figure 5. By disallowing floodplain inundation, the floodway acts as a bottleneck during high flow events, causing a jam of floodwaters upstream and subsequently flooding in the upstream stretches. It should be noted that the Second Barrage would block any floodwaters flowing downstream to Kuching city in such cases. The bunds in the downstream bypass floodway are virtually removed from the computer model to permit connection of the waterway and its floodplain. Floodplain lands and adjacent water form a dynamic physical and biological wetland system. The connection increases the area available to store and convey floodwaters and can reduce flood risk for nearby areas. From a technical point of view, a compounded bypass floodway has a capacity limit. By allowing floodplain inundation, the bottleneck mentioned above is lessened, easing the pressure of floodwater flushing and yielding an encouraging reduction of flooded areas. Hydrological monitoring stations distributed along the Sarawak River provide the necessary flow data for the upstream boundaries [12]. With that, a base model representing the existing conditions has been calibrated and validated to at least 80% confidence. The base model, carefully calibrated against river flows for one event, can be utilised to predict flows for a second event, including the flood bypass channel and associated floodplain inundation for investigation. The bypass is modelled as an extended river channel from the oxbow of the Sarawak River to the outlet of the bypass, excluding the Salak River. No gauges are currently available on the tidal Salak River. However, a tide table is available from the marine department [13] to represent the flows at the outlet of the bypass.
IV. RESULTS AND DISCUSSION
The most recent flood that hit the Kuching flats happened in January 2009. This devastating 100-year return period flood prompted the Sarawak State Government's determination to construct a bypass before the city centre. Computer modelling of the extreme event has its flood extent drawn on a background map in Figure 4. A modelling scenario including a conventional bypass floodway design is superimposed in the same figure for comparison. The bypass is lined with earthen bunds on both sides. The computed flood extent resulting from the control system under a repeat of the January 2009 flood indicates a reduction of 53% of flooded lands. Flooding is expected to persist in the Lower Sarawak River due to high astronomical tides experienced in Kuching Bay. Low-lying areas like Batu Kawa town on the Upper Sarawak River remain exposed to flood risk.
A further reduction of flooded lands, up to 61% in Batu Kawa town, is estimated when comparing the alternative approach to the conventional control system. Such natural processes cost far less money than it would take to build facilities to correct flooding. It also suggests that by restoring flooding in the floodplain, the design life cycle of current flood mitigation plans can be lengthened, instead of adding larger and larger structures to accommodate flood control in ever-changing climatic conditions.
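The percentage reductions quoted above can be reproduced from model output by comparing flooded-cell masks on a common grid; the sketch below uses synthetic masks (not the actual Kuching model results) purely to show the arithmetic.

```python
import numpy as np

def flooded_reduction(base_mask: np.ndarray, scenario_mask: np.ndarray) -> float:
    """Percent reduction in flooded area between two model runs.

    Masks are boolean rasters (True = flooded cell) exported from the
    hydraulic model on the same grid; the cell area cancels in the ratio.
    """
    base = base_mask.sum()
    return 100.0 * (base - scenario_mask.sum()) / base

# Illustrative 100 x 100 grids, not the actual model output
rng = np.random.default_rng(0)
base = rng.random((100, 100)) < 0.30               # ~30% of cells flooded
scenario = base & (rng.random((100, 100)) < 0.47)  # ~53% fewer flooded cells
print(f"reduction: {flooded_reduction(base, scenario):.0f}%")
```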
Figure 1. Red River Floodway, Canada [5]: a) components of the floodway around Winnipeg. Figure 2. a) Manggahan Floodway and Metro Manila; b) close-up of the Manggahan Floodway.
Yellow Pigment Powders Based on Lead and Antimony: Particle Size and Colour Hue
This paper reports the results of particle size analysis and colour measurements of yellow powders synthesised in our laboratories according to ancient recipes, aiming to produce pigments for paintings, ceramics, and glasses. These pigments are based on lead and antimony as chemical elements that, combined in different proportions and fired at different temperatures, for different times, and with various additives, gave materials of yellow colour varying in hue and particle size. Artificial yellow pigments based on lead and antimony have been widely studied, but no specific investigation of particle size distribution and its correlation with colour hue has been performed before. In order to evaluate the particle size distribution, segmentation of sample data was performed using the MATLAB software environment. The extracted parameters were examined by principal component analysis (PCA) in order to detect differences and analogies between samples on the basis of those parameters. Principal component analysis was also applied to colour data acquired by a reflectance spectrophotometer in the visible range according to the CIELAB colour space. Within the two examined groups, i.e., yellows containing NaCl and those containing K-tartrate, differences were found between samples and also between different areas of the same powder, indicating the inhomogeneity of the synthesised pigments. On the other hand, colour data showed homogeneity within each yellow sample and clear differences between the different powders. The comparison of results demonstrates the potential of particle segmentation and analysis in the study of the morphology and distribution of artificially produced pigment powders, allowing the characterisation of lead- and antimony-based pigments through micro-image analysis and colour measurements combined with a multivariate approach.
Introduction
The preservation, archival, and study of cultural heritage is of the utmost importance at local, national, and international levels [1]. In the last decade, researchers in the field of imaging science have contributed a growing set of tools for cultural heritage, thereby providing indispensable support to these efforts [2][3][4][5][6][7]. In this scenario, the morphological and morphometric analysis of pigment particles can supply a useful contribution to the knowledge and conservation of artistic objects. In general, the relationship between colour and particle size is known, and it is the object of studies in several fields such as food [8], earth science [9], and medicine [10]. The correlation between colour and particle size of artists' pigments is also relevant, especially for the conservation and restoration of polychrome artifacts [11]. The knowledge of the optical characteristics of pigments used by artists and suppliers in earlier times represents an important starting point for the study and characterisation of paintings, or of artworks in general. Table 1 lists the pigment powders used in this study, the modality of synthesis, and the old recipes from which they were obtained. PSAPPB2 was obtained starting from PSAPPB1 with an annealing process carried out under the same conditions used for producing PSAPPB1, according to the procedure reported in the recipes indicated in Table 1 [17,19]. In the recipes by Cipriano Piccolpasso and Giambattista Passeri, the amounts of reagents are reported in Roman libra (lb, "lire" in the recipes), specifically: Sb 4 lb, Pb 6 lb, feccia (lees) 1 lb (1 lb corresponds to 327.168 g).
Samples' Description and Image Acquisition
For the production of lead-antimony-based yellow pigments, pure grade chemicals supplied by Acros Organics (New York City, New York), MP Biomedicals (Santa Ana, California), and Sigma-Aldrich (St. Louis, Missouri) were used. The reagents were mixed in agate mortars by following the amounts suggested in the recipes and then placed into the laboratory furnace at room temperature. They were then heated in order to reach the required temperature. The temperature was maintained constant for 5 h. In the case of yellows prepared according to the common recipes by Cipriano Piccolpasso and Giambattista Passeri, a double firing was used [16].
Colour Measurements
Colour was measured with an X-Rite CA22 reflectance spectrophotometer according to the CIELAB colour system [23]. The characteristics of the colour measuring instrument are the following: light source D65; standard observer 10°; fixed measurement geometry 45°/0°; spectral range 400-700 nm; spectral resolution 10 nm; aperture size 4 mm. For each specimen, twenty-five measurements were performed in order to account for possible colour variations due to the particle size of the powders. Samples were mixed after each measurement, and then the average values and standard deviations were calculated. Measurements were performed at room temperature (about 20 °C) and a relative humidity of about 50%, controlled by the laboratory humidifier/dehumidifier.
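A common way to compare such averaged CIELAB readings between powders is the CIE76 colour difference, ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²). The paper does not state which difference metric it uses, so this choice and the triplet values below are illustrative assumptions.

```python
import math

def delta_e_ab(lab1, lab2) -> float:
    """CIE76 colour difference between two CIELAB triplets:
    Delta E*ab = sqrt(dL*^2 + da*^2 + db*^2)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical averaged readings for two yellow powders, (L*, a*, b*)
yellow_a = (82.0, 4.5, 62.0)
yellow_b = (79.5, 7.0, 55.0)
print(f"Delta E*ab = {delta_e_ab(yellow_a, yellow_b):.1f}")  # ~7.8
```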
Stereomicroscopic Investigation
Samples were characterised by optical microscopy using a Leica M205C stereomicroscope. A coaxial LED incident-light illumination optic unit was utilised as the illumination source. A magnification of 160× was adopted to obtain images of the powder samples and details of the morphological and morphometric parameters. The same areas were also acquired for each sample under transmitted light so as to highlight the morphological characteristics of the examined powders.
Data Processing
The segmentation process was performed using the MATLAB software environment (Version 7.11.1, MathWorks, Inc., Natick, MA, USA). Image segmentation is commonly used to process and analyse digital images with the aim of creating parts or regions, often on the basis of pixel characteristics. In order to maximise the segmentation of the elements in the image, the first step of processing is devoted to separating the background from the foreground and grouping pixel regions according to similarities in colour or form. Algorithms are therefore needed to transform the grey-scale image into a binary image (binarisation), so as to preserve the relevant content as much as possible (Figure 1A). At the same time, all objects less than 100 pixels in size were removed. Subsequently, the measures of each object in the binary image were extracted (Figure 1B). The extracted parameters can be listed as follows (a code sketch of this pipeline appears after the list):
• Area (Area): effective number of pixels in the region, returned as scalar.
• Centroid (Circ.): mass centre of the region, returned as a 1-by-Q vector. The first element of the centroid is the horizontal (x) coordinate of the mass centre, and the second element is the vertical (y) coordinate; the remaining elements are ordered by dimension.
• Eccentricity (Ecc.): the ratio between the distance of the ellipse foci and the length of its major axis. The value ranges between 0 and 1.
• Major axis length (M axis): length (in pixels) of the major axis of the ellipse that has the same normalised second central moments as the region, returned as scalar.
• Minor axis length (m axis): length (in pixels) of the minor axis of the ellipse that has the same normalised second central moments as the region, returned as scalar.
• Equivalent diameter (Eq. diameter): diameter of a circle with the same area as the region, returned as scalar; calculated as sqrt(4*Area/pi).
• Perimeter (Perim.): the distance around the boundary of the region, returned as scalar. The system calculates the perimeter by measuring the distance between each pair of adjacent pixels around the edge of the region.
• Hausdorff fractal (F. Haus.): the Hausdorff fractal dimension of an object represented by a binary image.
• Fractal "box-counting" (F. boxc.): counts the number N of D-dimensional boxes of size R necessary to cover a given percentage of the non-zero elements of the identified object.
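The original MATLAB code is not published; as a rough equivalent under that assumption, the following Python/scikit-image sketch reproduces the described pipeline (binarisation, removal of objects under 100 pixels, per-region measures). It assumes particles brighter than the background and omits the two fractal estimators, which have no one-line library equivalent.

```python
import numpy as np
from skimage import filters, measure, morphology
from skimage.color import rgb2gray

def segment_particles(image_rgb: np.ndarray, min_pixels: int = 100):
    """Binarise a powder micrograph and extract per-particle measures."""
    gray = rgb2gray(image_rgb)
    # Otsu binarisation; flip the comparison if particles are darker
    # than the background (e.g. under transmitted light).
    binary = gray > filters.threshold_otsu(gray)
    binary = morphology.remove_small_objects(binary, min_size=min_pixels)
    labels = measure.label(binary)
    rows = []
    for r in measure.regionprops(labels):
        rows.append({
            "area": r.area,
            "centroid": r.centroid,
            "eccentricity": r.eccentricity,
            "major_axis": r.major_axis_length,
            "minor_axis": r.minor_axis_length,
            "eq_diameter": r.equivalent_diameter,  # sqrt(4*Area/pi)
            "perimeter": r.perimeter,
        })
    return rows

# Usage (hypothetical file name):
# from skimage import io
# rows = segment_particles(io.imread("powder_area1.png"))
```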
Principal Component Analysis (PCA)
Principal component analysis (PCA) is a powerful and versatile method capable of providing an overview of complex multivariate data. It is widely adopted to treat different kinds of data [24][25][26]. PCA can be used to reveal relations between variables and samples (i.e., clustering), detecting outliers, finding and quantifying patterns, generating new hypotheses, etc. PCA is used to decompose the data into several principal components (PCs), linear combinations of the original data, embedding the variations of each collected data set [24]. According to this approach, a reduced set of factors is produced. Such a set can be used for exploration, since it provides an accurate description of the entire dataset. The first few PCs resulting from PCA are generally used to analyse the common features among samples and their grouping: in fact, samples characterised by similar characteristics tend to aggregate in the score plot of the first two or three components [26].
Since the selected variables differ from each other in units and magnitude, the samples were pre-processed through Autoscale. In this paper, PCA was applied to both imaging and colour values.
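A hedged sketch of this step is given below: the Section 2.4 parameters are assembled into a matrix, autoscaled, and decomposed into PCs. The matrix name is an assumption; newer MATLAB releases provide pca, while versions contemporary with 7.11.1 offered the equivalent princomp.

```matlab
% Hedged PCA sketch: X is an (objects x parameters) matrix built from the
% regionprops output above; rows are particles, columns are the variables.
Xa = zscore(X);                          % autoscale: zero mean, unit variance
[coeff, score, latent] = pca(Xa);        % loadings, scores, and variances
explained = 100 * latent / sum(latent);  % percent variance captured per PC

% Example score plot of two selected components (e.g., PC1 vs PC5):
scatter(score(:,1), score(:,5), 10, 'filled');
xlabel('PC1'); ylabel('PC5');
```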
Image Analysis
The images of the three areas acquired under reflected and transmitted light are shown in Figures 2 and 3.
From the microscopic images, it appears that the samples are not homogeneous in particle size and distribution, nor in morphological aspect. After the segmentation process, the obtained parameters, described in Section 2.4 together with their abbreviations, are reported in Table 2. The analysis of the average data shows morphological analogies between samples APB1, APB2, and APB3, with comparable fractal dimensions across the three samples. However, the morphological variability within the examined samples, highlighted by the minimum and maximum values, is high. Samples PSAPPB1 and PSAPPB2 exhibit significant differences, mainly in contour variations. This is stressed by their different fractal dimensions and by a decrease in size highlighted by the axis lengths, perimeter, and average equivalent diameter.
The high variance found in all samples requires the use of multivariate methods for the analysis of variance. The PCA model of APB1, APB2, and APB3 requires six PCs to express a total captured variance equal to 99.35% and shows a complex cluster scenario. In fact, the score clusters of the three classes are not sharply separated by a single PC, except for the 'APB3' class. In more detail, the PC1-PC5 score plot (Figure 4A) shows that pixels belonging to the 'APB1' and 'APB2' classes occur in different regions of the plot with respect to APB3, and they are not separated from each other. In addition, APB1 and APB2 are clustered in two different portions of the score plot, probably due to the greater variability of the particles in terms of grain size and morphology. By analysing the loadings of the selected parameters, it is possible to highlight how PC5, which mostly influences the variance detected between APB3 and the other two samples (APB1 and APB2), is mainly driven by the two fractal parameters, while morphologically the samples are similar to each other, as shown by PC1 (Figure 4B). The variation found in the APB1 and APB2 samples is given by the presence of areas with different circularity, as shown by PC1. This result is in agreement with previously published studies on artificial yellow pigments that showed differences in composition and colour between the three powders produced according to the recipe by Valerio Mariani from Pesaro (1620) [18].
APB1 and APB2, in fact, were found to be inhomogeneous powders with yellow and brown grains, especially APB1, whose composition was not exactly characterised even by X-ray diffraction (XRD) analysis, as discussed in [18] and in the Supplementary Materials included in the present paper.
It has been supposed that different compounds were produced also containing Na and Cl in the crystalline lattice of lead antimonate [18]. On the other hand, sample APB3 was homogeneous in colour and composition, in accordance with the results of particle analysis.
The PCA model of PSAPPB1 and PSAPPB2 requires seven PCs to express a total captured variance equal to 99.89%. The score clusters of the two classes are well separated by a single PC. In detail, the PC1-PC5 score plot (Figure 5A) shows that pixels belonging to the 'PSAPPB1' and 'PSAPPB2' classes occur in different regions of the plot. In addition, PSAPPB2 pixels are clustered in two different portions of the score plot, probably due to the presence of particles with different grain sizes and morphologies. By analysing the loadings plot, it can be seen that PC5, which mostly influences the variance between PSAPPB1 and PSAPPB2, is mainly determined by the two fractal parameters (Figure 5B). On the other hand, the samples are morphologically similar to each other, as shown by PC1. The variability observed within the yellow PSAPPB2 is caused by areas of different circularity, as suggested by the loading values of PC1.
Colorimetric Analysis
The other important parameter considered in the present paper is the colour of the produced powders. The hue of painting pigments is highly relevant in the choice of materials by artists, who probably knew the production modalities and the different kinds of available yellows [19].
The average values of the chromatic coordinates, with their relative standard deviations, are reported in Table 3. Comparing the values of the chromatic coordinates for the three yellows prepared according to the recipe of the treatise by Valerio Mariani from Pesaro, i.e., APB1, APB2, and APB3, we observe a clear difference of APB3 with respect to APB1 and APB2, especially concerning the a* coordinate, which is higher in APB3, indicating a reddish hue of the pigment. APB3 is also darker than APB1 and APB2 and more yellow, with the b* coordinate having a higher value.
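One simple way to quantify such differences between pairs of samples is the CIE76 colour distance computed from the Table 3 coordinates. The sketch below is illustrative only: the numerical values are placeholders, not the measured data.

```matlab
% CIE76 colour difference between two pigments from their L*, a*, b* means.
deltaE = @(lab1, lab2) sqrt(sum((lab1 - lab2).^2));

% Placeholder coordinates [L* a* b*]; substitute the Table 3 values.
labA = [70 12 58];
labB = [66 12 58];
dE = deltaE(labA, labB);   % here 4.0: a pure lightness difference of 4 points
```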
The other two yellows (PSAPPB1 and PSAPPB2) exhibit very similar values of the b* coordinate, representing the yellow component, and also a similar value of a*. The most consistent difference between the two yellows is given by the L* parameter: the lower value, of about 4 points, indicates that the PSAPPB2 sample is darker than PSAPPB1. Therefore, the annealing process does not seem to change the value of the chromatic coordinates but only causes a moderate darkening of the pigment. Furthermore, the reflectance spectra of samples APB1, APB2, and APB3, as collected and after pre-processing (Figure 6A,B), have been compared to highlight the variations in the different spectral regions. The spectra of APB1, APB2, and APB3 (Figure 6A) show variations at 500 and 600 nm, whereas those of PSAPPB1 and PSAPPB2 exhibit an increasing trend from 400 to 650 nm, with a slight variation after 650 nm. In order to maximise the spectral differences, MSC (Median) pre-processing and a 1st Derivative were applied, with the aim of removing the light-scattering effects on the pigment surface and of emphasising the variations of the spectral signatures, respectively. Finally, PCA was applied to the pre-processed spectra (Figure 7).
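A minimal sketch of this pre-processing chain follows, assuming S is a (samples x wavelengths) reflectance matrix. Note one simplification: the study used a Savitzky-Golay-style derivative (order 2, window 5 pt), whereas plain diff is used here as a shorter stand-in.

```matlab
% MSC with a median reference spectrum ("MSC (Median)").
ref  = median(S, 1);                       % reference spectrum
Smsc = zeros(size(S));
for i = 1:size(S, 1)
    p = polyfit(ref, S(i,:), 1);           % regress each spectrum on the reference
    Smsc(i,:) = (S(i,:) - p(2)) / p(1);    % remove additive/multiplicative scatter
end

D1 = diff(Smsc, 1, 2);                     % first derivative along wavelength
Dc = D1 - mean(D1, 1);                     % mean centring

[coeff, score] = pca(Dc, 'Centered', false);  % PCA on the pre-processed spectra
```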
The scores plot of the PCA (Figure 7A) shows five clusters corresponding to the five yellow pigments based on lead and antimony. In more detail, spectra belonging to the 'APB3' class can be easily distinguished from those of the other classes, being clustered in the fourth quadrant of PC1-PC2. In addition, the PCA score plot shows that pixels belonging to the 'APB1' and 'APB2' classes are mainly concentrated in the first quadrant, corresponding to positive values of PC1 and PC2, whereas pixels belonging to the 'PSAPPB1' and 'PSAPPB2' classes occur in different regions of the plot, mainly in the second and third quadrants, corresponding to negative values of PC1, and they are very close to each other. Moreover, the loading plot associated with the PCA (Figure 7B) highlights how the variance of positive PC1 is mainly related to the wavelength around 550 nm, whereas negative PC1 is influenced by the spectral regions around 450 and 600 nm. The positive variance of PC2 is influenced by the wavelength around 500 nm, whereas the negative variance of PC2 is influenced by the spectral region around 550 nm.
Figure 6. Reflectance spectra of artificial Pb/Sb yellows (A) and pre-processed spectra (B) through MSC (Median), 1st Derivative (order: 2, window: 5 pt, including only tails: weighted), Mean Centre.
Comparison of Colour and Particle Analysis
Comparing the results of the particle and colour analyses with the previously published data on lead- and antimony-based yellow pigments, good agreement can be assessed. PSAPPB1 and PSAPPB2, in fact, were found to consist of inhomogeneous powders with different stoichiometric ratios of Pb and Sb in the areas examined under SEM-EDS [16]. These two yellows include two main lead antimonates, i.e., Pb2Sb2O7 and PbSb2O6, the latter being rosiaite, often found in the synthesis of Naples yellow [14,27], as well as other, not well-characterised compounds containing K. K has been detected through SEM-EDS analysis in all examined points [14].
The results of the colour measurements on PSAPPB1 and PSAPPB2 show little variation between the two samples compared to the variations detected by particle analysis, demonstrating that the annealing process carried out on PSAPPB1 to obtain PSAPPB2 decreases the uniformity of the powder in terms of particle size, without losing the acquired colorimetric features, albeit with a small decrease in brightness. This result is interesting from a technological point of view because it suggests that a second firing was probably not necessary, as it decreases the particle homogeneity without significantly changing the colour characteristics.
The particle and colour analysis of the APB1 and APB2 samples confirms that applying slightly different temperatures (900 °C for APB1 and 950 °C for APB2) does not produce significant differences between the two samples. On the other hand, the APB3 sample, fired at a temperature of 1050 °C, has completely different characteristics with respect to APB1 and APB2, both in terms of particle and colour parameters. Specifically, the colour data of the APB1, APB2, and APB3 samples show that APB1 and APB2 have similar colour features compared to APB3; moreover, the first two yellows exhibit particle heterogeneity with respect to APB3, confirming the results of previous findings on chemical composition [16]. In fact, APB1 and APB2 are characterised by compositional heterogeneity, while APB3 is a homogeneous dark yellow powder with a reddish hue, as shown by the significantly higher value of the a* coordinate.
Conclusions
A new approach has been employed in this study to investigate powder samples of artificial yellow pigments, based on lead and antimony as main elements, widely used since ancient times for ceramics, glasses, and paintings. These pigments are very important in the study of artworks because their composition is linked to ancient recipes that could allow us to infer the geographical areas of their provenance or the potential circulation of materials and techniques, as recently demonstrated by Montanari et al. [12,13].
Yellow pigments based on Pb and Sb were produced following different recipes with the addition of salt (NaCl) and/or K-tartrate; however, temperatures, times, and crucible types were not specified in the recipes, so different experimental tests were performed, producing pigments of similar colour and appearance but of different composition. Moreover, in the case of Naples yellow, different compounds were obtained in the firing process, independently of the starting reagents. This results in inhomogeneous compositions of Naples yellow prepared according to ancient recipes.
The analysis proposed in the present paper, being non-invasive and rapid, proved useful for the examination of pigment powders, giving information about morphology, distribution, and homogeneity. In detail, the optical and colorimetric characteristics of the yellow pigments are correlated with the particle sizes investigated by image analysis combined with a multivariate approach.
The results obtained on the yellow powders are interesting in terms of pigment particle homogeneity and colour, demonstrating that pigments having uniform colour are not always characterised by uniform particle parameters. In general, the pigments produced according to the ancient recipes are not homogeneous in particle characteristics (i.e., size and shape) or in composition, even if the colour is homogeneous. Particle analysis demonstrated the importance of this approach for evaluating the analogies and differences between the yellows in terms of morphologic and morphometric parameters, in particular dimension (i.e., area, circularity, perimeter, equivalent diameter) and fractal dimension.
In summary, the proposed approach makes it possible to obtain a great quantity of information on pigments non-destructively, allowing subsequent complementary analyses on the same samples that could complete the information extractable from the pigment powders and their production processes.
Upregulated TRPC3 and Downregulated TRPC1 Channel Expression during Hypertension is Associated with Increased Vascular Contractility in Rat
Transient receptor potential (TRP) C1 and C3 (TRPC1 and TRPC3) are expressed in vascular smooth muscle cells and are thought to be involved in vascular contractility. In the present study, we determined the effect of systemic hypertension on TRPC1/TRPC3 channel expression and vascular contractility in rat carotid artery (CA). CA were studied from male spontaneously hypertensive rats (SHR), Wistar-Kyoto (WKY), and Long Evans (LE) rats. TRPC1/3 expression was determined by RT-PCR and Western blot. TRP channel function was evaluated by whole-cell patch clamp, using UTP (60 μM) to stimulate TRPC1/3 channels. Contractions of endothelium-denuded CA segments to UTP (1–300 μM) and phenylephrine (Phe; 0.1 nM–10 μM) were measured in an isometric tension bath. TRPC1 and TRPC3 mRNA was present in CA of both WKY and SHR. Western blot demonstrated 3.1 ± 1.2 times greater TRPC3 expression and 0.5 ± 0.2 times TRPC1 in SHR versus WKY CA. Isolated CA showed potentiated contraction to UTP in the SHR versus WKY. Activation of voltage-dependent Ca2+ channels (VDCC) in UTP-mediated constriction only occurred in SHR CA. Contraction to Phe was unaltered between WKY and SHR CA and involved equal significant VDCC activation in both groups. Patch clamp demonstrated that the UTP-stimulated current (Iutp) was greater in SHR compared to the normotensive WKY and LE rats with peak Iutp (at −110 mV) of −63 ± 24 pA compared to −25 ± 4 pA, respectively. We demonstrate that UTP-mediated but not Phe-mediated constrictions are potentiated in the CA during hypertension. Expression of TRPC1 is decreased whereas TRPC3 is increased in SHR CA. Interestingly, VDCC activation only contributes to UTP-mediated contraction of SHR CAs whereas it contributes substantially and equally in Phe-mediated contraction. We speculate that the alteration of TRPC channel expression in hypertension leads to greater smooth muscle depolarization, VDCC activation, and vascular contractility in the UTP (but not Phe) signaling pathway.
INTRODUCTION
Hypertension is associated with profound alteration in smooth muscle cell calcium homeostasis and proliferation (Sugiyama et al., 1990;Bendhack et al., 1992;Touyz and Schiffrin, 1997). Smooth muscle cell contractility and proliferation are largely dependent on increased intracellular calcium concentration, which is brought about by voltage-dependent Ca 2+ channels (VDCC) and non-voltage-gated calcium entry pathways (Nelson et al., 1990;McDonald et al., 1994;Kuriyama et al., 1998;Hofmann et al., 1999;Large, 2002;Inoue et al., 2004;Taylor and Moneer, 2004). The bulk of calcium influx is generally thought to be mediated by VDCC. However, canonical transient receptor potential (TRPC) channels have also been shown to be important signal transducers for agonist-mediated vascular contractility. The TRPC channels comprise a seven-member family (TRPC1-7) of nonselective cation channels with varying Na + and Ca 2+ permeability (Clapham et al., 2005;Ramsey et al., 2006). They are activated by a variety of vasoconstrictors such as ATP, UTP, endothelin, and Angiotensin II (Hurst et al., 1998;Reading et al., 2005;Peppiatt-Wildman et al., 2007;Alvarez et al., 2008;Abramowitz and Birnbaumer, 2009). In addition to a potential direct contribution to calcium influx, the TRPC channels are thought to promote calcium entry by providing the depolarizing stimulus (Na + and Ca 2+ current) for VDCC activation and subsequent smooth muscle contraction (Welsh et al., 2002;Gudermann et al., 2004).
TRPC3 expression has recently been found to be upregulated in monocytes from spontaneously hypertensive rats (SHR; Liu et al., 2005a) and patients with essential hypertension (Liu et al., 2006). This upregulation was associated with increased calcium influx into these cells. More recently, Liu et al. (2009) showed increased TRPC3 channel expression and function in intact aorta and cultured aortic smooth muscle cells from SHR using Angiotensin II-mediated calcium influx and aortic contraction.
TRPC1 is associated with VSMC proliferation after vascular injury (Kumar et al., 2006) and its expression is upregulated in hypoxia-induced pulmonary hypertension (Lin et al., 2004) and in cardiac hypertrophy (Ohba et al., 2007). However, the role of this channel in vascular contractility is still not resolved, with some studies indicating a role for TRPC1 in smooth muscle contraction and others not (Kunichika et al., 2004;Bergdahl et al., 2005;Dietrich et al., 2007;Wolfle et al., 2010). TRPC1 is known to form heterotetramers with several TRPC channel members, including TRPC3 (Liu et al., 2005b;Zagranichnaya et al., 2005), thus making the study of TRPC1 expression in hypertension a logical parallel.
Given the emerging pattern of upregulation of TRPC1 and TRPC3 expression in other arteries in systemic and pulmonary hypertension, we sought to determine if these channels were similarly upregulated in carotid artery (CA) of hypertensive rats. Additionally, we sought to determine the effect of hypertension on TRPC channel function at the cellular level and vascular contractility at the whole artery level.
MATERIALS AND METHODS
Male SHR and Wistar-Kyoto (WKY) rats with an average age of 13 ± 3 weeks from Charles River Laboratories were used for the study. SHR weighed 276 ± 36 g and WKY 281 ± 36 g. Blood pressure was measured using tail-cuff plethysmography. Average systolic blood pressure for the SHR was 157 ± 18 mmHg while for WKY was 102 ± 12 mmHg. In some experiments agematched Long Evans (LE) rats were used as additional normotensive controls. Experiments were carried out in accordance with the National Institutes of Health guidelines for the care and use of laboratory animals and were approved by the Animal Protocol Review Committee at Baylor College of Medicine. The rats were anesthetized with Isoflurane followed by exsanguination prior to harvesting the carotid arteries.
RNA EXTRACTION AND RT-PCR ANALYSIS
RNA was extracted from liquid nitrogen frozen carotid arteries using TRIzol Reagent with the PureLink Micro Kit (Invitrogen, Carlsbad, CA, USA). Arteries were ground with a pestle and subjected to a Tissue Tearor for 45 s prior to addition of TRIzol. The rest of the protocol was then carried out according to the instructions for the PureLink kit. RNA was quantified using a UV spectrophotometer at the 260- and 280-nm wavelengths to assess purity and concentration. RNA was then treated with DNAse I (Invitrogen, Carlsbad, CA, USA) and then reverse transcribed to cDNA using the Super Script III First Strand Synthesis System (Invitrogen, Carlsbad, CA, USA). RNase H was added and incubated for 20 min at 37˚C to degrade any remaining RNA. Control samples (RT-) were prepared by substitution of dH 2 O for reverse transcriptase to control for genomic contamination. PCR amplification was performed using Platinum Taq DNA polymerase (Invitrogen, Carlsbad, CA, USA) according to the following protocol: 94˚C activation (2 min) followed by 97˚C dissociation (15 s), 56˚C anneal (30 s), and 72˚C elongation (30 s) for 33 cycles. Primer pairs were as follows: TRPC1 forward: 5′-gtgcttgcggcttgagat-3′, TRPC1 reverse: 5′-tgccatagctggggaac-3′; TRPC3 forward: 5′-ctggccaacatagagaaggagt-3′, TRPC3 reverse: 5′-caccgattccagatctccat-3′. PCR products were analyzed by electrophoresis and digitally imaged for analysis. The predicted size of the PCR amplicons was 114 and 141 bp for TRPC1 and TRPC3, respectively.
WESTERN BLOT ANALYSIS
Two and a half millimeter sections of CA were harvested, cleaned of connective tissue and blood, and frozen in liquid nitrogen. The segments were pulverized in a matching bullet tube and pestle, homogenized in RIPA buffer (Sigma, St. Louis, MO, USA) containing 50 mM Tris·HCl, pH 8.0, with 150 mM sodium chloride, 1.0% Nonidet P-40, 0.5% sodium deoxycholate, and 0.1% sodium dodecyl sulfate with protease inhibitor cocktail (Roche, Indianapolis, IN, USA) and incubated on ice for 15 min. After centrifugation (15 min at 15,000 g ), a portion of the supernatant was used for total protein quantification by DC Protein Assay (Bio-Rad, Hercules, CA, USA). Equal amounts of sample protein were mixed with LDS sample buffer and sample reducing agent (Invitrogen, Carlsbad, CA, USA), and heated at 70˚C for 10 min. Equal total protein samples were loaded into each lane of the gel (10% NuPAGE Bis-Tris gel). The gel was run at room temperature at 150 V for 90 min and then transferred to a nitrocellulose membrane by iBlot Dry Blotting System (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. The membrane was blocked in PBS supplemented with 5% non-fat dry milk and 0.2% Tween 20 (block solution) for 3 h, followed by incubation with anti-TRPC3 antibody (1:800) or anti-TRPC1 antibody (1:400; all from Sigma, St. Louis, MO, USA). The membrane was washed in PBS with 0.2% Tween 20 (PBS-T, 2 × 5 min then 3 × 10 min) and then incubated in a goat anti-rabbit secondary antibody conjugated to horseradish peroxidase (1:10,000 diluted in block solution, Thermo Fisher, Rockford, IL, USA) for 1 h. The lower portion of the membrane was probed separately with anti-GAPDH antibody (1:200,000) at 4˚C overnight as a loading control. The GAPDH antibody used goat anti-mouse HRP (1:50,000 diluted in block solution) as the secondary antibody. The membrane was washed with PBS-T (as above) and then layered with Pierce SuperSignal West Femto Maximum Sensitivity Substrate (4 min), followed by exposure to film. Densitometric analysis was performed with ImageJ (NIH).
ISOLATION OF RAT CA SMC
Right and left CA were harvested, cleaned of connective tissue and blood, and cut into two segments which were incubated for 10 min in cold (4˚C) digestion buffer, containing 140 mM NaCl, 5 mM KCl, 2 mM MgCl 2 , 10 mM glucose, 10 mM HEPES, and 1 mg/ml Bovine Serum Albumin (Sigma, A9647), with pH adjusted to 7.4 with NaOH. Arteries were then digested with 1.86 mg/ml papain (Sigma, P4762), 1 mg/ml 1,4-dithioerythritol (Sigma, D8255) and 1 mg/ml BSA in digestion buffer for 30 min at 37˚C. The tissue was next washed with digestion buffer twice and further digested with 1.5 mg/ml collagenase H (Sigma, C8051), 1 mg/ml hyaluronidase (Sigma, H3506), and 1 mg/ml BSA in digestion buffer containing 50 μM CaCl 2 for 10-20 min at 37˚C. The tissue was washed several times with digestion buffer and triturated with a fire-polished Pasteur pipette. The cells were placed on ice and used within 8 h.
PATCH CLAMP ELECTROPHYSIOLOGY
The whole-cell patch clamp configuration was used for measurements of whole-cell currents in freshly isolated SMC using a MultiClamp 700B amplifier (Axon Instruments) and pCLAMP 10 software (Axon Instruments, Union City, CA, USA), as reported previously . Patch electrodes were pulled from borosilicate glass (1.65 outer diameter, 1.28 inner diameter; Warner Instruments, Hamden, CT, USA) and polished to a pipette resistance of 2 MΩ. The pipette buffer contained (in mM) 156 CsCl, 1 MgCl 2 , 0.91 CaCl 2 , 2 EGTA, 7.85 CsOH, and 10 HEPES; pH was adjusted to 7.2 with CsOH. The calculated free Ca 2+ concentration was 131 nM. The bath buffer contained (mM) 156 NaCl, 1.8 CaCl 2 , 10 glucose, and 10 HEPES; pH was adjusted to 7.4 with NaOH. The isolated cells were placed in a recording chamber on the stage of an inverted microscope and continually superfused with bath buffer. Whole-cell currents were recorded in the voltageclamp mode over the voltage range of −110 to +80 mV (sweep rate of 0.131 mV/ms; −50 mV holding potential). Whole-cell ionic currents were measured in the absence or presence of 60 μM UTP. Lanthanum chloride (100 μM) was added 5 min after UTP stimulation. Data were digitized and filtered at 1 kHz. All recordings were performed at room temperature (20-22˚C).
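For orientation, the voltage-command ramp described above can be reconstructed as follows. Sampling at one point per millisecond is an assumption made only to show the resulting sweep duration of roughly 1.45 s.

```matlab
% Reconstruction of the voltage-clamp ramp: -110 to +80 mV at 0.131 mV/ms.
rate = 0.131;               % mV per ms (sweep rate given in the protocol)
V    = -110:rate:80;        % command potential, one sample per millisecond
t_ms = 0:numel(V)-1;        % elapsed time; the full ramp lasts ~1450 ms
plot(t_ms, V); xlabel('time (ms)'); ylabel('command potential (mV)');
```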
ISOMETRIC TENSION RECORDINGS IN ISOLATED CAROTID ARTERIES
Isolated CAs from SHR and WKY rats were placed in ice-cold physiological solution containing the following (in mM): 137 NaCl, 5.6 KCl, 1 MgCl 2 , 10 glucose, 2.5 CaCl 2 , and 10 HEPES (pH 7.4). Each artery was subsequently cut into four ring segments that were 2.5 mm in length. The rings were denuded of the endothelium by placing curved forceps in the lumen and gently rolling the artery along a length of submerged paper towel. Arteries were then mounted in an eight-channel artery myograph (ChuelTech, Houston, TX, USA) containing warm (37˚C), gassed (75% N 2 , 20% O 2 , 5% CO 2 ) physiological saline solution containing (in mM) 119 NaCl, 4.7 KCl, 25 NaHCO 3 , 1.2 KH 2 PO 4 , 1 MgSO 4 , 2.5 CaCl 2 , and 11.1 glucose (pH 7.4). In experiments using Ca 2+ -free physiological saline solution, CaCl 2 was omitted and 0.1 mM EGTA was added. Each ring was stretched to a resting tension of 1.0 g based on preliminary length-tension studies. Rings were then subjected to four 40 mM KCl exposures, followed by one 10 −5 M carbachol exposure to confirm the absence of endothelium. Denuded vessels that demonstrated partial relaxation to carbachol were excluded from further study. Concentration-response curves (CRC) were performed to UTP (1 × 10 −6 to 3 × 10 −4 M) or phenylephrine (1 × 10 −10 to 1 × 10 −5 M) in half-log steps in the presence and absence of verapamil (1 × 10 −5 M). Single-concentration responses to UTP (100 μM) were also performed in some protocols. Data were digitized and analyzed with Powerlab/8SP with Chart version 4.24.
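Concentration-response data of this kind are typically summarised by fitting a four-parameter logistic (Hill) curve to extract a potency estimate. The sketch below shows one way to do so in MATLAB; the tension values are entirely hypothetical and only the concentration series follows the protocol above.

```matlab
% Hedged CRC-fitting sketch (Statistics Toolbox). Tension values are invented.
conc = [1e-6 3e-6 1e-5 3e-5 1e-4 3e-4];          % UTP, half-log steps (M)
T    = [0.05 0.12 0.35 0.90 1.60 1.95];          % hypothetical tension (g)

% Four-parameter logistic: bottom, top, EC50, Hill slope.
hill = @(b, x) b(1) + (b(2) - b(1)) ./ (1 + (b(3) ./ x).^b(4));
b0   = [0 2 1e-5 1];                             % rough starting guesses
bhat = nlinfit(conc, T, hill, b0);               % least-squares fit
EC50 = bhat(3);                                  % potency estimate
```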
STATISTICAL METHODS
Data are presented as mean ± SEM. CRC were evaluated by two-way RM-ANOVA with post hoc Holm-Sidak test for individual comparisons. Repeated single-concentration UTP responses were compared by paired t-test within the same group and by t-test between groups. Densitometry data were first normalized to the respective GAPDH signal, and the SHR value was then divided by the WKY value. The resulting values were then evaluated between groups by Mann-Whitney Rank Sum test (TRPC3) or t-test (TRPC1) based on the outcome of the normality test. Patch clamp data were compared by Mann-Whitney Rank Sum test.
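A hedged sketch of the densitometry normalisation and group comparison is given below. The vector names are assumptions, and the if/else normality screen is a simplification of the test-selection procedure described above.

```matlab
% Band intensities from ImageJ for n blots per group (hypothetical vectors).
shr_norm = trpc3_shr ./ gapdh_shr;     % normalise each lane to its GAPDH
wky_norm = trpc3_wky ./ gapdh_wky;
fold     = mean(shr_norm) / mean(wky_norm);   % SHR expression relative to WKY

% Choose the comparison from a normality screen (simplified stand-in).
if lillietest(shr_norm) || lillietest(wky_norm)
    p = ranksum(shr_norm, wky_norm);          % Mann-Whitney rank-sum test
else
    [~, p] = ttest2(shr_norm, wky_norm);      % unpaired t-test
end
```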
DETERMINATION OF TRPC1/3 EXPRESSION IN WKY AND SHR CA
Spontaneously hypertensive rats and WKY CA both express mRNA for TRPC1 and TRPC3 (Figure 1A). PCR was run with and without reverse transcriptase to control for genomic DNA contamination. Protein expression for TRPC1 and TRPC3 was detected by Western blot in SHR and WKY CA (Figure 1B). Expression of TRPC3 appeared clearly increased in SHR arteries while TRPC1 appeared slightly decreased. Densitometric analysis of the blots ( Figure 1C) revealed that TRPC3 was increased by 3.1 ± 1.2 times (P = 0.002; n = 6 each) in SHR compared to WKY, whereas TRPC1 expression was slightly decreased to 0.5 ± 0.2 times WKY expression (P = 0.014; n = 7 each). We also examined TRPC3 expression in cerebral arteries in SHR and WKY (Figure A1 in Appendix). TRPC3 expression was similarly upregulated in both the middle cerebral and basilar arteries.
Isometric tension bath studies
UTP has been shown to promote vasoconstriction of pressurized cerebral arteries through activation of TRPC3 channels (Reading et al., 2005). Given the upregulation of TRPC3 in the SHR arteries we examined the effect of UTP stimulation on tension development.
Carotid arteries were denuded of the endothelium in order to focus on the effect of UTP on smooth muscle contraction. Removal of the endothelium was functionally confirmed by the absence of relaxation to 10 μM carbachol in KCl-contracted arteries. UTP (1-300 μM) CRC were obtained in the presence and absence of verapamil (10 μM) to determine the contribution of VDCC in the UTP-mediated vasocontraction. Absence of vasocontraction to KCl in the presence of verapamil confirmed the effectiveness of the VDCC blocker ( Figure A2 in Appendix). We compared UTP CRC between SHR and WKY with and without verapamil (Figure 2A). In the absence of verapamil, UTP CRC were significantly potentiated in SHR compared to WKY (P < 0.05, n = 7-12). These data demonstrated a notable increase in vascular contractility to UTP in the hypertensive group. In the presence of verapamil, CA contractile response to UTP was significantly decreased in SHR (P < 0.05, n = 12-13) to a level comparable to WKY. Verapamil did not affect the contractile response to UTP in the WKY group (P = NS, n = 6-7). These data demonstrate that VDCC contribute to UTP-mediated vasocontraction in the SHR CA but not those of WKY.
In order to determine if the greater contractility was specific to the UTP signaling pathway, we also performed CRC to phenylephrine (Phe). Constrictions to Phe (1 × 10 −10 to 1 × 10 −5 M) in endothelium-denuded arteries were similarly compared in the absence and presence of verapamil (Figure 2B). In contrast to UTP, addition of verapamil significantly attenuated the contractile response to Phe in both WKY and SHR groups, and the responses did not differ between SHR and WKY (P = NS, n = 5-11). Similar to the contractile response to UTP in the presence of verapamil, there was no difference in Phe-induced contraction between SHR and WKY in the presence of verapamil. Together, these data demonstrate a similar dependence on VDCC activation between SHR and WKY CA in the Phe signaling pathway.

FIGURE 1 | Expression of TRPC1 and TRPC3 mRNA and protein in WKY and SHR carotid artery (CA). (A) RT-PCR results comparing expression of TRPC1 and TRPC3 in rat CA of WKY and SHR. A 100-bp ladder is shown to the left. (B) Representative Western blots demonstrating TRPC1 and TRPC3 expression (upper panels) in rat CA of WKY and SHR. The lower portion of the membrane was probed with GAPDH (lower panels) for normalization. Mass standards for 110, 80, and 40 kDa are indicated. (C) Densitometric analysis of the band intensity of SHR compared to WKY. Data are presented as mean ± SEM for TRPC3 (n = 6 each) and TRPC1 (n = 7 each); * = P < 0.05 compared to WKY.

FIGURE 2 | Concentration-response curves (CRC) to UTP (A) and Phe (B) for WKY (n = 5-7) and SHR endothelium-denuded CA (n = 11-13). Responses are presented for control (solid lines) and verapamil (dashed lines) for WKY and SHR. Data are presented as mean ± SEM; * = P < 0.05 by RM-ANOVA between WKY and SHR groups.

We next examined the dependence of the UTP-mediated constriction on extracellular Ca 2+ . In this protocol, we administered UTP twice with a washout in between. In the Ca 2+ -free group, we replaced the Ca 2+ -containing buffer with Ca 2+ -free buffer containing 0.1 mM EGTA just prior to the second UTP administration. In Ca 2+ -free conditions, UTP produced a small transient constriction (Figure 3). There was no difference in the response to UTP in the absence of extracellular Ca 2+ between the SHR and WKY groups. These data demonstrate that the constriction to UTP is substantially dependent on calcium influx in both groups.

FIGURE 3 | Responses to repeated UTP administration in the absence of extracellular Ca 2+ for SHR (red) and WKY (black); * = P < 0.05 compared to control response with paired t-test.
PATCH CLAMP ELECTROPHYSIOLOGY
TRP channel function was evaluated in freshly isolated CASMC by the whole-cell patch clamp method. Whole-cell currents were recorded in the voltage-clamp mode over the voltage range of −110 to +80 mV (Figure 4). UTP activated non-selective cation currents (I UTP ), which we previously demonstrated to be inhibited by 100 μM lanthanum chloride (a non-selective TRPC channel blocker) and by intracellular application of TRPC3 and TRPC1 antibodies. I UTP was primarily inward and reversed around 0 mV. These currents were significantly greater in SHR compared to WKY and LE rats. The mean peak inward I UTP in the SHR CASMC at −110 mV was −63 ± 24 pA (n = 7) compared to −25 ± 4 pA in the pooled WKY and LE CASMC (P = 0.011, Mann-Whitney Rank Sum test). The LE (n = 15) and WKY (n = 5) responses were similar (−23 versus −32 pA) and were thus pooled for comparison with the SHR. Median peak inward I UTP for each group was −23, −18.5, and −48 pA for LE, WKY, and SHR, respectively.
DISCUSSION
We present the following novel findings regarding the expression and function of TRPC1 and TRPC3 in CA of hypertensive rats: (1) TRPC3 protein expression is increased and TRPC1 protein is decreased in SHR CA; (2) SHR CA demonstrates greater contractility to UTP but not Phe; (3) VDCC contribute to UTP-mediated contraction of SHR CA but not WKY CA; (4) UTP stimulates a TRPC-like current that is significantly greater in SHR CASMC. These findings are discussed in greater detail below.
EXPRESSION OF TRPC1/3 IN HYPERTENSION
We examined the expression of TRPC1 and TRPC3 in rat CA in hypertensive SHR and normotensive WKY rats. TRPC3 protein expression was found to be significantly upregulated in SHR CA compared to WKY. These findings are consistent with what Liu et al. (2009) recently showed in SHR aorta and what Chen et al. (2010) found in SHR mesenteric artery. Together, these data suggest widespread TRPC3 upregulation in multiple vascular beds during hypertension. TRPC1 has previously been demonstrated to assemble with TRPC3 endogenously and in expression systems (Lintschinger et al., 2000;Liu et al., 2005b;Zagranichnaya et al., 2005) and most recently endogenously in rat CA . Interestingly, TRPC1 protein in CA did not demonstrate a parallel increase in expression. Instead, TRPC1 expression was actually slightly decreased in SHR compared to WKY. This is in contrast to the study by Chen et al. (2010) that showed increased expression of both TRPC1 and TRPC3 in the mesenteric arterioles of the SHR.
TRPC CHANNEL FUNCTION AND VASCULAR CONTRACTILITY IN HYPERTENSION
The major role of TRPC channels in smooth muscle contraction is often ascribed to activation of VDCCs through cell depolarization (Gudermann et al., 2004). In addition, L-type Ca 2+ channels have been shown to be upregulated in mesenteric, skeletal muscle, and renal arteries in SHR (Pratt et al., 2002;Pesic et al., 2004). Given the apparent role of VDCCs in the mechanism of TRPC channel activation and the reported upregulation of VDCCs in other arteries in hypertension, we sought to determine the relative role of VDCCs in the CA of SHR and WKY as well. The role of VDCCs in the UTP- and Phe-stimulated contraction was evaluated with verapamil, using a concentration that completely inhibited KCl-mediated contractions. CRCs to UTP demonstrated that contractions of SHR (but not WKY) CAs were attenuated by verapamil. The residual verapamil-insensitive contraction was similar between SHR and WKY CAs.
There are two major points to make based on our findings with verapamil. First, VDCCs only contribute significantly to UTP-mediated contraction in CA of hypertensive rats, whereas they contribute equally in Phe-mediated contraction. In CA of normotensive rats, the contraction to UTP is essentially independent of VDCC activation, though it is still heavily dependent on Ca 2+ influx. This finding is consistent with the idea that a significant portion of Ca 2+ influx, particularly in larger arteries, can be independent of VDCC activation. From our study, we know that VDCC are present and capable of mediating significant contraction since Phe- and KCl-mediated contractions were significantly or completely blocked by verapamil. We also know that the UTP-mediated constriction requires Ca 2+ influx, since the acute removal of Ca 2+ resulted in substantial inhibition. Together, these data demonstrate that UTP stimulates Ca 2+ influx that is independent of VDCC activation in WKY CA but becomes partially VDCC-dependent in SHR CA. In contrast to the UTP mechanism, the Phe signaling pathway appears to be similar between WKY and SHR and equally dependent on VDCC activation.
The second major point addresses the difference in verapamil sensitivity between SHR and WKY CA. The contribution of VDCCs is significantly increased in the SHR but not WKY CA. Increased expression and function of L-type Ca 2+ channels has been reported for other SHR arteries (Pratt et al., 2002;Pesic et al., 2004). We have not measured expression of this VDCC in the CA, but our data argue against the potentiated constrictions being simply explained by increased expression of VDCC. Addition of Phe produced significant VDCC-dependent constriction in both SHR and WKY CA that was similar in magnitude. If the greater constriction to UTP were primarily due to upregulated VDCC in SHR arteries, one would expect the constrictions to Phe to be potentiated as well. Instead, our data favor a model in which coupling of the UTP receptor(s) with VDCC activation is greater in SHR CA. Increased expression of TRPC3 channels linked to UTP receptor activation would theoretically increase the magnitude of SMC depolarization and subsequently the level of VDCC activation. In support of this possibility, the SHR demonstrated greater UTP-stimulated whole-cell current in freshly isolated CASMCs. This current reversed at the same point (∼0 mV) as in the WKY cells and exhibited similar rectification properties. These results indicate that hypertension indeed results in increased function of TRPC-like channels capable of producing a depolarizing current. While we previously demonstrated a role for both TRPC1 and TRPC3 channels in the UTP-stimulated current in normotensive CASMCs, further studies will be needed to confirm the molecular identity of the channel(s) responsible for the increased current in SHR CASMCs.
While TRPC1 channels have been shown to be mediators of store-operated Ca 2+ entry in smooth muscle cells (Xu and Beech, 2001) and possibly contribute to vasoconstriction (Wolfle et al., 2010), these channels have also been described in a mechanism of smooth muscle relaxation. In this latter model, TRPC1 channels form a functional unit with BK Ca channels in which Ca 2+ influx through TRPC1 promotes smooth muscle hyperpolarization through activation of associated BK Ca channels (Kwan et al., 2009). If this model is valid for intact CA, then a decrease in TRPC1 expression would be expected to reduce BK Ca channel activation during agonist stimulation. This reduced BK Ca channel activation would permit greater smooth muscle depolarization and VDCC activation, ultimately leading to greater contraction. While this scenario is consistent with our findings of greater contractility to UTP and a greater involvement of VDCC in SHR CA, considerably more studies would be required to demonstrate any altered role of BK Ca channels in SHR as well as a functional link to TRPC1 in this artery.
In summary, we report that UTP-mediated constrictions are potentiated in the CA during hypertension. Examination of expression levels of TRPC1 and TRPC3 demonstrated that TRPC1 is decreased whereas TRPC3 is increased in SHR CA. In addition, we demonstrate that VDCC activation plays a role in UTP-mediated contraction of hypertensive but not normotensive CAs. We propose that increased TRPC3 channel expression in hypertension leads to greater Ca 2+ and Na + influx, which provides Ca 2+ directly (through the TRPC3 channel) and indirectly (through VDCC activation) for smooth muscle contraction. The decreased expression of TRPC1 was contrary to what we expected. However, given the reported link between TRPC1 and BK Ca channel activation, a possible explanation is that decreased TRPC1 expression leads to reduced BK Ca channel activation in SHR smooth muscle. When taken as a whole, it is clear from these and other recent studies that expression of multiple TRPC channel members is altered in hypertension, with significant functional consequence in the vasculature.
Antibody Therapy for Pediatric Leukemia
Despite increasing cure rates for pediatric leukemia, relapsed disease still carries a poor prognosis with significant morbidity and mortality. Novel targeted therapies are currently being investigated in an attempt to reduce adverse events and improve survival outcomes. Antibody therapies represent a form of targeted therapy that offers a new treatment paradigm. Monoclonal antibodies are active in pediatric acute lymphoblastic leukemia (ALL) and are currently in Phase III trials. Antibody-drug conjugates (ADCs) are the next generation of antibodies where a highly potent cytotoxic agent is bound to an antibody by a linker, resulting in selective targeting of leukemia cells. ADCs are currently being tested in clinical trials for pediatric acute myeloid leukemia and ALL. Bispecific T cell engager (BiTE) antibodies are a construct whereby each antibody contains two binding sites, with one designed to engage the patient’s own immune system and the other to target malignant cells. BiTE antibodies show great promise as a novel and effective therapy for childhood leukemia. This review will outline recent developments in targeted agents for pediatric leukemia including monoclonal antibodies, ADCs, and BiTE antibodies.
INTRODUCTION
Leukemia is the most common pediatric malignancy and is still the most frequent cause of death of all childhood malignancies (1). Despite significant progress in cure rates since the 1970s, relapsed and refractory acute lymphoblastic leukemia (ALL) still results in a high burden of disease (2), with a 5-year survival of ~30%. Moreover, acute and long-term adverse effects of systemic conventional chemotherapy and radiotherapy limit quality of life for survivors (3). Acute myeloid leukemia (AML) is less common than ALL in the pediatric population and carries a poorer prognosis. While 80% of children with newly diagnosed AML will achieve remission, the overall cure rate remains unchanged at 50-60% (4).
Over the past decade, leukemia outcomes have improved as a result of optimizing chemotherapy; tailoring treatment to individual patients, for example, by monitoring for minimal residual disease (MRD); and utilizing more sophisticated hematopoietic stem cell transplantation (HSCT) techniques. However, current conventional cytotoxic drugs have limitations including their narrow therapeutic window. This leads to systemic cytotoxicity, due to non-selective mechanisms of action that affect both normal and neoplastic cells (5,6). Thus, novel therapeutic approaches are needed to overcome these limitations, reduce adverse effects, and improve disease-free and overall survival (OS), especially in patients with relapsed or refractory disease. Targeted therapies that deliver drugs specifically to malignant cells while minimizing exposure to normal tissues represent one therapeutic approach.
Since the role of the immune system in the recognition and elimination of malignant cells has been better understood (7), monoclonal antibodies and antibody-drug conjugates (ADCs) have been explored and developed as potential novel therapies for both hematological and solid tumors. Malignant cells, such as leukemic blasts, express antigens on their surface that can be selectively targeted by monoclonal antibodies. This minimizes generalized side effects and allows directed delivery of highly potent drugs. They have longer circulating half-lives, greater accumulation in tumor cells, and fewer systemic side effects than traditional chemotherapeutic agents (8).
Leukemic cells are particularly well suited to these novel antibody based treatment strategies since their surface antigen markers are well characterized, readily accessible in the circulation and shared almost exclusively with precursor cells in the hematopoietic system, the depletion of which can be transiently tolerated (9).
One of the factors that contributes to the efficacy of antibody-based therapies is the level of expression of the target antigen. Antigen targets are ideally expressed in high concentrations on malignant cell surfaces but not on normal cells, thus enhancing the selectivity and minimizing the systemic side effects of the drug. Rituximab is a naked monoclonal antibody against CD20 that is effective in non-Hodgkin lymphoma (NHL), where 100,000 antigens are present on the surface of each cell (10). However, high-level antigen expression is not a prerequisite for clinical benefit, especially with ADCs. For example, AML cells express ~5000-10,000 copies of CD33 on each cell's surface, which is sufficient to produce sensitivity to gemtuzumab ozogamicin (GO) (11). Antigen expression on non-vital organ or cell populations may be acceptable, such as CD19, CD20, and CD33, which are markers for B cells and myeloid cells. Most patients can temporarily tolerate the elimination of these cells.
The selection of "functional antigens" that are essential for cell survival is advantageous to prevent the malignant cells becoming resistant by down-regulating the specific target antigen. The target antigen should be expressed in all or most patients with the disease, or at least be measurable through flow cytometry to allow selection of patients likely to benefit. It is ideally expressed throughout the disease course (12).
Rapid internalization of the bound antigen-ADC complex is desirable, a process that usually occurs through receptor mediated endocytosis. The catabolic environment within lysosomes then provides ideal conditions for effective drug release (13). In contrast, slow internalization is preferred for naked monoclonal antibodies to allow them to trigger antibody-dependent cellular cytotoxicity (ADCC) or complement-dependent cytotoxicity (CDC) upon binding to the target antigen (14).
Unconjugated monoclonal antibodies are the archetype on which targeted therapies are modeled and rituximab has demonstrated high levels of therapeutic utility within this class of drugs (15). Murine antibodies (derived entirely from mice) or chimeric antibodies (constructed from variable regions derived from a murine source and constant regions from a human source) result in an immune response and the generation of human anti-murine antibodies that can limit their efficacy and lead to resistance. Humanized and entirely human antibodies circumvent this limitation as their constant and variable regions are human-derived and hence less likely to generate antibodies. They have half-lives of days to weeks in the human circulation (15).
Antibody-drug conjugates combine the specificity of monoclonal antibodies with the potency of highly effective cytotoxic drugs that cannot be delivered systemically. After binding to the target antigen, the ADC-antigen complex is internalized and transported to intracellular organelles, where the cytotoxic drug is released and causes cell death.
Bispecific antibodies have been recently developed and use a similar approach, but rather than enhancing the selectivity of a chemotherapeutic compound, they seek to engage the patient's own immune system to target the tumor cell. They typically contain two binding sites -one that targets an antigen on the cancer cell and one that targets immune cells, such as T cells, and thus engages them to attack the malignant cell.
Other conjugated immunotherapies currently under investigation for the treatment of leukemia include pretargeted radioimmunotherapy, which combines antibodies with α, β, or γ-emitting radionuclides (16). These immunoconjugates are beyond the scope of this review.
The current state of clinical development of antibodies for treatment of pediatric leukemia are summarized in Table 1 and discussed in detail below.
UNCONJUGATED MONOCLONAL ANTIBODIES
Unconjugated monoclonal antibodies selectively target antigens expressed on malignant cells and cause cell killing by three main mechanisms: ADCC through the engagement of NK cells, macrophages, and neutrophils; antibody-dependent cellular phagocytosis (ADCP); and CDC (12). They have generally shorter circulating half-lives than ADCs and the risk of antibody development against murine proteins can limit their utility (14). The majority of naked monoclonal antibodies are used in combination with chemotherapy as they have insufficient cytotoxic activity when delivered as monotherapy.
ANTI-CD20 ANTIBODIES
Rituximab, a chimeric anti-CD20 monoclonal antibody, is one of the earliest examples of this class and is now used as standard front-line therapy in adult NHL and second line treatment in other hematologic malignancies (17,18). CD20 expression is found in >95% of B cell lymphomas, particularly in mature cell malignancies such as Burkitt lymphoma, making this an ideal target. Anti-CD20 antibodies have an effect on cell signaling and induce apoptosis through ADCC and CDC mechanisms. Secondary to CD20+ B cell depletion from the blood, marrow, and lymph nodes, rituximab is associated with hypogammaglobulinemia and increased risk of infections, particularly some viral infections including cytomegalovirus and hepatitis B (16).
Rituximab has been investigated in children primarily as a therapy for Burkitt (mature B cell) leukemia/lymphoma. Its activity has been demonstrated as a single agent (19) and the Children's Oncology Group (COG) recently completed a single arm Phase II trial of rituximab in combination with standard chemotherapy in pediatric patients with newly diagnosed mature B cell leukemia and/or lymphoma, with results pending (NCT00057811). An international Phase III randomized trial is currently evaluating the benefit of the addition of rituximab to standard therapy in children with newly diagnosed mature B cell leukemia/lymphoma (NCT01516580).
CD20 is expressed in ~50% of pre-B ALL, with increased expression in childhood ALL observed after induction chemotherapy. The role of anti-CD20 monoclonal antibody therapy in this condition is yet to be defined. Rituximab resistance is emerging as a clinical issue in adult leukemia and lymphoma, leading to the development of newer generation anti-CD20 antibodies including ofatumumab, ocrelizumab, and veltuzumab (20).
Ofatumumab, a fully human anti-CD20 monoclonal antibody, is currently undergoing a Phase II study in combination with the hyper-CVAD regimen (cyclophosphamide, vincristine, adriamycin, and dexamethasone) as first-line treatment for adult patients with CD20 positive ALL. A Phase II trial of ofatumumab in combination with conventional chemotherapy (cyclophosphamide, doxorubicin, vincristine, and prednisone; O-CHOP) in patients with follicular lymphoma resulted in a 90-100% overall response rate (21).
ALEMTUZUMAB
Alemtuzumab (Campath) is a humanized anti-CD52 monoclonal antibody used predominantly for the treatment of refractory chronic lymphocytic leukemia (CLL) in adults and for graft-versus-host disease prevention in pediatric HSCT recipients. CD52 is broadly expressed across all normal T and B lymphocytes, except plasma cells, as well as on some myeloid cells. Owing to this broad expression of the CD52 antigen, the major toxicities are immunosuppression and opportunistic infections. CD52 is also highly expressed on a variety of malignant cells, including childhood precursor B cell ALL. It was therefore evaluated by the COG as a single agent in a Phase II study for children with relapsed or refractory ALL. Limited activity was seen, with a response rate of only 8%, suggesting no defined role in this disease, although poor accrual led to early termination of the trial and a small sample size of nine fully evaluable patients (22).
EPRATUZUMAB
Epratuzumab is a humanized anti-CD22 monoclonal antibody that exerts its anti-cancer effect through ADCC and through B cell receptor modulation. CD22 expression is restricted to B cells, including immature and mature B cells but not pro-B or plasma cells, and is found on the majority of pre-B ALL cells, making it an attractive target for immunotherapy. Epratuzumab was initially studied for the treatment of lupus and is currently in Phase III development for this indication. It has shown promising results in adult NHL and diffuse large B cell lymphoma (DLBCL) in combination with rituximab and standard chemotherapy (23). Epratuzumab is the monoclonal antibody that is furthest progressed in the treatment of childhood pre-B ALL. A pilot COG study showed that it was well tolerated in 15 children with relapsed ALL, with toxicity limited to mild infusion-related reactions. Limited activity was seen during a single-agent treatment window. However, of the 12 patients who received the antibody in combination with chemotherapy, 9 achieved a complete remission (CR), and in 7 of these minimal residual disease (MRD) became undetectable (24). In a larger follow-up Phase II study, 114 children received epratuzumab in combination with re-induction chemotherapy for relapsed ALL. While remission rates did not differ from historical controls, those who obtained CR had significantly lower MRD levels than patients in previous reports treated with the same chemotherapy regimen (25). These results suggest that treatment with epratuzumab may improve the quality of remission in relapsed patients and hence overall outcomes. This hypothesis is currently being studied in a large, randomized, international Phase III relapsed pediatric ALL study. As such, epratuzumab will be the first monoclonal antibody to be evaluated in a randomized Phase III setting for childhood pre-B ALL.
CONJUGATED MONOCLONAL ANTIBODIES
Antibody-drug conjugates have been shown in preclinical studies to be more active than unconjugated monoclonal antibodies targeting the same surface antigen. For example, SAR3419, an ADC composed of the anti-CD19 antibody huB4 and the maytansine derivative DM4, has greater anti-tumor activity than the monoclonal antibody huB4, the free drug DM4, or the unconjugated anti-CD20 antibody rituximab (26). Because the number of conjugation sites on each antibody is limited, only a small amount of active drug can be delivered to target cells, a barrier that can be overcome by utilizing highly potent cytotoxic agents (27).
Leukemic cells are prime targets for ADCs as they express several antigens that are not commonly expressed on normal cells and are easily accessible in the circulation (11,27). Additionally, anti-drug antibody formation, and hence resistance to antibody therapy, is reduced in leukemia owing to the commonly associated immunosuppression and depletion of B cells, which decrease the capacity to form such antibodies. Since ADCs do not require active immunological responses to exert their clinical activity, they may also be effective in profoundly immunocompromised patients. The ADC target antigens in leukemia are well characterized, and their expression on normal cells (such as precursor B cells and myeloid cells) can be tolerated because of the sophisticated supportive care available in clinical practice and the ability of these cells to regenerate (9).
The target antigen, potency of the cytotoxic agent and stability of the linker that joins the two elements of the ADC have interdependent effects on the properties of the drugs. They determine the clinical activity and tolerability of the ADC (28).
LINKERS
Linker technology is an area of ADC development that has progressed rapidly in recent years. There are four main types of linkers currently in use (27):
1. Acid-labile hydrazone linkers that are degraded in the acidic environment of lysosomes (pH ~5) (unstable, acid-cleavable).
2. Disulfide-based linkers that are selectively cleaved in the reducing intracellular milieu of the cytosol (unstable, reduction-cleavable).
3. Peptide linkers, such as valine-citrulline, that are highly stable in circulation and are degraded by lysosomal proteases in the target cells; they are generally more stable than disulfides or hydrazones.
4. Non-cleavable thioether linkers that release the active cytotoxic drug after degradation of the antibody in the lysosomes.
Stable linkers such as peptide linkers have the advantage of extending the half-lives of cytotoxic drugs from hours to several days. Unstable linkers can reduce the half-lives of monoclonal antibodies, resulting in free antibody that competitively binds to the target antigen and thereby reduces the efficacy of the ADC (29). Intermediate linker stability produces the most effective ADCs, since highly stable linkers result in decreased cytotoxic drug release following internalization of the ADC-antigen complex. For example, an ADC joining epratuzumab to SN-38 via a serum-stable but intracellularly cleavable linker, developed for B cell malignancies, was shown in vitro to be 40- to 55-fold less potent than the construct with the more labile CL2A linker (30).
Antibody-drug conjugates linked by a disulfide bond (but not a thioether bond) are capable of exerting a bystander effect on cells that express little or none of the target antigen. Hence, ADCs can be engineered either to exert a bystander effect, by using a disulfide linkage, or to kill cells expressing the target antigen more precisely while sparing nearby normal cells, by using a thioether linkage. The bystander effect may be beneficial particularly in solid tumors, where damaging supporting structures including endothelial cells, neovasculature, and stromal cells can enhance the efficacy of the ADC (31). In leukemia, however, precise targeting of circulating malignant cells is more desirable.
There is an ongoing effort to further refine linker technology so as to improve the efficacy and reduce the toxicity of ADCs. Newer linker technologies include flexible polymer linkers (Mersana Therapeutics), which allow greater drug loading (15-20 drugs per antibody), as well as the use of antibody fragments. This allows the use of less potent cytotoxics and hence potentially reduces generalized toxicity (32).
CYTOTOXIC DRUG
Historically, ADCs combined monoclonal antibodies with standard chemotherapeutic agents, including anthracyclines (doxorubicin), methotrexate, and vinca alkaloids (vinblastine), because of their availability and known cytotoxic properties. More recently, highly potent cytotoxic drugs have been used that cannot be delivered systemically unless conjugated to specific antibodies via a stable linker.
Auristatins and maytansines exert their cytotoxic activity through inhibition of microtubule assembly, binding to tubulin at the same site as the vinca alkaloids. These agents are 50- to 200-fold more potent than vinca alkaloids and cause G2/M phase cell cycle arrest and apoptotic cell death. Calicheamicin is an enediyne antibiotic and DNA strand-cleaving agent that causes double-strand breaks, leading to cell apoptosis. Each of these agents is 100- to 1000-fold more potent than conventional chemotherapy drugs but, if used alone, has little to no cytotoxic activity at the maximum dose tolerated in the clinic.
Pharmacokinetic studies have shown that an average of four drug molecules per antibody produces a stable compound that effectively delivers optimal drug concentrations into malignant cells expressing the target antigens (33,34). More heavily loaded conjugates tend to be rapidly cleared from the circulation or to aggregate and impair antigen binding (12). Less heavily loaded conjugates leave free monoclonal antibody, which competitively binds to the target antigen, resulting in a shorter effective half-life (13).
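The ~4 drugs per antibody figure is an average over a distribution of conjugate species. A minimal Python sketch, with a purely hypothetical species distribution (not taken from the cited studies), illustrates how the mean drug-to-antibody ratio (DAR) follows as the loading-weighted average:

# Illustrative DAR calculation; the species distribution below is hypothetical.
def average_dar(species):
    """species maps drug load (drugs per antibody) -> mole fraction."""
    total = sum(species.values())
    return sum(load * frac for load, frac in species.items()) / total

# Hypothetical distribution over even drug loads (0, 2, 4, 6, 8),
# chosen only so that the mean lands near the reported optimum of ~4:
dist = {0: 0.05, 2: 0.20, 4: 0.45, 6: 0.25, 8: 0.05}
print(f"average DAR = {average_dar(dist):.2f}")  # -> 4.10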
CD33
CD33 is an antigen expressed at significant levels on 90% of leukemic blasts in AML and on immature normal cells of the myelomonocytic lineage, but absent from normal hematopoietic stem cells (12), making it an optimal target.
GEMTUZUMAB OZOGAMICIN
Gemtuzumab ozogamicin (GO; Mylotarg) is the first member of the ADC class of drugs to receive FDA approval. It was approved in 2000 for the treatment of AML after undergoing trials as both monotherapy and combination therapy with standard treatment in adult AML patients (35). It is a humanized anti-CD33 monoclonal antibody linked to calicheamicin. The antibody is joined to the cytotoxic drug via an acid-labile disulfide linker, which is hydrolyzed within the acidic environment of lysosomes and endosomes in target cells to release calicheamicin as the active drug.
Conflicting results have been seen in adult AML patients treated with GO. As monotherapy in patients >60 years with relapsed CD33 positive AML, GO resulted in an overall response rate of 25-30%. However, GO was withdrawn from the market by its manufacturer in June 2010 following a Phase III randomized controlled trial that showed no additional benefit of GO in combination with standard therapy (daunorubicin and cytarabine) over standard therapy alone for adult AML (36). Additionally, fatal toxicity secondary to veno-occlusive disease (VOD) was reported in the GO arm, with an incidence of ~2% and increased risk post HSCT (37). GO's efficacy was thought to be limited by heterogeneous drug conjugation, linker instability, and a high incidence of multi-drug resistance (38).
Gemtuzumab ozogamicin has been studied extensively in pediatric AML. In a Phase I pediatric study it induced remission as a single agent in 28% of patients with relapsed or refractory CD33 positive AML. The main adverse events reported were marrow suppression and VOD. The latter occurred in 40% of patients who underwent HSCT after GO and in one patient prior to HSCT; this patient subsequently underwent HSCT without developing VOD (4). In a randomized Phase III study of GO as post-consolidation therapy for children with AML, the drug was well tolerated; however, no survival benefit was seen (39). GO was recently evaluated in a randomized Phase III COG trial in pediatric patients newly diagnosed with AML. The addition of GO to standard chemotherapy improved event-free and relapse-free survival but produced no significant difference in overall survival (OS) (40,41).
A fractionated low-dose regimen of GO as first-line therapy in adults with previously untreated AML showed significant improvement in event-free, relapse-free, and overall survival compared with standard chemotherapy, and was generally well tolerated apart from hematological toxicity (thrombocytopenia), in a Phase III randomized open-label trial (ALFA-0701) (42). This suggests that such a regimen may allow the delivery of higher cumulative doses and improve outcomes.
Several case reports have identified the potential of GO monotherapy to induce remission in relapsed CD33 positive ALL, which represents 15% of pediatric and adult ALL (43-47).
SGN-CD33A
SGN-CD33A is a humanized anti-CD33 antibody conjugated to a highly potent, synthetic DNA cross-linking pyrrolobenzodiazepine dimer via a protease-cleavable linker. It causes DNA damage with cell cycle arrest and apoptotic cell death to exert its efficacy in CD33 positive AML (38). A Phase I dose finding study is currently recruiting adult patients with CD33 positive AML (NCT01902329).
AVE9633
This humanized anti-CD33 antibody (huMy9-6) is linked to the maytansinoid DM4 via a disulfide linker. It is currently undergoing a Phase I trial in relapsed or refractory CD33 positive AML in adults (NCT00543972). Results are pending, but one complete response and one partial response were observed among the first 17 patients enrolled in the study (12).
CD22
As discussed above, CD22 has been identified as an ideal target for ADCs due to high expression on the surface of malignant B-lineage leukemia (>90% of B cell ALL) and lymphoma cells and rapid internalization after binding (48,49). Since the CD22 antigen undergoes constitutive endocytosis, it is well suited for intracellular drug delivery.
A number of CD22 targeted ADCs and recombinant immunotoxins are currently in development for pediatric B-lineage ALL and NHL, as well as adult hairy cell leukemia (49-52).
INOTUZUMAB OZOGAMICIN
Inotuzumab ozogamicin (IO, CMC-544) is a humanized anti-CD22 monoclonal antibody linked to calicheamicin, which was shown to induce CR in 39% of adults and children with relapsed and refractory ALL, with an overall response rate of 57%, in a Phase II trial (53). A Phase I/II dose escalation trial in adults resulted in 71 and 88% OS in DLBCL and follicular lymphoma, respectively (54). It is currently undergoing a randomized Phase III trial in adults, with a pediatric ALL Phase I trial planned. Despite lower CD22 expression on ALL cells compared with lymphoma cells, IO had similar cytotoxicity against both cell types in preclinical in vitro studies (55).
DCDT2980S
DCDT2980S is a humanized anti-CD22 antibody linked to the potent monomethyl auristatin E (MMAE) cytotoxic agent via a protease-cleavable linker, and is capable of inducing complete tumor regression in xenograft models of NHL (56).
CD19
CD19 is a transmembrane glycoprotein and a pan-B cell marker expressed throughout B cell development with the exception of mature plasma cells. It has threefold higher expression in mature B cells compared with immature B cells (57) and is one of the earliest B cell restricted antigens. It plays an important role in maintaining balance between immunity and autoimmunity. CD19 is integral to B cell differentiation through receptor signaling at multiple stages of B cell development, which allows anti-CD19 antibodies to target various B cell malignancies including immature precursor B cells in ALL (58). CD19 is almost universally expressed in all pediatric ALL blast cells (59), which has led to interest in the development of CD19 targeted antibodies for the treatment of ALL.
SAR3419
SAR3419 is composed of a humanized monoclonal IgG antibody targeting CD19 (huB4) and a maytansine derivative and highly potent cytotoxic drug (DM4), conjugated via a cleavable disulfide cross-linking agent (N-succinimidyl-4-(2-pyridyldithio)butanoic acid, SPDB). It has a half-life of 4-6 days in vivo. The humanized B4 monoclonal antibody alone, directed against CD19, was not found to have any in vivo activity against a variety of lymphomas in mouse models (26).
A Phase I trial in adults with relapsed or refractory CD19 positive B cell NHL resulted in a 33% objective response rate with further results pending (60). Another Phase I first-in-man clinical trial in patients with relapsed lymphoma demonstrated a reduction in tumor size in 47% of adult patients (61). The main dose-limiting toxicity in Phase I trials has been reversible corneal microcystic epithelial changes. There has been a notable lack of significant hematological toxicities (58).
SAR3419 was identified through the National Cancer Institute's Pediatric Preclinical Testing Program as a potentially highly effective therapy for pediatric ALL. Further preclinical studies suggest that SAR3419 is highly effective in combination with standard induction chemotherapy (vincristine, dexamethasone, and l-asparaginase) for CD19 positive ALL, including chemoresistant subtypes such as Philadelphia positive ALL and infant MLL-ALL. SAR3419 induced durable remissions in highly chemoresistant ALL xenografts and effectively prevented relapse in hematolymphoid and peripheral organs (except the CNS) when administered in combination with standard chemotherapy (62). A Phase I/II trial in adult ALL patients is currently recruiting, and a pediatric Phase I trial is currently planned.
SGN-CD19A
This humanized anti-CD19 monoclonal antibody conjugated to the auristatin derivative monomethyl auristatin F (MMAF) showed positive results in a first-in-human Phase I trial in patients with relapsed or refractory B cell ALL and lymphoma, including pediatric patients. Of a total of eight patients with leukemia, one achieved CR and four experienced clinical improvement. The main reported adverse effects were headache, fever, nausea, fatigue, and blurred vision (63).
OTHER ANTIBODY-DRUG CONJUGATES
Antibody fusion proteins combine the cytotoxic portion of a protein toxin produced by bacteria, fungi, or plants with a monoclonal antibody directed at antigens expressed on malignant cell surfaces. These cytotoxic agents inhibit protein synthesis and induce apoptosis. Moxetumomab pasudotox is an example of this class of drugs; it is composed of a humanized anti-CD22 monoclonal antibody and a 38 kDa fragment of Pseudomonas exotoxin A called PE38 (64). A Phase I trial of moxetumomab pasudotox resulted in 3 CRs among 12 pediatric patients with pre-B cell ALL (65), and a Phase II study in pediatric patients with relapsed or refractory B cell ALL or NHL is currently planned.
BISPECIFIC ANTIBODIES
Since the recognition of tumor immune surveillance and the role of T cells in this process (66,67), various T cell based therapeutic approaches have been developed to control cancer growth or induce tumor regression. These include anti-cancer vaccines, T cell activating antibodies, and adoptive transfer of autologous ex vivo expanded T cells (68). Most of these strategies are subject to tumor escape mechanisms through down-regulation of surface antigens and loss of molecules involved in T cell recognition. Additionally, conventional antibodies cannot recruit T cells, since T cells lack Fcγ receptors.
Bispecific T cell engager (BiTE) antibodies can largely overcome these limitations by directly engaging and recruiting pre-existing, antigen-experienced, polyclonal T cells at the invariant CD3 receptor as well as antigens on malignant cell surfaces, bringing the two into close proximity. This triggers the signaling cascade of the T cell receptor complex and redirects endogenous T cells against the specifically targeted malignant cells. Granules containing granzymes and perforin fuse with the T cell membrane and discharge their cytotoxic contents (69). This local T cell activation could potentially be used to monitor the efficacy of BiTE antibody drugs.
Bispecific T cell engager antibodies are rapidly emerging as an exciting novel targeted cancer therapy, particularly in hematological malignancies. The therapeutic mechanism of BiTE antibodies is relatively resistant to immune escape mechanisms as they utilize the patient's own immune system for their efficacy. Since these drugs rely on functional immune effector cells for activity, challenges exist around their administration in conjunction with myelosuppressive chemotherapy. While a precise role is yet to be defined, their administration after allogeneic HSCT may be effective.
BLINATUMOMAB
Blinatumomab is an anti-CD19/anti-CD3ε bispecific antibody in clinical development for the treatment of B-lineage hematologic malignancies. It is designed to transiently engage primed cytotoxic effector memory T-lymphocytes for targeted killing of malignant B cells, which uniformly express CD19 (70).
There have been very promising results emerging from Phase II trials with blinatumomab, indicating that it is a highly efficacious anti-leukemia drug. In a recent long-term analysis of a Phase II trial with a median follow up of 33 months, blinatumomab induced an 80% MRD response rate in adults (16 of 20 patients) with B cell ALL and persistent or relapsed MRD (71). Blinatumomab has been reported to induce CR in three cases of pediatric patients with relapsed and refractory B cell ALL after allogeneic HSCT (72). A pediatric Phase I/II trial of blinatumomab in children with relapsed or refractory B cell ALL resulted in an overall response rate of 41% with a 32% CR rate (73). The most significant reported adverse events are reversible central nervous system toxicities including encephalopathy, tremor, and aphasia (20).
An adult Phase II trial in patients with refractory or relapsed DLBCL is also currently recruiting patients (NCT01741792) and a number of other Phase II trials with blinatumomab are planned including patients with Philadelphia positive B cell ALL (NCT02000427) and MRD positive ALL (NCT00560794). A Phase III trial of blinatumomab for patients with refractory or relapsed B cell ALL is also planned (NCT02013167).
Blinatumomab does have a relatively short half-life of 2-3 h due to rapid renal clearance, making continuous infusion over 4-8 weeks via a portable mini-pump the optimal mode of delivery (69), which presents challenges in the pediatric population. The dependence on circulating immune cells also limits the ability to combine the treatment with standard cytotoxic and myelosuppressive therapies. However, its striking efficacy has generated great interest, with a potential future role in the management of MRD positive disease.
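The half-life argument can be made concrete with a minimal one-compartment pharmacokinetic sketch. Only the 2-3 h half-life comes from the text; the volume of distribution and the dose/infusion rate below are hypothetical placeholders chosen purely for illustration, not actual blinatumomab parameters:

# Hedged one-compartment PK sketch; V_L and the dose inputs are hypothetical.
import math

T_HALF_H = 2.5                # half-life in hours (text: 2-3 h)
K = math.log(2) / T_HALF_H    # first-order elimination rate constant (1/h)
V_L = 5.0                     # hypothetical volume of distribution (litres)
CL = K * V_L                  # clearance (L/h)

def bolus_conc(dose_ug, t_h):
    """Concentration (ug/l) at time t after a single IV bolus."""
    return (dose_ug / V_L) * math.exp(-K * t_h)

def infusion_conc(rate_ug_per_h, t_h):
    """Concentration (ug/l) at time t during a constant-rate infusion."""
    return (rate_ug_per_h / CL) * (1.0 - math.exp(-K * t_h))

# A bolus decays to <1% of its initial level within ~18 h (~7 half-lives),
# while a continuous infusion reaches ~97% of steady state after ~12.5 h:
print(bolus_conc(100.0, 18.0) / bolus_conc(100.0, 0.0))   # ~0.007
print(infusion_conc(10.0, 12.5) / (10.0 / CL))            # ~0.97

This is why a drug with hour-scale clearance must be delivered as a continuous infusion to hold a therapeutic concentration over weeks.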
CHIMERIC ANTIGEN RECEPTORS
Although a detailed discussion is beyond the scope of this review, chimeric antigen receptor (CAR) T cells are also emerging as effective therapies for hematological malignancies. CAR T cells are T cells genetically modified to express an antibody-derived receptor directed against malignant cell surface antigens; they exert their cytotoxicity through T cell mediated signaling pathways. CARs directed against CD19 have been effective in hematological malignancies, with an overall CR rate of 88% in adults with refractory or relapsed B cell leukemia (74). A number of ongoing clinical trials using CARs directed against CD19 have demonstrated efficacy in pediatric patients with B cell leukemia and lymphoma (75). Grupp et al. reported CRs in two pediatric patients with relapsed and refractory B cell ALL treated with CTL019 CAR T cells. The most significant dose-limiting toxicities were the cytokine-release syndrome, requiring cytokine blockade with etanercept and tocilizumab, and B cell aplasia (76).
CONCLUSION
Antibody therapy represents an exciting new treatment approach for childhood leukemia. ADCs and BiTEs are rapidly emerging as the next frontier in the treatment of hematological malignancies and their application in pediatric leukemia is in development. Over the past 50 years, minimal changes have occurred in the drugs used to induce and maintain remission in pediatric leukemia, with most trials using established cytotoxic drugs but with variations in schedules and dosages. Further advancement in the treatment of pediatric leukemia, with the ultimate aim of improved OS and reducing the acute and long-term complications of treatment, may be achieved by the inclusion of novel antibody therapies. Several drug conjugates and bispecific antibodies have demonstrated promising activity in pediatric leukemia and ultimately these compounds may transform the routine management of childhood leukemia patients in the future.
A major challenge lies in the development of clinical trials that will ultimately inform the integration of novel antibody therapies into standard treatment protocols. Experience with rituximab has shown that the improvement in survival in adult lymphoma patients occurs through combination with standard chemotherapy, rather than implementation as monotherapy (16). Trials of antibodies such as alemtuzumab, as single agents in pediatric leukemia, have had difficulty recruiting patients and have shown low levels of activity. Preclinical data on SAR3419 demonstrate that it has the greatest levels of activity and synergy when combined with standard cytotoxic therapies (62). Encouragingly, epratuzumab is the first monoclonal antibody to be evaluated in combination with conventional chemotherapy for childhood pre-B ALL in an international Phase III trial. Integration of antibody therapy with chemotherapy will be especially challenging for BiTE drugs such as blinatumomab that rely on T cell function for their efficacy, since these immune cells are depleted by myelosuppressive chemotherapy. Blinatumomab may eventually have a role administered between cycles of standard chemotherapy, as part of maintenance treatment, post transplantation, or to treat MRD.
The incorporation of antibody therapies into standard chemotherapy backbones not only produces opportunities to increase treatment efficacy, but may also be an avenue to reduce treatment side effects. A cytotoxic drug in a standard protocol could potentially be replaced by an ADC with a similar mechanism of action, e.g., vincristine may be replaced in re-induction regimens by SAR3419. Optimal treatment regimens may include a number of ADCs targeting different antigens, e.g., anti-CD20 and anti-CD19 for B cell malignancies. As the number of ADCs in development increases, combination trials will need to be conducted (32).
As antibody trials in pediatric leukemia progress it will be critical to investigate and identify biomarkers that can accurately identify patients most likely to benefit from antibody therapy. While it is clear that expression of the target antigen on the surface of the leukemia cell is a prerequisite, it is unknown whether other factors influence treatment response. For example, are there factors that increase the likelihood of the emergence of a resistant clone, and does the antigen density (that is, the amount of antigen expressed on the surface of each cell) predict response?
Another major clinical challenge in relapsed ALL remains the ability to target sanctuary sites such as the CNS and testicular leukemia. Antibodies in general do not penetrate the blood-brain barrier and preclinical data confirm that novel ADCs do not eliminate CNS disease in mouse models of pediatric ALL (62). Rituximab has recently been shown to be active when administered intrathecally (77,78). It would be of interest to study the intrathecal administration of novel antibody therapies in childhood leukemia to determine whether this approach may improve the treatment of CNS disease, and also potentially reduce the need for radiation therapy with its associated significant long-term morbidities.
The adverse effects of antibody therapies will need to be closely monitored in the pediatric setting. Overall they appear to be very safe, with the majority of trials showing favorable toxicity profiles. In particular, limited hematological toxicity has been recorded. However, some unexpected adverse events have been noted including neurotoxicity with blinatumomab and an increased risk of VOD, particularly after HSCT, following treatment with GO (4).
It is notable that in the adult population antibody therapy is utilized and studied more often in lymphoma patients than in leukemia patients. This is mostly related to the relative incidence of the diseases, with lymphoma occurring with much higher frequency than ALL. However, hematological malignancies provide ideal targets for these novel agents, as target antigens can be readily assessed by flow cytometry on blood or marrow. In pediatric patients, relapsed ALL is a more common clinical problem with greater burden of disease than lymphoma and should be the focus of future antibody trials in this population.
Several challenges remain to ensure these novel agents are made available to childhood leukemia patients. The cost of development, production, and manufacturing of these drugs is a major limitation to their generalized applicability (9), and pediatric leukemia patients remain a small market for pharmaceutical companies. Despite leukemia being the most common malignancy to affect children, relapsed childhood leukemia remains a relatively rare disease, and testing these drugs in clinical trials, from early to advanced phases, requires multi-institutional trials and international cooperation. Despite these challenges, these novel agents bring the promise of great advances in the treatment of pediatric leukemia, with the potential for improved OS and a reduction in treatment toxicity.
Comparative study of CSD-grown REBCO films with different rare earth elements: processing windows and Tc
REBa2Cu3O7−x (REBCO, RE = rare earth) compounds with different single RE elements were grown via TFA-MOD (metal-organic deposition of trifluoroacetates) to clarify their Tc values when grown by the same preparation method and their processing windows; here: the crystallisation temperatures at a constant process gas composition (pO2, pH2O). We focussed on the lanthanides (Ln) Nd, Sm, Gd, Dy, Ho, Er and Yb as substituents for Y in the REBCO phase and investigated their growth behaviour in terms of resulting physical (inductive Tc and Jc(77 K)) and structural properties (determined by XRD, SEM, TEM). All phases were grown as pristine films on LaAlO3 and SrTiO3 and compared to their respective nanocomposites with 12 mol% BaHfO3 for in-field pinning enhancement. With regard to Tc and Jc(77 K), the optima of both values shift towards higher growth temperatures for increasing and decreasing RE ion size with respect to yttrium. Highest Tc values achieved so far do not show a trend that can solely be related to the RE ionic size. On the contrary, Tc,90 values of the LnBCO compounds from Sm to Er range between 94.0 and 95.3 K and are, therefore, significantly larger than the highest values of the average-size non-lanthanide, Y, with Tc,90 = 91.5 K. Jc,sf values at 77 K seem to plateau between 5 and 6 MA cm−2 from Sm to Er and are again clearly above the maximum values we ever achieved for Y with 4.2 MA cm−2. REBCO phase formations of the very small Yb and large Nd turned out to be more difficult and require further adjustment of growth parameters. All REBCO phases investigated here show distinct dependences of Tc on the lattice parameter c.
Introduction
In order to assess the suitability of a REBa2Cu3O7−x (REBCO or RE123, RE = rare earth) compound other than the well-established YBCO for coated conductor (CC) production, three main aspects have to be considered: price, process handling and performance. Moreover, all three have to be optimised simultaneously in order to raise the performance-cost ratio and make the material attractive for the market. Thus, CC production has to be cheap, simple and reproducible, and the product has to perform to the best of its possibilities, ideally at all operating conditions aimed for (i.e. temperature and magnetic field environment). But is there any room for improvement beyond YBCO, when years of YBCO research have passed [1-11]?

Figure 1. Prices of the rare earth compounds: (a) prices per 10 g RE acetate (99.9% purity grade) in Euro [12]; (b) world market prices per kg RE2O3 (for different available grades of purity as given in the graph above the data points) in US Dollar [13]; insets: magnifications to the low-cost region.
YBCO is indeed one of the cheapest precursor media that can be purchased on the market. YBCO targets for pulsed laser deposition (PLD), for example, cost roughly half as much as a GdBCO target. The same applies to the prices of the salts used for metal-organic deposition (MOD), figure 1(a). Here, yttrium is also one of the cheapest amongst all RE acetates. However, those prices probably reflect demand rather than actual production cost. If the demand for another REBCO precursor were scaled to that of YBCO, prices might level out, since the world market prices for RE raw materials, figure 1(b), are only partially reflected in the prices of the precursor salts. Despite the rather cheap raw material for Eu, for example, the acetate is rather costly, while the overly expensive oxides of Tb and Dy have a rather low impact on the corresponding acetate prices. In any case, the pro rata cost of the rare earth element within the superconducting layer of a CC may rather be neglected once the entire production process is considered and the prices for metallic substrates, buffer materials and processing are added to the sum.
Several thin film deposition methods have qualified for REBCO CC production, such as physical vapour deposition techniques including PLD [14-16] and reactive co-evaporation [3,16], as well as chemical methods including metal-organic chemical vapour deposition (MOCVD) [17] and chemical solution deposition (CSD) [6,18]. All methods have their advantages and disadvantages, which have been examined carefully in several studies. Yet, when only the price is considered, techniques depending on high-vacuum equipment have clear disadvantages for large-scale production. Particularly cheap is the deposition from chemical solutions. CSD equipment consists of no more than a reel-to-reel-operated furnace with a deposition device, e.g. dip, spray, print or slot-die coating. The entire CC architecture, including cap and buffer layers, can be processed 'all-chemically' [19]. However, with respect to the performance-cost ratio, mixed approaches may be the means of choice. The deposition of the REBCO layer via CSD has clear advantages over other techniques, though: the preparation of the precursor solutions is utterly simple and therefore cheap, and the composition can easily be altered through solution mixing or additions of further soluble salts. With this, a plethora of inclusions for pinning enhancement is accessible: atomic-scale doping [20] on the RE [21-23], Ba [24-26] and Cu [27-29] sites, or secondary phase additions of various compositions (e.g. Y2O3 [30], BaZrO3 [31,32], BaHfO3 [33-35], BaSnO3 [36], Ba2YTaO6 [37], Ba2YNbO6 [38]), shapes (spherical, plate-like, rectangular, etc) and amounts. Thereby, precipitates grown by CSD are known to lead to the largest reductions of the Jc anisotropy of REBCO, with sometimes near-isotropic Jc in a certain field and temperature range, see e.g. [39-41].
Yet, most of the attention has been focussed on YBCO thin films, and much less is known about the other lanthanide REBCO compounds (LnBCO). In particular, comparative studies from a single source are rare and often focus on the comparison of crystal structures [42] or Tc values [43] or both [44,45], and less on the performance, i.e. critical current (densities), Ic (resp. Jc), at interesting points in the temperature-magnetic field matrix. Certainly, very detailed investigations of individual LnBCO compounds have been reported, e.g. [46-48] and above all on GdBCO [49-51], but comparing such data with each other in order to define trends is problematic, especially when different deposition techniques are involved. The problem of comparability becomes most obvious when the growth of self-assembling nanostructures of the BMO3 type (M = transition metal) within the REBCO matrix is examined. While PLD mostly leads to the formation of biaxially oriented nanoparticles and -rods, CSD tends to create randomly oriented particles of spherical shape. Those differences are caused by the different growth modes of the two techniques, which also lead to very distinct shapes of the grain boundaries (GBs). The growth mode of PLD leads to columnar grains with straight GBs perpendicular to the substrate interface; ex-situ deposition techniques, such as CSD, promote laminar grain growth resulting in rather meandering shapes [52].
Therefore, we optimised a number of different single-RE123 compounds and the corresponding nanocomposites with BaHfO3 (BHO) via a CSD approach: the well-established TFA-MOD (metal-organic deposition of trifluoroacetates). Our objective was to determine trends for the phase formation windows and structural and physical properties and thus to evaluate their suitability for CSD-grown CCs. For production, low furnace temperatures are of interest for economic reasons, and wide processing windows for improved reproducibility of the properties and therefore increased robustness of the process. For a comprehensive comparison, a wide range of REBCO phases has been chosen, from the largest RE element still known to form a stable superconducting RE123 phase, Nd, through Sm, Dy, Ho and Er, to the very small RE ion Yb. Those phases were compared to YBCO and GdBCO, for which we had obtained similar data in previous studies [35,53]. Not part of the study were several Ln elements whose corresponding RE123 phases are either metastable (La) or unstable (Ce, Tb), not superconducting (Pr) or would be radioactive (Pm). Also not included were Eu and Tm, mostly for reasons of time, as well as Lu due to expected difficulties with single-phase growth. We chose BHO as nanoscale flux pinning centres due to our expertise and good experience with this secondary phase with respect to pinning enhancement and reduction of the macroscopic anisotropy [35,49]. For the current study, the windows of the growth temperatures at a constant gas composition with an oxygen partial pressure pO2 of 150 ppm have been assessed via inductively determined values of Tc and Jc. The record values of each system are compared to each other and related to structural characteristics, e.g. the c-axis parameter.
Experimental section
All REBCO systems studied here, pristine and with additional BHO nanoparticles, required separate precursor solutions, which were prepared following the same recipe based on the well-established TFA-MOD approach [2,53-55]: the acetates of RE, Ba and Cu (>99.99%, Alfa Aesar) were dissolved in water in a 1:2:3 ratio, mixed with an excess of trifluoroacetic acid (TFAH, 99.5+%, Alfa Aesar) to enforce a high degree of conversion of the salts into the trifluoroacetates, and stirred until the last remains of metal-organic salt were dissolved. The solutions were concentrated by means of a rotary evaporator to yield a viscid residue, which was re-diluted in ultra-dry methanol (>99.9%, H2O < 50 ppm, Carl Roth) and filled up to a final rare earth concentration of 0.25 mol l−1. In the case of the nanocomposite solutions, additional hafnium(IV) 2,4-pentanedionate (Hf(acac)4, 97+%, Alfa Aesar) and a similar molar amount of barium acetate were dissolved in water to enable a theoretical formation of 12 mol% BHO within the REBCO films. After adjusting the concentration, a very small amount of acetylacetone, roughly 60 mol% with respect to the RE element (or 1.5 vol%), was added to the solutions in order to make them insensitive to impurities such as water [53]. The additive leads to a momentary precipitation of copper acetylacetonate, which dissolves readily and permanently after a few minutes in an ultrasonic bath. Thereafter, the solutions were filtered through PTFE with 0.2 µm pore size.
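As a rough illustration of the weigh-in behind such a 1:2:3 solution, the short Python sketch below computes the acetate masses for the stated 0.25 mol l−1 RE concentration, here for yttrium. It is not the authors' protocol: anhydrous molar masses are assumed, whereas commercial acetates are usually hydrates, so real weigh-in masses would be rescaled with the hydrate molar mass:

# Sketch of the weigh-in for a 1:2:3 TFA-MOD precursor solution (Y example).
# Molar masses are for the anhydrous acetates (approximate values).
MOLAR_MASS_G_MOL = {
    "Y(CH3COO)3":  266.04,
    "Ba(CH3COO)2": 255.42,
    "Cu(CH3COO)2": 181.63,
}
STOICHIOMETRY = {"Y(CH3COO)3": 1, "Ba(CH3COO)2": 2, "Cu(CH3COO)2": 3}

def weigh_in(volume_ml, conc_re_mol_l=0.25):
    """Return grams of each acetate for the requested solution volume."""
    n_re = conc_re_mol_l * volume_ml / 1000.0   # moles of RE
    return {salt: n_re * factor * MOLAR_MASS_G_MOL[salt]
            for salt, factor in STOICHIOMETRY.items()}

for salt, grams in weigh_in(10.0).items():      # e.g. 10 ml of solution
    print(f"{salt}: {grams:.3f} g")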
These solutions were spin-coated onto cleaned 10 × 10 mm2 (100)-oriented LaAlO3 (LAO) or SrTiO3 (STO) single crystal substrates at a rotation speed of 6000 rpm for 30 s, which leads to a final film thickness of (220 ± 20) nm. Subsequently, the films were heat-treated as described in [53], whereby the oxygen partial pressure pO2 during the crystallization was kept at 150 ppm and the oxygen for the final oxygenation process was introduced after the oxygenation temperature of 450 °C had been reached. The cooling step between crystallization and oxygenation was carried out in dry nitrogen with the same pO2 and gas flux as used during the film growth.
The superconducting characteristics of the films were analysed by inductive techniques: the self-field critical current density Jc,sf at 77 K with a calibrated Cryoscan (Theva, 50 µV criterion), and the transition temperature Tc with a self-designed and calibrated mutual inductance device. Here we show Tc,10, Tc,50 and Tc,90, defined as the temperatures at which the net induced voltage reaches 10, 50 and 90% of its normal-state value. The layer thicknesses were determined by atomic force microscopy (AFM; Dimension Edge, Bruker) on 50 µm wide bridge structures. Those bridges were prepared by photolithography with an image reversal resist (AZ5214E, Microchemicals) and wet-chemical etching with a 0.6 wt% HNO3 dilution. Structural features of the films were investigated by x-ray diffraction (XRD; D8 Discover, Bruker, Cu-Kα radiation) and scanning electron microscopy (SEM) with a low-resolution 'table-top' device (SH-5000P, Hirox, tungsten cathode, SE detector, 10 kV acceleration voltage). The high-angle annular dark-field scanning transmission electron microscopy (HAADF STEM) images were taken with an FEI Titan probe Cs-corrected transmission electron microscope (TEM) operated at 200/300 kV [56,57].
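The Tc criterion above translates directly into a simple numerical recipe. The following sketch (synthetic data; not the actual evaluation routine of the mutual inductance setup) interpolates the temperatures at which the inductive signal crosses 10, 50 and 90% of its normal-state value:

# Sketch of the Tc,10/50/90 extraction from an inductive transition curve.
import numpy as np

def tc_levels(temp_K, voltage, levels=(0.10, 0.50, 0.90)):
    """Interpolate T where V(T) reaches the given fractions of the
    normal-state voltage (taken as the high-temperature plateau)."""
    v_norm = voltage[np.argmax(temp_K)]          # plateau value above Tc
    frac = voltage / v_norm
    order = np.argsort(frac)                     # np.interp needs ascending x
    return {f"Tc,{int(100 * p)}": float(np.interp(p, frac[order], temp_K[order]))
            for p in levels}

# Synthetic transition centred near 91 K with ~1 K width:
T = np.linspace(85.0, 95.0, 401)
V = 0.5 * (1.0 + np.tanh((T - 91.0) / 0.5))
print(tc_levels(T, V))   # Tc,10 ~ 90.45 K, Tc,50 ~ 91.0 K, Tc,90 ~ 91.55 K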
Results and discussion
Most of the newly investigated REBCO phases, with RE = Dy, Ho and Er, show exactly the same behaviour on the two single-crystal substrates LAO and STO, which are commonly used for basic investigations, as did YBCO and GdBCO described in a previous study of ours [35]: the pristine phases show significantly better properties on LAO, while the nanocomposites with BHO prefer STO as substrate. This had been related to a loss of the chemical inertness of STO at higher temperatures (particularly above 760 °C), resulting both in a Tc reduction through Ti permeation and in the growth of misoriented grains because of a deterioration of the structural integrity of the substrate interface. This problem occurs mainly in pristine films and can be avoided with LAO, which seems to be stable up to higher temperatures under the same growth conditions (pO2). The nanocomposites, on the other hand, tend to form spherical BHO particles within the REBCO matrix but also flat and wide-stretched structures at the interface covering a large percentage of the substrate. We assume that those particles limit the Ti diffusion into the REBCO matrix and preserve the lattice information of the substrate due to a small lattice misfit to STO (<7%). On LAO, this misfit is significantly larger (>10%), leading to randomly oriented BHO particles at the interface with a disturbing impact on the film growth of nanocomposites on LAO. Therefore, we focus mostly on the data for pristine films grown on LAO and BHO nanocomposites on STO.
The images of DyBCO, HoBCO and ErBCO in figure 2 exemplify the typical macroscopic appearance of optimally grown films: all films are completely dense and homogeneous apart from a few surface decorations on pristine DyBCO and the ErBCO nanocomposite. The surfaces of the pristine phases are especially smooth, with only a flat terrace architecture, while the nanocomposites show some superficial trenches.
Cross-sectional HAADF STEM images of YBCO, DyBCO, HoBCO and ErBCO pristine films and BHO nanocomposites confirm that all films are very dense throughout the entire layer, see figure 3. They also show, in general, quite similar microstructural landscapes. All films except the HoBCO nanocomposite contain RE2O3 nanoparticles in the volume. In YBCO and DyBCO, those particles occur either in the middle part of the films or attached to the substrate. Er2O3, on the other hand, has only been found in association with the substrate. The presence of BHO does not play a role for the RE2O3 particle size and distribution. Furthermore, the BHO particle size and distribution are rather similar in all three studied cases. Aside from secondary phases, the bottom and top parts of the films contain a high density of elongated intergrowths that may be interpreted as additional Cu-O planes or as insertions of REBa2Cu4O8 (RE124). Those parts vary in thickness throughout the samples and are partly very thin, especially the bottom parts near the substrate. The middle parts are generally more sparsely interrupted by this type of stacking fault (SF). While such a graduation has been observed in all investigated systems, parts of the central films of pristine YBCO and DyBCO were even found nearly without SFs at all, in particular if no large secondary phase particles were present, as shown for pristine YBCO (top left picture in figure 3). ErBCO and DyBCO samples appear in some parts to have almost pure RE124 phases in the top part of the films, although this has not been detected by XRD (not shown) and is thus not really a pure phase but rather SFs of high density. In the BHO nanocomposites on STO, the amount of intergrowths seems to be lower, at least with respect to their dimension. The occurrence of stacking faults in direct vicinity to the substrate interface can be interpreted as a means of stress relief in the films, while the large number close to the film surfaces is believed to be a result of Cu accumulation during the film growth.

Figure 4. Ionic radii of the RE elements [61] (primary axis, black open circles; yttrium has been fit into the contracting line of the lanthanides according to its size); increasing formation of a solid solution RE1+yBa2−yCu3Oz with increasing RE ion size beyond yttrium [59] (secondary axis, green plus signs and solid line); scheme of the tendency to form vacancies with decreasing RE ion size beyond yttrium [58] (red dotted line).
Despite several contradictions collected by MacManus-Driscoll et al in a thorough literature study [58], it is commonly agreed that the phase stabilities of the RE123 compounds differ significantly depending on the rare earth ion size, figure 4. While YBCO is known to form one of the most stable phases, mostly without any noticeable cation exchange in bulk materials [58], larger RE elements tend to form solid solutions through an exchange on the RE and Ba sites [59,60], and smaller ions lead to vacancies in the lattice [58].
Both effects have significant implications for the REBCO phase stabilities, with obvious impact on the crystallisation temperatures, Tcryst, required for optimal film growth via CSD, as well as on the width of the Tcryst windows, figure 5: YBCO, the most stable phase, has its maxima of Jc,sf(77 K) at the lowest growth temperatures compared to the other REBCO compounds, both for the pristine phases (~780 °C) and the nanocomposites (~770 °C). With falling and rising RE ion size, these optima shift towards higher growth temperatures, particularly in the case of pristine films on LAO, and the windows become incrementally narrower, particularly for the nanocomposites on STO. Tc values do not show maxima as clear as the Jc values, but YBCO and the adjacent compounds with similar RE ion sizes, HoBCO and DyBCO, span a rather large temperature window with very narrow transition widths (ΔTc = Tc,90 − Tc,10). The smaller and larger RE ions, Er and Gd, show similarly narrow ΔTc values only in a very small Tcryst range of ~10 °C, in some cases only at a single annealing temperature. This is most likely due to inhomogeneities in the films caused by disorder on the atomic scale (solid solution or vacancies), which reduces the reproducibility dramatically. Those very narrow windows, specifically of the GdBCO and ErBCO nanocomposites, may certainly be considered difficult with respect to CC fabrication. Nevertheless, previous results have also shown that other growth parameters, such as the pO2, are further means to influence the quality and windows of the film growth. REBCO phases with larger RE ions, e.g. GdBCO, profit from lower pO2 [49].
The record Tc and Jc values achieved so far in every REBCO system, regardless of BHO presence and including further samples grown with additional variations of the pO2, are summarized in figure 6. Up to this point, the growth of NdBCO has not been successful despite a very thorough scan of the Tcryst-pO2 matrix. None of the NdBCO samples shows any sign of superconductivity above 77 K. The x-ray diffraction patterns (not shown) give only very weak signs of the desired phase with poor c-axis texture but point towards very high optimal growth temperatures (>840 °C). This might conflict with the stability of the LAO substrate and be the reason for the failing phase formation. SmBCO is also more difficult than the compounds of smaller RE ions. In contrast to NdBCO, however, it forms the Sm123 phase without problems, in very dense and homogeneous films of good c-axis texture. Yet, the superconducting properties do not show the same reproducibility, which is attributed to the large stoichiometric disorder in the investigated films and, again, to an increasing influence of the substrate stability at the very high temperatures required for the film growth. Several SmBCO samples show very good properties, though, and the optimum temperature seems to be ~830 °C for both pristine films and nanocomposites grown at pO2 = 150 ppm. A reduction to 50 ppm slightly shifts the optima to 820 °C for both.
On the side of the smaller RE ions, YbBCO can indeed be grown, but it mostly gives broad transitions with a maximum Tc,90 of 89.5 K, figure 6(a). The optical appearance of the samples is already severely disturbed after the pyrolysis, in clear contrast to all the other REBCO systems investigated here. From our experience with the sensitivity of YBCO and GdBCO towards humidity in the solutions and as-deposited gel-like films [53], we assume that YbBCO solutions are significantly more sensitive to traces of water, and the addition of acetylacetone may not have the same beneficial effect. Therefore, YbBCO seems to require further optimisation of the solution preparation and pyrolysis before the growth of high-quality films can be addressed.
Disregarding the problematic cases of the very small and large RE ions Yb and Nd, optimally grown REBCO films from Sm to Er show a very narrow margin of Tc values, ranging from 94.0 to 95.4 K, figure 6(a). Only YBCO drops out, which may be attributed to one or more of the following facts that distinguish Y from the other RE elements: Y is not a lanthanide, it is significantly lighter, and magnetic characteristics may play a role, too. Thus, a dependence of Tc solely related to the RE ion size, as established in the literature for bulk samples [62-64], is not observed in this investigation and may not even be expected, since older studies suggest that NdBCO [63] and YbBCO [65] can also achieve Tc values of up to 96 K. A similar trend applies for the inductive values of Jc,sf at 77 K, figure 6(b): the lanthanides seem to head in the same direction and maybe towards the same limit of about 6 MA cm−2. The slightly lower values of DyBCO and SmBCO are rather a matter of statistics; for the same reason, however, the dropout of YBCO is real, since more than a hundred samples have been prepared in this system over several years. However, 77 K is rather close to Tc, so the generally lower Jc values of YBCO can be considered a Tc effect.
Those record values shown in figure 6 may not be the absolute maxima possible but are presumably quite close to what can be achieved in thin films grown by CSD, particularly for Tc. One aspect that has been neglected in the present study, though, is the oxygenation process, namely conditions such as pO2, annealing temperature and dwell time. This topic has been widely ignored for many years, although those parameters are expected to have a significant impact on the oxygen load and, with that, on the lattice parameters as well as on Tc and Jc. Yet, this topic is very comprehensive on its own and is, therefore, under current investigation. All films shown here have been oxidised in the same way, i.e. the films were annealed in pure oxygen (p = 1 atm) at 450 °C for about 120 min. The rather high annealing temperature and long dwell time are expected to allow for fast and thus mostly complete oxygen diffusion in the films. Subsequently, all samples were furnace-cooled in this atmosphere, whereby the slow cooling rates let us assume that oxygenation equilibria may have been reached down to 300 °C-350 °C. According to the literature, the RE species indeed has a severe influence on the oxygenation of the REBCO phase, figure 7 [66,67]. Yet, most of our systems seem to pass their respective optimum oxygenation temperature during the slow furnace cool and may thus be considered optimally doped. Based on figure 7, only the two systems YBCO and ErBCO appear to have potential for a further increase of Tc by quenching from the oxygen annealing temperature. Therefore, they may sit slightly in the overdoped region of the phase diagram due to the cooling in the furnace. However, aiming for their Tc maxima may even decrease rather than increase Jc, since the Jc maximum has been found in the overdoped region [68-70].
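The under-/overdoping argument can be visualised with the empirical parabolic Tc(p) relation commonly applied to cuprates (the Presland-Tallon form); this relation is not used in the paper itself, and the Tc,max value below is merely a placeholder:

# Empirical cuprate doping dome, Tc(p) = Tc_max * [1 - 82.6 * (p - 0.16)^2];
# p is the hole concentration per CuO2 plane; Tc_max is a placeholder here.
def tc_parabola(p, tc_max=95.0):
    return tc_max * (1.0 - 82.6 * (p - 0.16) ** 2)

# Slight overdoping (p > 0.16) barely reduces Tc, consistent with
# furnace-cooled films sitting a little beyond optimal doping:
for p in (0.14, 0.16, 0.18):
    print(f"p = {p:.2f}: Tc ~ {tc_parabola(p):.1f} K")   # 91.9 / 95.0 / 91.9 K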
In combination with Tc, the c-axis parameter is often used to estimate the oxygen doping level of YBCO, as shown e.g. in [71]. Yet, this is only strictly valid for single crystals and has rather little relevance for thin films, where many more factors influence both Tc and c, such as stoichiometry variations due to secondary phase precipitation, foreign-ion permeation from the substrate or artificial pinning centres, and above all strain induced by the substrate, misoriented grains or secondary phases [72]. Nevertheless, the thin films interestingly show very distinct relations between Tc and c, the latter determined via the Nelson-Riley method [73], figure 8. The green lines depict the theoretical single-crystal values of c for an oxygen deficit x = 0, c*, taken from the ICSD database. For YBCO, the entirety of the data of superconducting films ever produced with CSD in our group forms a cloud around just this line of c*, with an average value of Tc,90, ØTc,90, of roughly 87-88 K. A large number of pristine samples is found with c below c* (blue open symbols), with ØTc,90 of ~89 K. The nanocomposites on STO (red solid symbols) also form a rather dense cloud around c*, with ØTc,90 of ~87.5 K; on LAO (blue solid symbols), c stretches out to significantly larger values, though without a negative impact on Tc,90. The generally very small values of c, particularly in pristine samples, point to high oxygenation grades with x close to zero, which supports the idea that the films may be slightly overdoped, as deduced from figure 7. However, further microstructural events beyond the oxygenation seem to occur in films with c < c*, causing slight compressive strain along the c direction of the crystal structure.
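For reference, the Nelson-Riley extrapolation can be sketched in a few lines: each (00l) reflection yields an apparent c via Bragg's law, and a linear fit against the Nelson-Riley function, extrapolated to F = 0 (θ = 90°), removes the systematic angle-dependent errors. The peak positions below are synthetic, and the Cu-Kα1 wavelength is an assumption consistent with the XRD setup described in the experimental section:

# Sketch of the Nelson-Riley extrapolation for the c-axis parameter.
import numpy as np

WAVELENGTH_A = 1.5406  # Cu-K_alpha1 wavelength in Angstrom (assumption)

def nelson_riley(theta_rad):
    """F(theta) = 0.5 * (cos^2(theta)/sin(theta) + cos^2(theta)/theta)."""
    c2 = np.cos(theta_rad) ** 2
    return 0.5 * (c2 / np.sin(theta_rad) + c2 / theta_rad)

def c_axis(two_theta_deg, l_indices):
    """Linear fit of apparent c vs F(theta); the intercept at F = 0 is c."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    c_apparent = np.asarray(l_indices) * WAVELENGTH_A / (2.0 * np.sin(theta))
    slope, intercept = np.polyfit(nelson_riley(theta), c_apparent, 1)
    return intercept

# Synthetic (00l) peak positions generated for c = 11.70 Angstrom:
l = np.arange(2, 8)
two_theta = 2.0 * np.degrees(np.arcsin(l * WAVELENGTH_A / (2.0 * 11.70)))
print(f"c = {c_axis(two_theta, l):.3f} Angstrom")   # -> 11.700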
The underlying databases for HoBCO and ErBCO, the compounds with the smaller RE ions, may contain considerably fewer samples, yet one difference is obvious: c is mostly larger than c*. This suggests that these films are not as fully oxygenated as the YBCO films. Yet, strain caused by factors other than oxygen vacancies may simply overcompensate and therefore mask the impact of the oxygen loading on c. Comparable to YBCO are the dense clouds with high Tc values for the pristine films on LAO and the larger values of c for the nanocomposites.
Towards the larger RE ions, the Tc-c characteristics become more varied. DyBCO shows nearly no Tc dependence on c, i.e. the c parameter extends from c* = 11.689 Å towards very large values of 11.796 Å while maintaining a Tc,90 of at least 90.4 K. The nanocomposite on LAO has the widest span, even showing c-parameters as small as c*; the pristine films on LAO have the narrowest range. It is very unlikely that the oxygenation grade is responsible for the expansion of c, since a strong impact on Tc would then be expected. The reasons for the large c-axis parameters in DyBCO are not clear at present; yet, the high tolerance of Tc towards such an expansion of c is remarkable. GdBCO, on the other hand, again shows a very dense data cloud: almost all samples have c parameters very close to c* and ØTc,90 > 92 K. Only a few Tc values drop out amongst the nanocomposites on STO. Yet another behaviour occurs in SmBCO, which shows a dramatic decrease of Tc with increasing c.
Some of the observations on the Tc-c correlation may have to remain uncommented, particularly the reasons for the severe differences between the large RE ions Dy, Gd and Sm, since too many factors may contribute. Yet, it can be concluded that the nanocomposites show wider ranges of the c parameter and generally larger values of c. Strain caused by the BHO nanoparticles is very unlikely to be the reason, since the strain fields around the particles are very narrow, as determined by TEM. Further, it cannot be concluded from our data that BHO generally suppresses Tc in CSD-grown films, as often observed in PLD-grown films [74-76]. In fact, for YBCO, the largest Tc values were found in nanocomposites. Thus, it is also unlikely that a diffusion of Hf into the REBCO matrix causes the stretch of the c-axis in the nanocomposites, since it should be accompanied by a simultaneous decrease of Tc [77]. Moreover, the lattice parameter c would rather be expected to decrease if Hf4+ substituted for Y3+ in the crystal structure [77]. To what extent strain caused by the lattice misfit to the substrate plays a part in the Tc-c correlation is also unclear at present, since all REBCO systems lie on the same side of the misfit with respect to the two relevant substrates, figure 9: LAO induces compressive, STO tensile strain in the ab direction of the REBCO thin films, whereby the former increases with the RE ion size while the latter decreases. Hence, a corresponding impact on the c-parameters should be expected, i.e. an increasing expansion on LAO and a decreasing contraction on STO with increasing ion size. Such tendencies cannot be read from the overall c-axis parameters of the investigated REBCO-substrate combinations, also because only a very thin layer is directly affected by the misfit to the substrate, and means of stress relief are applied in the films. More TEM analyses of different samples within one REBCO system may have to be performed to clarify the reasons, specifically for the differences between the investigated systems.
Conclusions
Very similar microstructures were observed in CSD-grown films of the different REBCO compounds (RE = Sm, Gd, Dy, Y, Ho, Er), above all similar stacking fault distributions with a gradation through the layers, as well as similar secondary phases, sizes and distributions. The macroscopic film structure of all investigated systems is fully dense and homogeneous when optimally grown. Despite that, very interesting and partly unique relations between Tc and c were found. While most of the systems form rather dense data clouds around their respective theoretical c-axis values for x = 0, the lattice parameters of SmBCO and DyBCO scatter over a considerably wider range, with a drastic decrease of Tc with increasing c in the case of SmBCO and only a marginal impact on Tc in the case of DyBCO. BHO nanocomposites show generally larger values and wider ranges of c, but mostly without a significant impact on Tc. The processing windows seem to follow a trend with the lowest optima and widest ranges of the growth temperatures close to the optimal RE ion size, i.e. Y3+. With increasing difference from it, the reproducibility of specific properties, such as Jc and Tc, decreases noticeably due to enhanced disorder on the atomic scale. In particular, the growth windows of the nanocomposites with small or large RE ions are very narrow and must therefore be considered more difficult for CC production due to reduced process stability. Further, no tendencies were observed that could be related solely to the RE ion size or weight. In particular, the non-lanthanide YBCO deviates from several of the observed trends. The LnBCO compounds, on the other hand, head towards similar limits of Tc,90 ≈ 95-96 K and Jc,sf(77 K) ≈ 6 MA cm−2 when fully optimised. The very small and large RE ions Yb and Nd require further optimisation beyond temperature and pO2 adaptation during the crystallisation step. Thereby, the substrate may play a crucial role for NdBCO, since temperatures beyond the stability of STO and LAO seem to be necessary, whereas YbBCO requires a re-formulation of the precursor solution or a modification of the pyrolysis before the crystallisation step can be addressed adequately.
Lower Extremity Near-infrared Spectroscopy After Popliteal Block For Orthopaedic Foot Surgery
Background: Noninvasive measurement of cutaneous tissue oxygenation using near-infrared spectroscopy (NIRS) has become common in perioperative care. Following institution of peripheral nerve blocks, neurovascular alterations in the blocked region have been described. Objective: The primary aim of this study was to assess the influence of a popliteal block on changes in regional oxygen saturation (SrO2), and the location of the most prominent changes. Method: We conducted a prospective randomised controlled trial. One hundred twenty patients who received a popliteal block for foot surgery were included. The popliteal block was performed under echographic guidance. The patients were randomised into 3 groups according to the location of the SrO2 electrodes on the legs. Bilateral SrO2 measurements were performed simultaneously. SrO2 in the operated leg and in the control leg was measured at baseline and 1, 5, 10, 15, and 30 minutes after the perineural injection. We quantified the evolution in SrO2 by calculating over time the differences in SrO2 values between the operated and control leg (ΔSrO2). Results: At 30 minutes, ΔSrO2 increased significantly (p<0.05) at the plantar side of the foot (11.3% ± 2.9%), above the ankle (4.9% ± 1.3%) and at the popliteal fossa (3.6% ± 1.2%). Conclusion: At 30 minutes after institution of the popliteal block, ΔSrO2 was most prominent at the plantar side of the foot as compared with measurements performed above the ankle or under the knee.
INTRODUCTION
Peripheral neural blockade is a widely used technique in perioperative care. Besides the desired analgesic effect, perineural infiltration of local anaesthetics induces a number of neurovascular changes. Clinically, patients experience a distinctive albeit subjective sensation of warmth and dilation of the superficial veins. Venodilation in the arm of patients who received an axillary block was demonstrated by echography [1], and alterations in the ipsilateral brachial artery after brachial plexus block were reported using pulsed-wave Doppler ultrasound [2]. Increased local blood flow and increased skin temperature occur as a result of sympathetic nerve blockade, and these neurovascular changes and the resulting local vasodilatation are associated with successful peripheral nerve blockade [3,4].
Noninvasive measurement of cutaneous tissue oxygenation using near-infrared spectroscopy (NIRS) has become common in cardiovascular and plastic surgery [5,6]. NIRS quantifies the differing absorption and reflection of near-infrared light by human tissues, which provides a tissue-oxygen saturation index [7].
What are the alterations in tissue oxygenation caused by regional anaesthesia and peripheral nerve blocks? Is it possible to detect these changes with the available NIRS devices? Tighe et al. demonstrated significant differences in regional oxygen saturation (SrO2) values in blocked vs. control limbs after cervical paravertebral and infraclavicular blocks in a small group of patients [8]. The primary aim of this study was to assess the influence of a popliteal block on changes in SrO2 as compared with the contralateral lower extremity.
METHODS
One hundred twenty patients between 18 and 65 years of age who received a popliteal block for foot surgery were included. All patients underwent hallux valgus repair with a similar technique, and surgery was performed by the same orthopaedic surgeon. The study was accepted by the local ethical committee (2011.55). All patients were enrolled after obtaining written informed consent. Exclusion criteria included contraindication to peripheral nerve block, history of peripheral nerve injury or neuropathy of the affected region, peripheral vascular disease or surgery, morbid obesity defined as body mass index (BMI) greater than 40, or infection or cutaneous lesion in the monitored region. The presence of sensory block in the peroneal and tibial nerve area was ascertained using the pinprick method 30 minutes after injection of local anaesthetic. If no full sensory block was confirmed, the patient was excluded from further analysis and the randomisation number was returned to the sealed card pool. The presence of motor block was not assessed before surgery. A successful block was defined as a complete sensory block of both branches of the sciatic popliteal nerve within 30 minutes and the absence of a response to surgical stimulation. All patients received general anaesthesia according to a standardised protocol, which did not allow the administration of opiates. Failure of surgical block was considered an exclusion criterion. The patients were randomised into 3 groups according to the location of the SrO2 electrodes on the legs. The electrodes were bilaterally applied 4 cm distal from the popliteal fossa (UK, n=40) in group 1, 10 cm above the ankle (AA, n=40) in group 2, and at the plantar side of the foot (FO, n=40) in group 3. SrO2 in the operated leg and in the control leg was measured at baseline (moment of performing the block) and 1, 5, 10, 15, and 30 minutes after the perineural injection. We quantified the evolution in SrO2 by calculating over time the differences in SrO2 values between the operated and control leg (ΔSrO2). ΔSrO2 expressed the absolute change in SrO2 in the leg where the popliteal block was performed compared with the non-operated leg.
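A minimal sketch of the ΔSrO2 computation just described, assuming paired readings at the fixed time points; the values below are hypothetical illustrations, not study data.

```python
# Hedged sketch (not the authors' code) of the per-patient
# delta-SrO2 time series from paired bilateral readings.
import pandas as pd

readings = pd.DataFrame({
    "minute":       [0, 1, 5, 10, 15, 30],
    "sro2_block":   [62, 63, 66, 69, 71, 73],   # %, hypothetical values
    "sro2_control": [64, 64, 65, 65, 65, 64],   # %, hypothetical values
})

# delta-SrO2 = SrO2(blocked leg) - SrO2(control leg) at each time point
readings["delta_sro2"] = readings["sro2_block"] - readings["sro2_control"]
print(readings)
```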
Statistical analyses were performed using Statistical Analysis Software (SAS) (version 9.2, Cary, NC, USA). Variables are reported as mean ± standard deviation (SD). The Wilcoxon paired sample test was used to compare the different variables between groups. The Fisher exact test was used for comparisons between categorical variables. All tests were performed two-tailed. Statistical significance was defined as p<0.05.
RESULTS
The sample size calculation was based on pilot data measuring a mean (SD) ΔSrO2 at 30 minutes at the plantar side of the foot of 11.2% (2.6%). Assuming a standard deviation of 2.6% in all groups, α = 5%, β = 5% and a significance level of 2%, a t-test based on the difference between independent means showed that the minimal required sample size was 38 patients in each group. A total of 129 patients were included in the study; 9 patients were excluded because of a failed sensory block, leaving 40 patients in each group. Demographic characteristics (gender, age and BMI) were well balanced between the three groups. The evolution of ΔSrO2 was evaluated in the three groups. ΔSrO2 significantly increased over time, F(1,87)=29.85, p<0.0001, but this linear increase levelled out at 30 minutes, F(1,264)=5.69, p=0.02 (as indicated by a small negative quadratic effect of time). More importantly, however, the ΔSrO2 differed between the three groups, F(2,264)=6.51, p=0.0017 (group × time interaction). Fig. (1) shows that, although the ΔSrO2 significantly increased over time in all groups, this increase was significantly stronger in the group with measurement of SrO2 on the plantar side of the foot. Post-hoc pairwise comparisons indeed confirmed that the linear increase over time was stronger in the group with SrO2 measurement electrodes on the plantar side of the foot. The two other groups showed a similar, statistically significant increase in ΔSrO2, which was less pronounced than in the group with plantar SrO2 measurement.
DISCUSSION
The NIRS sensor offers the ability to detect changes in tissue oxygen saturation. This value can change based on alterations in either oxygen consumption or oxygen delivery; NIRS is unable to distinguish between these causes. The exact mechanism of increased perfusion and the rise in SrO2 after peripheral neural blockade is uncertain. The neurovascular alterations after popliteal block could influence regional flow or peripheral oxygen consumption.
The NIRS probe emits light at several wavelengths in the 700 to 850 nm interval and measures the reflected light mainly from a predefined depth [9]. NIRS utilizes a narrower spectrum of wavelengths than pulse oximetry, which penetrates deeper into the tissue [10]. Complex physical models then allow the measurement of relative concentrations of oxy- and deoxyhaemoglobin [11]. Yet, given the large number of assumptions and approximations in the theoretical basis of NIRS, trends in NIRS parameters may be regarded as more robust than discrete values. This is the reason why we chose to work with the evolution of ΔSrO2 rather than with absolute values. The exact depth of penetration (and hence of monitoring) appears to be variable. Monitoring depth is technically limited by the requirement to use light whose energy does not damage tissues. The main determinants of the returned signal are the small vessels of the microcirculation. Variation in the thickness of subcutaneous fat can be an important source of variability. This suggests that differences in vessel density in the subcutaneous region can influence NIRS measurement, which could be another explanation for different NIRS values at the different areas [12]. It is, however, possible that the evolution of ΔSrO2 is less influenced by this phenomenon: vessel density is more likely to affect absolute SrO2 values than trends.
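The two-wavelength principle behind such devices can be illustrated with a hedged sketch of the modified Beer-Lambert inversion; the extinction coefficients, path length and attenuation changes below are assumed illustrative values, not constants of any specific NIRS device.

```python
# Illustrative sketch of two-wavelength NIRS: invert attenuation changes
# at ~730 and ~850 nm for oxy-/deoxyhaemoglobin concentration changes.
import numpy as np

# Approximate extinction coefficients [1/(mM*cm)]: rows = wavelengths,
# columns = [HbO2, Hb]; values are textbook-order approximations.
E = np.array([[0.39, 1.10],    # ~730 nm
              [1.06, 0.69]])   # ~850 nm

path_length = 20.0                   # effective optical path in cm (assumed)
delta_A = np.array([0.012, 0.020])   # measured attenuation changes (assumed)

# Solve delta_A = (E * L) @ delta_C for the concentration changes
delta_c = np.linalg.solve(E * path_length, delta_A)
hbo2, hb = delta_c
print(f"dHbO2 = {hbo2:.2e} mM, dHb = {hb:.2e} mM")
print(f"saturation-like index: {hbo2 / (hbo2 + hb):.2f}")
```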
In free flap surgery, NIRS is considered highly suitable for postoperative flap monitoring to detect vascular occlusion [13]. While in flap surgery the optimal location for placing the sensors is obvious, it remains unclear to what extent NIRS is applicable to detect the vascular effects of a popliteal block and where the sensors are best applied. We investigated the relationship between a popliteal block and subsequent changes in SrO2. After successful peripheral nerve blockade, several vascular changes occur as a result of the blockade of sympathetic nerve fibers [4]. The perfusion index, which is automatically calculated by pulse oximetry and provides an indication of peripheral perfusion at the sensor site (finger), has been shown to be a useful method for evaluating axillary or sciatic block in patients scheduled for limb surgery [14,15]. Sympatholysis following locoregional block also causes augmentation of SrO2 on the ipsilateral side.
By simultaneously monitoring SrO2 on the contralateral side and comparing those values with the changes of SrO2 on the interventional side, we were able to quantify the absolute difference in SrO2 attributable to the neurovascular effects of the popliteal block.
We created 3 groups corresponding to the locations of the NIRS electrodes. Sciatic nerve block results in anaesthesia of the entire lower limb below the knee, including both motor and sensory block, with the exception of a variable strip of skin on the medial leg and foot, which is the area of the saphenous nerve, a branch of the femoral nerve [16]. We placed one NIRS probe at the medial side of the ankle, an area that is normally not influenced by a popliteal block. At the popliteal level we placed another NIRS probe 4 cm beneath the articular line in the middle of the leg, knowing that sensory innervation at that level relies on the posterior cutaneous nerve or the sural nerve [17,18]. At the plantar level the NIRS probe was placed in the middle of the sole. This region is innervated only by the sciatic nerve and its two subsequent branches. Choosing these 3 NIRS probe locations based on the anatomy of the sensory innervation of the lower limb may be questioned, since NIRS would probably not be affected by a regional block that spared the innervating nerve. However, it may be asked whether changes in regional perfusion can also be detected in adjacent regions that are not directly affected by the regional block, or whether sympatholysis is confined to the blocked region. The sensory innervation at the location of the popliteal probe relies on the posterior cutaneous nerve, a branch of the parasacral plexus, or can be provided by the sural nerve. Sensory blockade was not tested at this level, and concerns may arise about these NIRS measurements as some patients may have been blocked there and others not. We also did not test the clinical effect of the popliteal block at the medial side of the ankle, but it is unlikely that this area of the saphenous nerve was blocked by the popliteal nerve block.
The ΔSrO2 increased significantly over time in all three groups but to a different extent: the effect of the popliteal block on the local neurovascular state, regional perfusion and subsequent SrO2 changes was more pronounced in the group with the plantar location of the NIRS probe. Given the extensive rise of ΔSrO2 in group 3, we concluded that the optimal location for the NIRS electrodes to establish a relation between popliteal block and SrO2 measurement is the plantar side of the foot. The two other groups showed a similar, statistically significant increase in ΔSrO2, but less pronounced than in the group with plantar SrO2 measurement. These changes in groups 1 and 2 cannot be fully explained, as the block was not tested at the location of the measurement probe. Hypothetically, statistical significance could arise if the patients with a blocked UK or AA region had pronounced changes while the patients with a non-blocked UK or AA region remained at baseline. However, there was a positive ΔSrO2 trend in all individual patients of group 1 and group 2. At 30 minutes, mean ΔSrO2 was 3.6% ± 1.2% for group 1 (UK) and 4.9% ± 1.3% for group 2 (AA). In addition, the observed increase in all individual patients suggests a relation between popliteal blockade (whether or not effective in the probed region) and NIRS alterations in group 1 and group 2.
Regarding the negative ΔSrO2 at time 0 in Fig. (1), we pose the following hypothesis: a blanket was the only measure taken for thermoprotection. The leg where the popliteal block was performed was not covered by a blanket and also had to be disinfected, which made it susceptible to mild temperature loss. SrO2 may have been affected by this discrete temperature difference compared with the SrO2 of the non-operated leg, explaining the higher SrO2 in the non-operated leg at time 0 during the performance of the regional block on the contralateral side. This effect could be more pronounced depending on the location: in terms of temperature loss, foot > ankle > popliteal fossa, and hence in terms of SrO2, foot < ankle < popliteal fossa. Following this assumption, ΔSrO2 at time 0 is indeed more negative in FO than in AA, and ΔSrO2 at time 0 in AA is more negative than in UK.
We conclude that there is a relation between popliteal block and ΔSrO2 when the NIRS probe is placed on the plantar side of the foot. This relation is interesting with regard to the positive effects of tissue oxygenation on wound healing and the prevention of chronic pain [19]. As such, while the clinical diagnostic value of NIRS during surgery is limited because foot surgery is conventionally performed under ischemia, our findings show that further research is needed to investigate the extent of the ΔSrO2 alteration, its diagnostic value, the level of sympathetic blockade and its relation with postoperative analgesia. Interestingly, our findings indicate that the optimal location to apply the NIRS sensors may not coincide with the sensory innervation zones. As such, if vasomotor effects are to be evaluated, the plantar side of the foot seems the most appropriate location. It is worthwhile to mention here that, while the pathophysiological mechanism of complex regional pain syndrome is multifactorial, disturbed vasomotor activity is known to contribute to the pathology. A popliteal block can interrupt the sensory (afferent) arc as well as the motor (efferent) arc of the autonomic reflex circuits. Prevention or disruption of this neurogenic inflammation cascade [20] may require adequate monitoring, for which properly applied NIRS sensors may be appropriate.
CONCLUSION
At 30 minutes after institution of the popliteal block, the positive ΔSrO2 was most prominent at the plantar side of the foot.
LIST OF ABBREVIATIONS
BMI = Body mass index
NIRS = Near-infrared spectroscopy
SrO2 = Regional oxygen saturation
Assessment of the temporal and seasonal variabilities in air quality in public spaces in Lagos, Nigeria and Yaoundé, Cameroon
Physical activity (PA) can reduce the risk of non-communicable diseases like heart disease.
Introduction
Air pollution is a major global environmental challenge, and the vast majority of the air pollution health burden occurs in the Global South, which includes Africa, Latin America, and most of Asia, including the Middle East [1]. In Africa, where routine air pollution monitoring is sparse and air quality awareness is poor, many are at risk of exposure to poor air quality by virtue of occupying public space. For example, physical activity is actively encouraged as a healthy behaviour to reduce the risk of non-communicable diseases (NCD) like heart disease and diabetes, the burden of which is rising steeply in many low- and middle-income contexts [2]. But for many, embracing this behaviour in the most equitable manner (in public space) can paradoxically increase disease risk through exposure to other harmful factors such as air pollution, environmental waste, injury and safety risks while engaging in physical activity in spaces that are not conducive to these behaviours [3]. This is because physical activity increases air pollution intake: exercise-induced higher minute ventilation raises the inhaled dose of air pollutants, and a higher fraction of the inhaled particles is deposited in the lungs.
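As a back-of-envelope illustration of this mechanism, the inhaled dose scales roughly as ambient concentration × minute ventilation × duration; the ventilation rates below are typical textbook assumptions, not values measured in this study.

```python
# Hedged sketch: inhaled PM2.5 dose at rest vs during exercise.
PM25 = 28.0                    # ambient PM2.5 in ug/m3 (similar to levels reported below)
VE_REST, VE_RUN = 8.0, 50.0    # minute ventilation in L/min (assumed typical values)
DURATION = 60.0                # minutes of activity

def inhaled_dose_ug(conc_ug_m3, ve_l_min, minutes):
    """Inhaled PM2.5 mass in micrograms (1 m3 = 1000 L)."""
    return conc_ug_m3 * ve_l_min * minutes / 1000.0

print(f"rest:    {inhaled_dose_ug(PM25, VE_REST, DURATION):.1f} ug")
print(f"running: {inhaled_dose_ug(PM25, VE_RUN, DURATION):.1f} ug")
```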
The World Health Organization (WHO) has proposed a "six-dimensional view of health" that includes physical health (healthy diet and balanced nutrition), psychological health, intellective health, mental health, social health and environmental health [4]. This holistic concept highlights physical inactivity and exposure to air pollution as important risk factors for death and disease globally. But poor access to public spaces with safe air quality means that air pollution affects health both directly, by increasing the risk of several diseases, and indirectly, by acting as a barrier to physical activity, a healthy behaviour that improves physical and mental health.
Studies on air pollution exposure monitoring have shown how locations and activities can explain variations in exposure [5]. Sources of air pollutants include domestic wood and charcoal burning, industrial combustion, road transport, and the use of solvents and industrial processes. Criteria air quality pollutants include carbon monoxide (CO), nitrogen dioxide (NO2), ozone (O3), sulphur dioxide (SO2), and particulate matter PM1, PM2.5 and PM10. These pollutants greatly impact health, as PM can be transported around the body, with greater impacts in vulnerable groups like older persons, pregnant women, children and individuals with underlying conditions (asthma, chronic obstructive pulmonary disease (COPD) [6] and COVID-19 [7]). Air pollution has been reported to lead to stunted growth, reduced lung function, increased risk of developing asthma, acute lower respiratory infections, behavioural disorders and impaired mental development in children [8]. Other documented health effects include low birth weight, premature birth and infant mortality for pregnant women and their children, as well as childhood cancer and increased risk of coronary heart disease, non-insulin-dependent diabetes, hypertension and stroke in adulthood [9,10].
Ambient air quality, while dependent on emissions, is also strongly influenced by meteorological conditions, including atmospheric circulations, weather systems, the structure of the atmospheric boundary layer, and the corresponding meteorological parameters [11]. The total amount of pollutants emitted in a particular period of time is usually stable [12-14], with observed variabilities in pollutant concentrations due to the impact of meteorology and loss processes, which help modulate concentrations of ambient air pollutants [15]. Seasonal effects such as wet/dry seasons, as well as long-range transport, are evident particularly for PM during the Harmattan haze affecting sub-Saharan Africa. In addition, the time of day and the day of the week have been reported to play crucial roles in exposure [16].
According to a report by UNICEF [17], despite deaths from indoor air pollution declining in Africa due to cleaner and more efficient cooking methods, mortality from outdoor air pollution is increasing. The report also noted that only 6% of children in Africa live near reliable, ground-level monitoring stations that provide real-time data on the quality of the air they are breathing, compared to about 72% of children across Europe and North America. Increasing reliable, local, ground-level measurements would greatly aid effective responses to this poorly understood direct and indirect threat to health across the continent. In this study, we sought to investigate air quality in public spaces used for physical activity and to understand how air pollution varied daily, weekly and across seasons over a 12-month period. The novelty of this study includes: 1) using evidence-driven data to highlight the risk associated with air pollution when engaging in physical activity in both cities under study, and 2) long-term monitoring of air pollution in Yaoundé (or any other city in Cameroon) for multiple criteria species, including gases and PM as well as CO2; previous studies have covered only very short durations (weeks to a couple of months) and have mostly focused on PM [18,19].
Materials and Methods
This study was conducted as part of a larger study to understand the health risks encountered by people informally appropriating public spaces for physical activity (the ALPhA study) in Lagos, Nigeria and Yaoundé, Cameroon [20].
Climatology of Lagos and Yaoundé
Climate data for the two cities were obtained from an online resource [21]. The climatic data are based on 30 years of simulated hourly weather data. Figure 1 shows the climatology of both cities, presented as monthly averages. Yaoundé experiences relatively more precipitation (>100 mm in the main wet season) than Lagos, with the most intense rainfall in September to October, compared to Lagos where the main rainy season is earlier (June-July). Lagos is relatively warmer than Yaoundé, with mean daily maximum temperatures ranging from 27 ºC to 34 ºC compared with 23 ºC to 29 ºC. Conversely, Yaoundé is characterised by cooler nights than Lagos, consistent with the topography of the two cities: Yaoundé is 726 metres above sea level (ASL), surrounded by seven hills, compared to Lagos, a coastal city approximately 40 metres ASL.
Urbanisation Contexts
Lagos State has a very high concentration of commercial, industrial and educational activities, resulting in urbanisation, overpopulation and traffic congestion. The state accounts for about 30% of all traffic in Nigeria [22,23], a significant challenge given the limited road infrastructure and development. In addition, 70% of Nigeria's industrial and commercial activities are in the Lagos region, making it the commercial nerve centre and the most populous state in the country [24]. It has been estimated that the population of Lagos city (24.6 million in 2015 [25]) is increasing ten times faster than that of New York or Los Angeles [26].
Yaoundé, the political capital of Cameroon, is an industrial and commercial city. With a population of 4.5 million [27], large as a proportion of the national population, Yaoundé, while smaller, shares many air-pollution-emitting activities with Lagos. Anthropogenic factors that contribute to air pollution in megacities like Lagos and Yaoundé include economic development, urbanisation, energy consumption, transportation, dumpsites, open incinerators, power generators, domestic heating, industrial and agricultural activities, and rapid population growth.
Public spaces appropriated for physical activity
As part of the ALPhA study, members of the public were recruited as citizen scientists and invited to share information on the types of public spaces used for physical activity. These data provided insight into the typologies of public spaces appropriated for physical activity, which included vacant plots of land, areas under and next to bridges, parks, the side of the road, and roundabouts. Such public spaces pose very high risks to those using them for physical activity due to the proximity to vehicular emissions. These findings informed the siting of the air quality sensors, with one site selected in each city to capture public spaces regularly used for physical activity.
Instrumentation for air quality observation
One commercial low-cost air quality device (AQMesh) was deployed in each city. Parameters monitored by the nodes include CO, NO, NO2, O3, CO2 and particulate matter (PM2.5 and PM10). Each node also recorded meteorological parameters, including relative humidity (RH), temperature and ambient pressure, all at 15-minute average resolution.
AQMesh uses electrochemical sensors to detect the toxic gas species. These work on the principle of amperometry, whereby the current generated by the target gas species is proportional to the concentration of that species. A non-dispersive infrared (NDIR) sensor is used for the detection of CO2: the concentration is inferred from the ratio of the intensity of the transmitted light (which is attenuated by the absorbing gas) to a reference intensity measured in the absence of absorption. AQMesh uses an optical particle counter (OPC) to measure PM. This works on the principle of Mie scattering, which allows the number concentrations for different size ranges to be determined. By making assumptions about the particle density and refractive indices, the mass concentrations are calculated for the different PM size fractions (i.e. PM2.5 and PM10).
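A simplified sketch of the number-to-mass step an OPC performs may clarify this last point: spherical particles of an assumed density are summed over size bins up to the cut-off. Bin edges, counts and density below are illustrative assumptions, not AQMesh internals.

```python
# Hedged sketch of converting OPC bin counts to a PM2.5 mass concentration.
import numpy as np

bin_edges_um = np.array([0.35, 0.7, 1.1, 1.7, 2.5])  # bin edges in micrometres
counts_per_cm3 = np.array([50.0, 10.0, 2.0, 0.5])    # number conc. per bin (assumed)
density = 1.65e3                                     # particle density kg/m3 (assumed)

mid_d = 0.5 * (bin_edges_um[:-1] + bin_edges_um[1:]) * 1e-6   # bin midpoints in m
particle_mass_kg = density * np.pi / 6.0 * mid_d ** 3         # sphere mass per particle
mass_per_m3 = np.sum(counts_per_cm3 * 1e6 * particle_mass_kg) # counts/cm3 -> counts/m3
print(f"PM2.5 ~ {mass_per_m3 * 1e9:.1f} ug/m3")               # kg/m3 -> ug/m3
```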
Air quality node location
The fixed station was sited along Admiralty Way, Victoria Island (AW-VI) (6°26'53.40"N, 3°28'21.50"E) in Lagos, and in the Melen Mini-Ferme (MM-F) area of Yaoundé (3°51'56.40"N, 11°29'46.61"E), as depicted in Figure 2. Based on a physical survey of the site and expert knowledge from local partners, the site in Lagos can be described as a mixed environment, with emissions from road traffic, residential activities and small local business operations expected to dominate (Figure 3). The low-cost sensor (LCS) was installed at a height of ~2.8 m in AW-VI, Lagos and ~8 m in MM-F, Yaoundé, in secured premises; installation height was mainly determined by security and access to power. Observations were made for a year, from 1 June 2021 to 31 May 2022, to capture the long-term trend and seasonal variabilities at the two locations. Descriptive daily logs of perceived weather conditions, such as the reduced visibility in the later months of 2021 and in 2022 that often coincided with Saharan dust episodes, were also recorded qualitatively by the project team in each city. This information was used to further interpret the air trajectory analysis presented in the Results section. The Openair package in R [28] was used to create the trend analyses and the back trajectories, including the Concentration Weighted Trajectory (CWT), based on the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model. We also used the UK Met Office's Numerical Atmospheric-dispersion Modelling Environment, NAME [29], run in backward mode to investigate the Saharan dust episodes.
Characterisation of air quality nodes
Due to the lack of reference-grade instrumentation at the study locations, prior to shipping the two AQMesh nodes were characterised by conducting a co-location study (between 13 April and 2 May 2021) at the ambient urban air quality site located at the Department of Chemistry, University of Cambridge, UK. Figure 4 presents the comparisons between the two devices for some parameters. Both devices showed excellent precision and reproducibility (slope ~1 and r > 0.8) for all parameters at the urban background station (see Figure S1 for the other parameters). We derived the absolute calibration parameters for the LCS nodes by comparing the measured parameters with the reference observations using a subset of the data from 13 April to 21 April 2021 during the co-location period. These calibration factors were used for all the subsequent analyses presented in this study. We validated the calibrated data using the second half of the co-location study (22 April to 2 May 2021). The results showed that a generally good correspondence was observed between the calibrated data and the reference observations when compared with the raw LCS observations (Figure 5). The gaps in the reference data represent periods when the corresponding reference analysers were not operational.
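A minimal sketch of this derive-then-apply calibration approach, assuming a simple linear (gain/offset) model; the arrays below stand in for the 15-minute averaged series and are not the actual co-location data.

```python
# Hedged sketch: fit a gain/offset against the reference on the first
# co-location subset, then apply it to subsequent LCS field readings.
import numpy as np

def fit_calibration(lcs, reference):
    """Least-squares gain and offset mapping raw LCS data onto the reference."""
    gain, offset = np.polyfit(lcs, reference, 1)
    return gain, offset

def apply_calibration(lcs, gain, offset):
    return gain * np.asarray(lcs) + offset

# Hypothetical co-location subset (13-21 April) used for fitting:
raw = np.array([310.0, 355.0, 402.0, 460.0, 515.0])   # raw LCS CO, ppb
ref = np.array([295.0, 348.0, 401.0, 455.0, 512.0])   # reference CO, ppb
g, o = fit_calibration(raw, ref)

# ...then applied to all subsequent field observations:
field_raw = np.array([600.0, 740.0, 880.0])
print(apply_calibration(field_raw, g, o))
```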
Results and Discussion
The results from the monitoring in the two cities are presented separately in the context of the sampling environment. Due to the large amount of missing data in Lagos (as a result of frequent power outages) in the later stages of the study, the intercomparison between the two locations was based on periods when measurements were available at both locations.
Gaseous and particulate observations in Lagos and Yaoundé
The average data capture at MM-F, Yaoundé and AW-VI, Lagos over the study period (1 June 2021 to 31 May 2022) was approximately 90% and 50%, respectively. Levels of the gaseous parameters at both locations were influenced by local emissions and by the meteorology. With the exception of NO and CO2, the magnitudes of the observations in Yaoundé were generally higher than those in Lagos throughout the measurement period (Figure 6 and Figure S4). A significant drop was observed in combustion-related pollutants (CO, NO, NO2) in Yaoundé between late July and early September, coinciding with the onset of the second wet season (Figure 1). Prolonged precipitation events can affect pollution levels due to (a) wet deposition of pollutants (washout) and (b) reduction in local emissions due to a decrease in outdoor activities. A similar effect was also observed, albeit over a short period (days), during the dry season in Yaoundé: the sudden drop in pollution levels in January (11-17 January 2022) was due to intense rainfall during that period. Trends in the particulate matter (PM2.5 and PM10) observations were similar between the two locations, with lower levels recorded during the wet seasons compared with the significantly higher readings in the dry seasons, most noticeably between December 2021 and February 2022 when there was a substantial enhancement in the PM background level. This also coincided with a drop in the relative humidity (RH) levels at both locations (Figure 7 and Figure S5), likely linked to the influence of the Harmattan haze, the dry and dusty northeasterly trade wind which blows from the Sahara over West Africa. A summary of the statistics of the measured parameters at the two locations for the duration of the deployment is presented in Table S1. The average concentrations of PM2.5 and PM10 recorded at the two locations were similar; the main difference in the PM statistics is captured by the standard deviation, a measure of the variability in local emissions (MM-F, Yaoundé ~3.5 times the value of AW-VI, Lagos). This is expected given the varied local emission sources noted at the two sites (see Section 2.5). Much of the average PM statistics is driven by the high-pollution events associated with the Harmattan haze (more details in Section 3.4). The annual mean PM2.5 from our study (26 µg/m3) is close to the lower end of the annual concentration range (30-97 µg/m3) reported in a similar study done in Lagos [30]. The lowest annual concentration in that World Bank study was observed at a coastal site, very similar to the coastal site (AW-VI, Lagos) in our study. The average NO mixing ratio was similar between the sites (~22 ppb), with both locations also showing comparable standard deviations. The mean NO2 recorded in Yaoundé was about 1.5 times lower than the value observed in Lagos (41 ppb). In contrast, the mean CO mixing ratio in Yaoundé was approximately twice that observed in Lagos. All of these differences can be related to the characteristics of the study sites, as previously discussed. We observed a similar average CO2 mixing ratio at the two locations (428 ppm), which was above the 2022 global surface average of 417.06 ppm [31].
Meteorological observations in Lagos and Yaoundé
The pressure readings in Lagos were higher than those in Yaoundé. This is expected because the latter is situated at a significantly higher altitude. The relative pressure observations can be described using the hydrostatic relationship, represented by equation (1) for two altitudes, which predicts a mean pressure for Yaoundé of 936 mBar, within about 1% of the pressure recorded (924 mBar).
P_Yaoundé = P_Lagos · exp(−(Z_Yaoundé − Z_Lagos)/H)    (1)

where P represents the pressure (mean pressure for Lagos, P_Lagos = 1012 mBar), H is the scale height (assumed to be 8,800 m at 300 K), and Z are the altitudes of the locations in metres.
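A quick numerical check of equation (1) with the quoted values reproduces the 936 mBar estimate:

```python
# Verify the hydrostatic estimate using the values quoted in the text.
import math

P_LAGOS, H = 1012.0, 8800.0        # mBar, m
Z_LAGOS, Z_YAOUNDE = 40.0, 726.0   # m above sea level

p_yaounde = P_LAGOS * math.exp(-(Z_YAOUNDE - Z_LAGOS) / H)
print(f"predicted P_Yaounde = {p_yaounde:.0f} mBar")  # ~936 mBar vs 924 measured
```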
Although the average RH readings at both locations were similar (~84%, Table S1) during the measurement period, the diel profiles were remarkably different. The night-day RH range was larger in Yaoundé (68-92%) compared with Lagos (78-90%), an indication of the larger temperature range observed in the former (20-27 ºC), as shown in Figure 8. These observations are expected for the city of Yaoundé, which is situated at a relatively high altitude and therefore experiences much colder nights than coastal Lagos (Figure 1). The summary statistics (Table S1) show that Lagos was on average 4 ºC warmer than Yaoundé even though the two sites have similar average RH values. This could be partly because the relatively higher night-time RH values observed in Yaoundé are compensated for by the relatively lower daytime values when compared with the profiles observed in Lagos (Figure 8). An additional reason could be the impact of the Harmattan haze period, when the RH readings were very similar at the two locations (Figure S2). As expected, the pressure reading in Yaoundé was on average ~90 mBar lower than in Lagos.
Temporal comparison of gaseous and particulate observations in Lagos and Yaoundé
We compared the temporal trends in both cities to gain insights into the potential drivers of the pollutant profiles observed during our study. To avoid bias, we only considered periods where data were available at both sites.
Temporal trend analysis
A comparison of the temporal trends at the two sites shows that CO at the Yaoundé site was higher than at the Lagos site (mean value 873 ppb compared with 564 ppb). The pattern was similar for NO, although the difference was not as marked (28 ppb relative to 21 ppb), as shown in Figure 9. The emission profiles were unique to the two sites even though both study locations can be characterised as mixed-use urban environments. A distinct morning and evening road-traffic rush-hour signal was detected in CO around 0600 and 1800 UTC (for about an hour) in Lagos but not in Yaoundé, where elevated levels were maintained from 0500 UTC until 2000 UTC. Experiential knowledge from the project team revealed that commercial activities at the Yaoundé site extend well into the night, unlike Lagos, which is mainly dominated by traffic emissions that tail off towards the end of the day. The difference in magnitude can be explained by the composition of the vehicle fleets and the volume of traffic at the two locations. While surveying the installation sites, a larger number of old vehicles was observed on the road in the vicinity of the study location in Yaoundé compared with Lagos, and the Yaoundé site was very close to a busy junction even though sampling was at a height of ~8 metres compared with 2.8 metres in Lagos.
Although the weekday NO measurements were similar between the two locations (daytime peaks of 60 ppb), the main difference occurred at weekends. NO levels in Lagos were significantly lower than in Yaoundé, particularly on Sundays, when the mean concentration in Lagos was less than 10 ppb and dominated by night-time emissions likely linked to residential sources. Monthly averages showed reduced levels during the wet seasons at both locations. Unlike the gas species, the mean PM2.5 concentrations during the measurement period (June 2021 to May 2022) were very similar, 26 µg/m3 (Lagos) and 28 µg/m3 (Yaoundé), as presented in Figure 10. The main reason for this is the contribution associated with long-range transport of PM during the dry season due to the Harmattan haze episode (accompanied by a drop in RH and an increase in the day-night range, see Figure S2). Excluding the haze-related observations (Figure 10 (b)), we noted that PM2.5 levels in Yaoundé (24 µg/m3) were slightly higher than in Lagos (19 µg/m3). The PM2.5 diel profiles differed at the two locations. Levels tended to peak in the early hours in Lagos, possibly due to a mixture of local emissions and the evolution of the nocturnal boundary layer; in Yaoundé the early-morning profile was similar, but with a distinct evening maximum around 1800 UTC. Unlike the gaseous pollutants, obvious weekday-weekend distinctions in PM levels were not observed, indicating that the PM might be dominated by non-local sources, for instance transboundary pollution events.
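A sketch of the aggregation behind these diel, day-of-week and monthly profiles, assuming a pandas time series of 15-minute averages indexed by timestamp; the column and file names are illustrative, not the project's actual data files.

```python
# Hedged sketch of the temporal aggregation behind Figures 9-10.
import pandas as pd

def temporal_profiles(series: pd.Series) -> dict:
    """Return diel, day-of-week and monthly mean profiles of a time series."""
    return {
        "diel": series.groupby(series.index.hour).mean(),              # hours 0-23
        "day_of_week": series.groupby(series.index.dayofweek).mean(),  # Mon = 0
        "monthly": series.groupby(series.index.month).mean(),          # months 1-12
    }

# Usage with a hypothetical CO series:
# co = pd.read_csv("aqmesh_co.csv", index_col=0, parse_dates=True)["co_ppb"]
# profiles = temporal_profiles(co)
# print(profiles["diel"])
```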
Impact of meteorology on PM observations
Observed long-term PM concentrations at the two locations were impacted by changes in meteorological conditions, particularly those related to the Harmattan haze. By comparison, a south and southwest sea breeze dominates long-range transport at other times of the year. Figure 11 shows the back trajectories plotted as a smoothed average of PM10 mass concentration, presented as quarterly groups starting March 2021 for both locations. Elevated PM levels were observed between December 2021 and February 2022 (December-January-February = DJF) when the air mass originated from the north-easterly region. The similar origin of the air mass is consistent with the similarities in the RH diel profiles between the two locations, which tended to differ significantly outside of haze events (Figure S2). This interpretation also agrees with the diary logs of ambient conditions recorded by researchers during this period, which was described as generally foggy and hazy at both locations. To further verify the impact and origin of the high-PM episodes, we ran a back trajectory model using the NAME model [29]. Two dates were chosen for the model runs: (I) 4 January 2022, when we noticed elevated PM values, and (II) 9 July 2021, when the PM concentrations were less impacted by long-range transport. The results of the January runs at both locations show that the history of the air over the preceding 6 days extended through the northeast of both countries, traversing the Sahel regions, and would have been impacted by natural mineral dust. In contrast, on typical no-haze days (Figure 12 (c-d)), the air originates from the coast, travelling across the Gulf of Guinea predominantly from the south and southwest of the locations. Both the CWT and the NAME back trajectory runs show evidence of long-range transport adding to the local PM and leading to high PM concentrations. Although only two model runs are presented here, previous studies [32] have shown that the three widely used Lagrangian particle dispersion models (LPDM), Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT), Stochastic Time-Inverted Lagrangian Transport (STILT) and Flexible Particle (FLEXPART), have comparable performance, and we expect results from these other models to be similar when applied to our study locations. These elevated PM episodes in this region of the continent are consistent with the atmospheric phenomenon of the Harmattan haze, when dust-dominated particles originating from the Sahara desert are blown across the western coast of the continent. Although the air trajectory is from the coast over 90% of the time during our study (Figure S3), the smaller fraction of time during which it switches to north-easterly long-range transport is significant enough to impact the mean daily exposure, as presented in Figure 12. Excluding the Harmattan haze period (shaded region in Figure 13), the daily PM2.5 was generally below the 2015 WHO guideline (25 µg/m3) at AW-VI, Lagos and MM-F, Yaoundé. However, both locations would exceed the 2021 WHO guideline of 15 µg/m3 if this standard were used. In contrast, observed daily PM10 levels were generally above the guidelines at both study locations even when the haze periods were excluded; PM10 levels only approached the 2015 guideline during the wet season. Our study shows that the haze episodes resulted in PM loading that was more than five times the recommended exposure levels based on the WHO 2021 guidelines. This is of particular interest for policy implementation because the elevated PM during these episodes is driven by natural emissions. Whilst the mass PM loading during the haze episodes was significantly high, these particles may not necessarily be as toxic as the local emissions (which can sometimes be dominated by heavy metals [30]) because the particles associated with the haze episodes are mostly composed of mineral dust. However, such high levels can still cause severe irritation of the respiratory system, particularly in vulnerable groups, and can also worsen conditions for individuals with underlying ailments such as asthma. The daily mean PM2.5 and PM10 from our study both fall within the range of daily average concentrations reported in other studies in sub-Saharan Africa, which ranged between 21-49.4 µg/m3 for PM2.5 and 49-534.7 µg/m3 for PM10. Note that most of these studies are for locations with different characteristics and generally shorter monitoring durations [18,19]. In addition, most of those study periods do not necessarily fall within the Harmattan haze season, which we found contributed significantly to the high daily concentrations recorded in our study.
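A sketch of the guideline comparison underlying Figure 13, counting days on which the daily-mean PM2.5 exceeds the 2015 (25 µg/m3) and 2021 (15 µg/m3) WHO values; the input series and file name are hypothetical placeholders.

```python
# Hedged sketch of WHO daily PM2.5 guideline exceedance counting.
import pandas as pd

WHO_2015, WHO_2021 = 25.0, 15.0  # daily PM2.5 guidelines, ug/m3

def exceedance_days(pm25: pd.Series) -> pd.Series:
    """Count days whose daily-mean PM2.5 exceeds each WHO guideline."""
    daily = pm25.resample("D").mean()
    return pd.Series({
        "days_observed": daily.notna().sum(),
        "over_2015_guideline": (daily > WHO_2015).sum(),
        "over_2021_guideline": (daily > WHO_2021).sum(),
    })

# Usage:
# pm25 = pd.read_csv("aqmesh_pm25.csv", index_col=0, parse_dates=True)["pm25"]
# print(exceedance_days(pm25))
```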
Implications of the observations for leisure-time physical activity
Appropriation of public spaces for leisure-time physical activity is one of the ways city dwellers can keep fit and improve their wellbeing. In doing this, they may try to avoid obvious visible dangers and safety issues. However, a lack of awareness of the negative impact of air pollution could negate the benefits of engaging in leisure-time physical activity. Our temporal analysis of long-term observations at two typical outdoor locations in Lagos and Yaoundé suggests that careful consideration of the time of day, day of week and time of year of exercise would reduce exposure risk. Weekends and periods outside rush hour on most days tended to have the best air quality in both cities and so would be most conducive to physical activity. A significant reduction in ambient pollution is associated with the wet season, so utilising sheltered outdoor spaces during this period would also maximise the health benefits of exercise. The Harmattan period poses a conundrum for public health, with tension between the consistency of public health messaging that encourages physical activity and minimising harm from air pollution exposure when PM levels are highest. This would require evidence-informed public health interventions with tailored messaging for different population groups: for example, early warning systems to encourage indoor physical activity, notifying the most vulnerable to avoid exercising outdoors when PM levels are over a particular threshold and to use PM nose masks if they need to go outdoors. Beyond messaging, urban design interventions could be explored to increase green infrastructure, including encouraging non-motorised transport to improve air quality and provide safer exercise routes, as well as providing more accessible and free spaces for physical activity indoors when air pollution is highest. This evidence is also critical to drive increased demand for action on air pollution. Our findings also suggest that stronger regulations are needed in both cities to reduce emissions from vehicles.
Strengths and Limitations
This study extends the research providing long-term air quality data in sub-Saharan Africa and is one of the first studies with multiple air pollutant measurements in Yaoundé. It should be noted that the air quality observations in this study were limited to a single site in each city. Nonetheless, the study generated extended air quality data that were used as evidence to engage relevant stakeholders and create community awareness of the importance of air pollution measurement in both cities. Another caveat was the low data capture at one of the locations (Lagos) due to frequent power failures, although we accounted for this in our inter-site comparisons. This experience highlights the importance of using devices that are tailored to a given context; in this case, the need for devices that can be powered by solar energy to reduce data loss.
Conclusions
We present observations of air pollution in two major African cities (Lagos, Nigeria and Yaoundé, Cameroon) over a 12-month period. We explored the effects of meteorology, particularly long-range transport, and of season on pollution levels. While combustion-dominated pollutants are strongly influenced by local emissions, we noted that the average PM levels in our study period were dominated by haze events during the dry season. Analysis of the temporal profiles of gaseous pollutants gives insight into the local drivers of the observations in both cities, which varied from more sustained high pollution levels at the Yaoundé site to rush-hour patterns related to traffic emissions, particularly on weekdays, in Lagos. We explored the implications of our results for leisure-time physical activity in public space in both locations. Our findings highlight the importance of continuous air quality monitoring to inform public health messaging, shape urban design and protect health for all.

Informed Consent Statement: Not applicable.
Figure 1. Simulated climatology of the two cities, showing monthly precipitation and temperature. (a) Yaoundé, Cameroon and (b) Lagos, Nigeria. Note that the primary and secondary axes in (a) and (b) differ.

Figure 2. Map showing the locations of air quality monitoring in the two cities, including a high-resolution image of the two sites.

Figure 3. Images of the study locations and the installed low-cost sensor air quality device. (a) Melen Mini-Ferme area, Yaoundé, Cameroon and (b) Admiralty Way, VI, Lagos, Nigeria.

Figure 4. Time series and scatter plots of CO, O3, PM2.5, RH and CO2 for the two AQMesh nodes (S1 and S2) during the co-location trial at the urban background station in Cambridge, UK. Statistics shown inset in the time series are for S1 relative to S2.

Figure 5. Comparison of reference and calibrated AQMesh data for CO, NO2, PM2.5 and CO2 during the co-location trial at the urban background station in Cambridge, UK. (a) raw dataset and (b) calibrated dataset.

Figure 6. Time series of daily CO, NO, NO2, O3 and CO2 observations from June 2021 to May 2022. (a) MM-F, Yaoundé, Cameroon and (b) AW-VI, Lagos, Nigeria. Gaps in data are due to power outages at the sites.

Figure 8. Diel plot of all the temperature and relative humidity (RH) observations from May 2021 to May 2022 in Yaoundé (blue) and Lagos (red).

Figure 9. Temporal variation as day-of-week diel, average diel, monthly averages and day-of-week averages for the entire deployment period at the two locations. (a) CO and (b) NO.

Figure 10. Temporal variation of PM2.5 as day-of-week diel, average diel, monthly averages and day-of-week averages at the two locations. (a) For the entire deployment period and (b) excluding the Harmattan haze episodes. Note that a similar pattern is observed for PM10.

Figure 12. Maps showing the output of the NAME back trajectory run over six days for a daily continuous particle release of 2000 units/hour at 1 g/s at 0.1 × 0.1 grid resolution at the two locations (red star in maps). (a) Trajectory run for Lagos covering 30 December 2021-4 January 2022, (b) trajectory run for Yaoundé covering 30 December 2021-4 January 2022, (c) trajectory run for Lagos covering 4-9 July 2021 and (d) trajectory run for Yaoundé covering 4-9 July 2021.

Figure 13. Daily observed average PM2.5 and PM10 in relation to the 2015 and 2021 daily WHO air quality guidelines at the two study locations for the duration of the campaign.

Figure S1. Time series and scatter plots of NO, NO2, PM10, temperature and pressure for the two AQMesh nodes during the co-location trial at the urban background station in Cambridge, UK. Statistics shown inset in the time series are for S1 relative to S2.

Figure S2. Diel profiles at AW-VI, Lagos and MM-F, Yaoundé for a 4-day period during haze and non-haze episodes.

Figure S3. Map showing the 6-cluster solution to back trajectories for AW-VI, Lagos, Nigeria (pink bordered area) for the duration of the campaign (2021-2022). The numbers represent the percentage mean for each cluster relative to the overall trajectory.

Figure S4. Time series of 15-minute CO, NO, NO2, O3 and CO2 observations from June 2021 to May 2022. (a) MM-F, Yaoundé, Cameroon and (b) AW-VI, Lagos, Nigeria. Gaps in data are due to power outages at the sites.
RECYCLING AND ITS USE IN CONCRETE WASTE PROCESSING BY HIGH-SPEED MILLING
This article discusses the possibility of recycling concrete waste using the high-speed milling method. The product of milling is micronized old concrete. The old concrete used here was obtained by crushing structural concrete that had served in the construction of a supporting column. Two levels of the milling process were used to recycle the old concrete. The main use of the waste is the possibility of partially replacing the commonly used binder and microfillers in concrete. For this reason, properties such as particle size distribution, dynamic modulus of elasticity, flexural strength and compressive strength were observed. The aim is to replace as much cement as possible while maintaining the mechanical properties.
Introduction
The amount of waste produced increases with the number of new concrete structures built each year. Most concrete waste is recycled; approximately 10-20 wt.% of old concrete is stored in landfills [1]. This landfilled waste is largely contaminated and cannot be used. The most common way of recycling is to reuse old concrete as aggregate in new concrete. First, the reinforcement is removed, and then the concrete is crushed into fractions. New waste is generated during crushing, composed of particles 0-1 mm in size. This fine fraction is not used in new concrete because it increases the water demand of the concrete mix and thus also increases shrinkage and reduces the mechanical properties [2-4]. The waste generated during recycling contains most of the old cement matrix. This is already hydrated cement, which still contains a portion of unhydrated clinker, because during hydration the water does not reach the center of the clinker grains and hydration stops there. The amount of unhydrated clinker in the old matrix is about 10% [5] and depends directly on the age, use, and quality of the concrete. Using the high-speed milling method, these unhydrated clinker cores can be exposed and mechanically activated. Recycled concrete can then act as a microfiller and binder in a future composite [6-8]. High-speed milling uses a principle that has been known for thousands of years: the rotation of friction elements between which the milled material passes. This process has been enhanced by the use of patented friction elements (teeth, pins) at LAVARIS Ltd. and by a high milling speed [9]. However, the process is very energy-intensive and thus creates an economic burden for recycling. For this reason, it is necessary to optimize the milling process and thereby improve the economics of recycling by high-speed milling [10].
Materials and samples
The experimental work deals with cement pastes in which part of the cement was replaced with micronized old concrete powder. The micronized old concrete was produced by milling old structural concrete that had been used for the load-bearing columns of an old engine factory. Two levels of milling were used to recycle the old concrete. For the experiment, the 0-1 mm fraction was used, which contained a large amount of the old cement matrix (OC). At the first level of the milling process, patented teeth with a diameter of 400 mm were used as the milling element. The result was micronized old concrete (MOC A), which was sampled at two places: directly behind the mill (MOC A_1) or behind the filter (MOC A_2). At the second level of the milling process, patented pins with a diameter of 300 mm were used as the milling element. The result was a finer micronized old concrete powder (MOC B), which was, as in the previous case, sampled from the recycling line at two places: behind the mill (MOC B_1) or behind the filter (MOC B_2). The sampling points and the entire recycling line can be seen in Figure 1. The individual grading curves are shown in Figure 2. It is clear from the results that the recycled MOC A_2, MOC B_1 and MOC B_2 are finer than the cement used. This effect can also be seen in Table 1, where these three micronized old concretes had the highest specific surface area and therefore the highest potential future activity.
The cement pastes were made up of 70 wt.% of Portland cement and 30 wt.% of recycled material (micronized or non-milled old concrete). The water-binder ratio was set to w/b = 0.4 in all cases (Table 2). Six beams of 40 × 40 × 160 mm were produced from each mixture. They were stored in a water bath at 100% humidity and an air temperature of 23 ± 3 °C for 28 days. The grain size affected the workability of the mixtures (Table 2). The workability was measured by the cone spill method after 15 impulses. The cone spill was not measured for the mixture with non-milled old concrete because that mixture was too fluid.
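To make the mixture design explicit, the following is a minimal sketch of the batch masses it implies; the 1 kg binder basis and the helper function are illustrative assumptions, not quantities from the paper:

```python
# Batch proportioning for a paste with partial binder replacement.
# Assumptions: 1 kg total binder (illustrative); the 30 wt.% replacement
# level and w/b = 0.4 follow the mixture design described in the text.

def paste_batch(binder_mass_kg=1.0, replacement=0.30, w_b=0.40):
    cement_kg = binder_mass_kg * (1.0 - replacement)  # Portland cement
    moc_kg = binder_mass_kg * replacement             # micronized old concrete
    water_kg = binder_mass_kg * w_b                   # mixing water
    return {"cement_kg": cement_kg, "moc_kg": moc_kg, "water_kg": water_kg}

print(paste_batch())
# {'cement_kg': 0.7, 'moc_kg': 0.3, 'water_kg': 0.4}
```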
Experimental methods
The activation of the old unhydrated clinker was measured indirectly through the mechanical properties of the resulting cement paste. A set of parameters describing the mechanical properties was chosen for this purpose, namely the dynamic shear modulus, the dynamic modulus of elasticity, the flexural strength and the compressive strength. The dynamic shear modulus and the dynamic modulus of elasticity were measured by a non-destructive technique, the resonance method.
The advantage of a non-destructive measurement is that the property can be traced over time. Figure 5 shows the flexural strength of the tested materials.
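The paper does not reproduce its resonance relations, but for a free-free prismatic beam the dynamic modulus of elasticity is commonly back-calculated from the fundamental longitudinal resonance frequency (an ASTM C215-style relation, E_d = 4·L²·f_L²·ρ; a torsional analogue with a cross-section shape factor gives the shear modulus). A hedged sketch with purely illustrative input values, not measurements from the paper:

```python
# Back-calculating the dynamic modulus from the fundamental longitudinal
# resonance of a free-free prism: f_L = c / (2L) with c = sqrt(E / rho),
# hence E_d = 4 * L^2 * f_L^2 * rho. Inputs below are illustrative only.

def dynamic_modulus_longitudinal(length_m, f_long_hz, density_kg_m3):
    return 4.0 * length_m**2 * f_long_hz**2 * density_kg_m3  # Pa

L = 0.160        # beam length, m (40 x 40 x 160 mm specimens)
rho = 2000.0     # assumed hardened-paste density, kg/m^3
f_L = 8000.0     # assumed longitudinal resonance frequency, Hz

E_d = dynamic_modulus_longitudinal(L, f_L, rho)
print(f"E_d = {E_d / 1e9:.2f} GPa")  # -> E_d = 13.11 GPa for these inputs
```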
Discussion
All mixtures with recycled material have a higher flexural strength than the reference cement paste. This is because the old concrete is mostly inert and therefore releases less hydration heat, leading to smaller volumetric changes. These aspects result in fewer microcracks and thus an increase in flexural strength. Figure 6 shows the results of the compressive strength of the tested materials. The highest average compressive strength among the recycled mixtures was achieved by the sample with the coarsest micronized old concrete (CEM + MOC A_1), at 74.0 ± 1.9 MPa, which is about 30 MPa less than the reference cement sample (CEM).
The compressive strengths of the samples with recycled materials were approximately 30% lower than that of the reference sample. Since 30 wt.% of recycled material was used, it can be stated that milling achieved only minimal activation of the unhydrated clinker in the old concrete.
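A quick way to see this reasoning: under a purely inert-filler assumption, strength scales roughly with the remaining cement fraction, so the expected drop equals the replacement level. A back-of-envelope sketch of that check (the dilution model is an assumption for illustration, not the paper's method):

```python
# Inert-dilution check: if the replacement contributed no strength at all,
# compressive strength would scale with the remaining cement fraction.
replacement = 0.30            # 30 wt.% of the binder replaced by MOC
expected_drop = replacement   # naive dilution model: ~30 % strength loss
observed_drop = 0.30          # reported: ~30 % below the reference paste

recovered = expected_drop - observed_drop
print(f"strength recovered by clinker activation: {recovered:.0%}")  # ~0 %
```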
Conclusions
This work focuses on the effect of micronized old concrete as a partial substitute for Portland cement. The resulting cement pastes contained 70 wt.% Portland cement and 30 wt.% of different kinds of micronized old concrete. Based on the results, it can be concluded that:
• The results show a gradual increase in the dynamic modulus of elasticity between 7 and 28 days in all samples with micronized old concrete.
• The highest average values of the dynamic modulus of elasticity (23.6 ± 0.5 GPa) and of the dynamic shear modulus (9.5 ± 0.1 GPa) were achieved by the sample with the finest micronized old concrete (CEM + MOC B_2).
• All mixtures with recycled material have a higher flexural strength than the reference cement paste.
• The compressive strengths of the samples with recycled materials are approximately 30% lower than that of the reference sample.
In the future, the research will focus on the direct detection of the amount of unhydrated clinker and on confirming the effect of micronization on exposing it.
Figure 1. Recycling line with individual components and filling points.
Figure 2. Particle size distribution curve of the used materials.
Figure 3. Development of the dynamic modulus of elasticity.
Figures 3 and 4 show the development of the dynamic shear modulus and the dynamic modulus of elasticity. The results show a gradual increase in the dynamic modulus of elasticity between 7 and 28 days in all samples with micronized old concrete; the increase was about 2 GPa. The highest average value of the dynamic modulus of elasticity after 28 days, 23.6 ± 0.5 GPa, was reached by the sample with the finest micronized old concrete (CEM + MOC B_2). This is 0.7 GPa less than the reference Portland cement (CEM) and 0.2 GPa more than the reference paste composed of cement and non-milled old concrete (CEM + OC). A similar trend is seen in the dynamic shear modulus: the highest average value, 9.5 ± 0.1 GPa, again belonged to the sample with the finest micronized old concrete (CEM + MOC B_2). This is the same value as the reference sample with Portland cement (CEM) and 0.3 GPa more than the reference paste of cement and non-milled old concrete (CEM + OC). The reference cement sample (CEM) also shows a faster increase in the dynamic modulus of elasticity between 7 and 28 days, by 3.5 GPa. This is because in old concrete the majority of the unhydrated clinker is composed of belite, which hydrates over a long time.
Table 1. Characterization of the grains of the used materials.
Table 2. Composition of the individual mixtures.
Study on the Coupled Vibration Characteristics of a Two-Stage Bladed Disk Rotor System
This paper conducts a coupled vibration analysis of a two-stage bladed disk rotor system. The bladed disk rotor system is modeled using the finite element method. The substructure modal synthesis super-element method (SMSM) with fixed and free interfaces is presented to obtain the vibration behavior of the rotor system. The free vibration results are then compared with those calculated by the cyclic symmetry analysis method to validate the analysis in this paper. The results show that the modes of the two-stage bladed disk include not only the modes of the first- and second-stage bladed disks but also the coupled modes of the two stages.
Introduction
An aero-engine is a type of high-speed rotating machinery with a complex structure. The rotating blades and the fixed bladed disks are key parts of the aero-engine. In an aero-engine, multi-stage bladed disks are assembled together, and the study of the coupling interaction between the stages is particularly important for understanding the dynamic characteristics of the whole engine. In vibration analyses of a bladed disk, the coupling between the blade and the disk is usually analyzed, while the interstage coupling effect is usually ignored. For a multi-stage bladed disk system, the coupling between stages is an important factor affecting the energy propagation between the disks, and the multi-stage rotor exhibits specific mode and response types that extend across the multi-stage bladed disk structure. Therefore, it is very important to analyze the interstage coupled vibration of the multi-stage bladed disk, which is also the basis for further study of the interstage coupled vibration caused by a mistuned single-stage bladed disk.
In recent years, scholars have carried out theoretical, numerical, and experimental studies on the dynamic characteristics of bladed disks. For actual engineering structures, the finite element method is usually used to model and analyze the complex bladed disk. For multi-stage and multi-component integral bladed disk assemblies, potential research topics have been proposed in [1], for instance, building more effective and applicable models with higher precision. Based on the Timoshenko beam theory and Kirchhoff plate theory, Laxalde et al. [2] proposed a new method that combines the cyclic modelling of each stage with a realistic inter-stage coupling; study cases are presented to evaluate the efficiency of the method. Joannin et al. [3] introduced a novel reduced-order modelling technique well suited to the study of nonlinear vibrations in large finite element models; its performance was appraised on a nonlinear finite element model of a bladed disk in the presence of structural mistuning. HS et al. [4] proposed an improved shaft-disk-blade coupling model to study the influence of disk position and flexibility on the critical speed and natural frequency of the coupled shaft-disk-blade unit. Zhao et al. [5] established a finite element model of a fully flexible shaft-disk-sleeve system with a tip rub fault using the Lagrange multiplier method and proposed an improved disk-blade interface coupling method. Ma et al. [6] established a rotor-blade system dynamics model; with an increase in the number of blades, complex coupling modes appeared, such as the vane-blade coupling mode, the rotor lateral vibration and blade bending coupling mode, and the rotor torsional vibration and blade bending coupling mode. Zhao et al. [7] established a coupled model of spinning shaft-disk assemblies under sliding bearing supports. For the multi-stage assembly of cyclic structures, Wang et al. [8] simulated and analyzed the vibration characteristics of ceramic matrix composite monolayers and found that the blade had a great influence on the vibration mode of the entire bladed disc. On the basis of previous studies, Tang et al. [9] further established a blade-disk-shaft coupling model and explained the influence of tuning/mistuning on the coupled modal characteristics. Al Bedoor B.O. [10] established a reduced-order mathematical model and studied the natural frequencies of coupled shaft-torsion and blade-bending vibration. Huang et al. [11] studied axial-torsional, disk-lateral, and blade-bending coupled vibrations in a shaft-disk-blade unit. Chiu et al. [12] investigated the coupled shaft-torsion and blade-bending vibrations of a multi-disk rotor system. Chiu et al. [13] studied the influence of shaft-torsion, blade-bending, and lacing-wire coupling vibration on the coupled vibration of a multi-disk rotor system with grouped blades. Ma et al. [14] analyzed the effects of blade stagger angles on the blade rubbing-induced responses of a rotational shaft-disk-blade system. Wang et al. [15] analyzed the nonlinear dynamic behavior of a rotor-bearing system with interaction between the blades and rotor. Luo et al. [16] investigated the natural frequency of free transverse vibration of blades in rotating disks, to examine the relationship between natural frequencies, blade stiffness, and nodal diameters and to study how neighboring blades react upon each other and affect each blade's natural frequency. Rzadkowski et al. [17] adopted a forced vibration analysis method and considered the influence of multi-stage coupling on the dynamic characteristics of an eight-bladed-disc rotor on a solid shaft; the results show that multi-stage coupling must be considered in the design of rotor blades and discs to avoid resonance caused by low-frequency flow excitation. Bladh et al. [18] studied the influence of interstage coupling on the dynamic performance of tuned and mistuned multi-stage bladed disk structures and pointed out that the dynamic performance of a single-stage rotor depends on the choice of interstage coupled boundary conditions. Based on sector mistuning, Vargiu et al. [19] established a reduced-order model for the dynamic analysis of mistuned bladed disks; sector frequency mistuning is preferable for capturing blade-to-disk irregularities. Petrov et al. [20] proposed an efficient method for the analysis of nonlinear vibrations of mistuned bladed disk assemblies; for a practical high-pressure turbine bladed disk, the nonlinear forced response was analyzed for simplified and realistic models of mistuned bladed disks, considering several types of nonlinear forced response. Rzadkowski et al. [21] studied the forced vibration of eight mistuned bladed discs on a solid shaft and found that, when the bladed disc was on the shaft, mistuning had little influence on the blade stress. Chaofeng Li et al. [22] studied the coupled vibration characteristics of a flexible shaft-disk-blade system with mistuning; due to the mistuning, the natural frequencies and coupling mode types change accordingly. Huang et al. [23] used a disk comprising periodically shrouded blades to simulate a weakly coupled periodic structure; the effects of the Coriolis force and the magnitude of disorder on the localization phenomenon of a rotating blade-disk system were investigated numerically. Zhao et al. [24-34] studied the vibration characteristics of a graphene nanoplatelet (GPL)-reinforced blade-disk rotor system by experimental and finite element (FE) methods and studied a parallel intelligent algorithm based on a compute unified device architecture; a genetic particle swarm optimization algorithm was used to optimize the arrangement of mistuned blades. The above works are mainly based on the finite element method and study the coupled modes and responses of shaft-disk-blade systems with tuned and mistuned bladed disks. The coupled modes of multi-stage bladed disk systems have not been studied, and the coupled models are greatly simplified compared with the actual structure.
In this paper, two accurate finite element models of a two-stage bladed disk were established using the substructure modal synthesis super-element method and the cyclic symmetry analysis method, respectively. The accuracy of the substructure modal synthesis super-element method was verified against the cyclic symmetry analysis method, and the interstage coupled vibration of the two-stage bladed disk was analyzed. This research fills a gap regarding the interstage coupled vibration of complex bladed disks and lays a foundation for further research on the effect of a mistuned bladed disk on interstage coupled vibration.
Materials and Methods
Using the fixed-interface prestressed, free-interface substructure modal synthesis super-element method, based on the finite element analysis software ANSYS, the dynamic frequency analysis of the first-stage bladed disk system of the compressor was carried out. The finite element model of the basic sector is shown in Figure 1. The material parameters of the blade tenon and tenon grooves are, respectively: elastic modulus E₀ = 1.135 × 10¹¹ Pa, Poisson's ratio µ₀ = 0.3, and density ρ₀ = 4380 kg/m³. The material parameters of the disk are: elastic modulus E₁ = 1.15 × 10¹¹ Pa, Poisson's ratio µ₁ = 0.3, and density ρ₁ = 4640 kg/m³. The contact between the blade tenon and the tenon grooves adopts standard contact.
The analysis process of the substructure modal synthesis super-element method is shown in Figure 2. For the modal synthesis super-element method with fixed-interface prestressed and free-interface substructures, the basic idea is that the finite element model of the basic sector of the bladed disk is established using the substructure analysis method from the bottom up. The degrees of freedom on the two side boundaries of the basic sector (master degrees of freedom) are fixed, and the working speed is applied to perform the prestressed contact analysis (blade binding, blade contact) for each basic sector of the bladed disk. With the prestress setting enabled, the fixed constraints on the two side boundary degrees of freedom (master degrees of freedom) of the basic sector are released, and the modal synthesis generation-part analysis of the free-interface substructure is carried out. A super-element is generated, and super-element nesting technology is used to generate multilevel super-elements, completing the generation part. Secondly, the super-elements are connected to analyze the overall bladed disk system (modal and dynamic response), completing the use part. Finally, the condensed solution of the dynamic response at the super-element master degrees of freedom is expanded to all degrees of freedom in the super-element, so as to obtain the complete dynamic response solution for all degrees of freedom in the bladed disk system, completing the extension part.
Super-Element Power Reduction

The motion equation of the super-element with an interface force is

$$\begin{bmatrix} m_{ii} & m_{ij} \\ m_{ji} & m_{jj} \end{bmatrix} \begin{Bmatrix} \ddot{x}_i \\ \ddot{x}_j \end{Bmatrix} + \begin{bmatrix} k_{ii} & k_{ij} \\ k_{ji} & k_{jj} \end{bmatrix} \begin{Bmatrix} x_i \\ x_j \end{Bmatrix} = \begin{Bmatrix} f_i \\ 0 \end{Bmatrix},$$

where $x_i$ is the displacement of the interface nodes, i.e., the coordinates of the master degrees of freedom; $x_j$ is the displacement of the internal nodes, i.e., the coordinates of the deputy (internal) degrees of freedom; and $f_i$ is the interface force.
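To make the partitioned reduction concrete, here is a minimal NumPy sketch of the exact dynamic condensation of the internal degrees of freedom that this equation implies. The matrices are generic, not the paper's ANSYS model, and the function name and index lists are assumptions:

```python
import numpy as np

def reduced_dynamic_stiffness(K, M, i_dofs, j_dofs, omega):
    """Exact dynamic condensation of internal (j) DOFs onto interface (i) DOFs.

    Returns D(omega) such that D(omega) @ x_i = f_i, with no approximation.
    """
    Kii, Kij = K[np.ix_(i_dofs, i_dofs)], K[np.ix_(i_dofs, j_dofs)]
    Kji, Kjj = K[np.ix_(j_dofs, i_dofs)], K[np.ix_(j_dofs, j_dofs)]
    Mii, Mij = M[np.ix_(i_dofs, i_dofs)], M[np.ix_(i_dofs, j_dofs)]
    Mji, Mjj = M[np.ix_(j_dofs, i_dofs)], M[np.ix_(j_dofs, j_dofs)]
    w2 = omega ** 2
    # Second block row gives x_j = -(Kjj - w2*Mjj)^{-1} (Kji - w2*Mji) x_i;
    # substituting into the first block row condenses the system onto x_i.
    T = np.linalg.solve(Kjj - w2 * Mjj, Kji - w2 * Mji)
    return (Kii - w2 * Mii) - (Kij - w2 * Mij) @ T
```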
With the interface degrees of freedom constrained ($x_i = 0$), the second block row of the motion equation reduces to the interior eigenvalue problem

$$\left(k_{jj} - p^2 m_{jj}\right)\varphi = 0,$$

from which the fixed-interface main modes $[\Phi]$ are obtained and mass-normalized, so that

$$\Phi^{T} m_{jj} \Phi = I, \qquad \Phi^{T} k_{jj} \Phi = \Lambda_j = \operatorname{diag}\left(p_1^2, \cdots, p_k^2, \cdots, p_m^2\right),$$

where $p_k\ (k = 1, 2, \cdots, m)$ is a natural frequency of the super-element under the fixed-interface condition and $m$ is the number of degrees of freedom inside the super-element. Expanding the internal response in these modes allows the matrix $\left(k_{jj} - \omega^2 m_{jj}\right)^{-1}$ to be evaluated and substituted back into the condensation relations. The above derivation uses exact dynamic reduction: compared with static reduction, dynamic reduction introduces an inertia correction term of the form $A \Lambda A^{T}$ on top of the static reduction terms $[k_0]$ and $[m_0]$; the modified inertia term $[M(\omega)]$ differs from its static-reduction value, while the elastic term is unchanged.

It should be noted that the above derivation introduces no approximation. In practical applications, the higher-order fixed-interface main modes are generally omitted and only some of the lower-order modes are retained, which greatly reduces the scale of the analysis and computation.
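A hedged sketch of that fixed-interface mode computation and truncation, using SciPy's generalized symmetric eigensolver on the interior partition (the matrices and the number of retained modes are illustrative choices):

```python
import numpy as np
from scipy.linalg import eigh

def fixed_interface_modes(Kjj, Mjj, n_keep):
    """Fixed-interface main modes (interface held, x_i = 0), truncated.

    Solves Kjj @ phi = p^2 * Mjj @ phi; eigh mass-normalizes the vectors
    (Phi.T @ Mjj @ Phi = I), so Lambda_j = diag(p_1^2, ..., p_m^2) follows.
    """
    p2, Phi = eigh(Kjj, Mjj)        # generalized symmetric eigenproblem
    Phi = Phi[:, :n_keep]           # keep only the lowest-order modes
    Lambda_j = np.diag(p2[:n_keep])
    return Phi, Lambda_j
```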
Substructure Modal Synthesis
The reduced super-elements are assembled into the motion equation of the whole system using the conditions of interface displacement compatibility and interface force balance.

The difference between this equation and the equations of motion obtained by other substructure synthesis techniques is that the mass matrix is a function of frequency, so the equation corresponds to a nonlinear eigenvalue problem. This kind of eigenvalue problem can be solved by the bisection (dichotomy) method or by other root-finding methods.
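Because the mass matrix depends on frequency, the natural frequencies are roots of the characteristic equation det(K − ω²M(ω)) = 0 rather than eigenvalues of a fixed matrix pencil. A hedged sketch of the bisection approach the text mentions, with generic matrices; the characteristic function and the overflow clipping are illustrative choices:

```python
import numpy as np

def char_fun(omega, K, M_of_omega):
    """Sign-carrying characteristic value of K - omega^2 * M(omega)."""
    sign, logdet = np.linalg.slogdet(K - omega**2 * M_of_omega(omega))
    return sign * np.exp(min(logdet, 50.0))  # clip to avoid overflow

def bisect_frequency(K, M_of_omega, w_lo, w_hi, tol=1e-6):
    """Bisection root search, assuming one sign change in [w_lo, w_hi]."""
    f_lo = char_fun(w_lo, K, M_of_omega)
    while w_hi - w_lo > tol:
        w_mid = 0.5 * (w_lo + w_hi)
        f_mid = char_fun(w_mid, K, M_of_omega)
        if f_lo * f_mid <= 0.0:
            w_hi = w_mid              # sign change lies in the lower half
        else:
            w_lo, f_lo = w_mid, f_mid
    return 0.5 * (w_lo + w_hi)
```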
Dynamic Frequency Calculation and Precision Check
Firstly, the analysis accuracy of the fixed-interface prestressed, free-interface substructure modal synthesis super-element method was verified. The dynamic frequency of the standard-contact bladed disk system at the working speed was analyzed using the cyclic symmetry analysis method and the substructure modal synthesis super-element method, respectively. Table 1 gives the dimensionless dynamic frequencies of the tuned standard-contact bladed disk system calculated by the two methods at the working speed, together with the relative errors. Figure 3 shows the dynamic frequency curves of the tuned standard-contact bladed disk system calculated by the two methods at the working speed. Compared with the cyclic symmetry analysis method, the maximum relative error of the dimensionless dynamic frequency of the substructure modal synthesis super-element method is 5.68%. Since the number of modes retained for the substructures is the same and the same finite element mesh is used, the errors of the two methods at each frequency are relatively consistent.
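The error check itself is simple arithmetic; a sketch with made-up frequency values (not those of Table 1) shows the comparison:

```python
import numpy as np

# Illustrative (made-up) dimensionless frequencies from the two methods.
f_cyclic = np.array([1.000, 1.842, 2.516, 3.105])  # cyclic symmetry method
f_smsm = np.array([1.012, 1.869, 2.480, 3.221])    # super-element method

rel_err_pct = np.abs(f_smsm - f_cyclic) / f_cyclic * 100.0
print(f"max relative error: {rel_err_pct.max():.2f} %")
```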
Static Frequency Analysis of Blades

Since the natural vibration characteristics of the blades have a direct impact on the coupled vibration of the bladed disk system, the blades of the first and second stages were taken as the research objects, and a static frequency analysis of the two stages was carried out to obtain their inherent vibration characteristics. The three-dimensional solid models of the first- and second-stage bladed disk systems and the finite element models of the two stages of blades after meshing are shown in Figures 4 and 5.

The first 10 order natural frequencies and mode shapes of the first- and second-stage blades were obtained by modal analysis after the tenon positions of both stages of blades were fully constrained. The first 10 natural frequencies are shown in Table 2, the first four natural frequencies and mode shapes are shown in Table 3, and the corresponding mode shapes are shown in Figures 6 and 7.

Through the analysis of the modes and vibration shapes of the first- and second-stage blades, it can be seen that the low-order mode shapes of the blades are bending and torsional vibrations, and the frequencies of the corresponding mode shapes of the second-stage blades are slightly higher than those of the first-stage blades.
Modal Analysis of the First-Stage Bladed Disk System
The first-stage bladed disk system model was taken as the object of analysis, and its modal analysis was carried out. Figure 8 shows the three-dimensional solid model of the first-stage bladed disk system, and Figure 9 shows its finite element model. Through modal analysis, the first 150 order natural frequencies and mode shapes of the first-stage bladed disk system were calculated. The specific natural frequencies of each order are shown in Tables 4 and 5, the mode shapes of the first 150 orders are summarized in Table 5, and the mode diagrams of typical orders are shown in Figure 10.

Through the analysis, it can be seen that the low-order mode shapes of the first-stage bladed disk system are the first-order bending vibration of the blades according to the pitch diameter. With an increase in the mode order, the vibration of the disk is excited, and the coupled vibration of the blade and the disk appears. As the modal order continues to increase, the blade begins to transform from bending vibration to twisting vibration. In addition, it can be found that, due to the coupled action of the blade and the disk, the first-order bending frequency of the blade is increased.
Modal Analysis of the Second-Stage Bladed Disk System
The second-stage bladed disk system model was taken as the analysis object, and its modal analysis was carried out. Figure 11 shows the three-dimensional solid model of the second-stage bladed disk system, and Figure 12 shows its finite element model. Through modal analysis, the first 150 order natural frequencies and mode shapes of the second-stage bladed disk system were calculated. The specific natural frequencies of each order are shown in Tables 6 and 7, the mode shapes of the first 150 orders are summarized in Table 7, and Figure 13 shows the mode diagrams of typical orders of the second-stage bladed disk system.

According to the analysis, the vibration behavior of the second-stage bladed disk system is similar to that of the first-stage system. With an increase in the modal order, the first-order bending vibration of the blades appears first, then the vibration of the disk is excited, resulting in the coupled vibration of the blade and disk. As the modal order continues to increase, the blade begins to transform from bending vibration to twisting vibration. At the same time, due to the coupling of the blade and the disk, the first-order bending frequency of the blade is increased.
Modal Analysis of the Two-Stage Bladed Disk Coupled System
For the interstage coupled vibration analysis of the multi-stage bladed disk system, the two-stage bladed disk system composed of the first- and second-stage bladed disk systems was selected first, and the overall model of the two-stage system was taken as the analysis object. Figure 14 shows the two-stage bladed disk system model: (a) the overall three-dimensional solid model and (b) the overall finite element model with meshing and boundary conditions considered. Through modal analysis, the first 195 order natural frequencies and mode shapes of the two-stage bladed disk system were calculated, as shown in Tables 8 and 9; the mode shapes of the first 160 orders are summarized in Table 9.

The mode diagrams of typical orders of the two-stage bladed disk coupled system are as follows: (1) as shown in Figures 15-19, coupled mode shapes in which the first-stage bladed disk vibration is dominant, for example the four-pitch-diameter coupled vibration of the blade and disk in Figure 19; and (2) as shown in Figures 20-24, coupled mode shapes in which the second-stage bladed disk vibration is dominant.

According to the modal and mode shape analysis of the two-stage bladed disk coupled system, in the low-order modes the first-order bending vibrations of the first- and second-stage blades appear first. With an increase in the modal order, the coupled vibration of the blade and the disk, the torsional vibration of the blade, and the coupled vibration between the two stages of the bladed disk appear. In addition, it can be found that the two stages of the bladed disk can vibrate at the same frequency with the same pitch diameter. However, in most modes of the two-stage coupled bladed disk, the interstage coupling cannot be seen directly in the mode pattern, because the vibration of one stage is dominant while the vibration of the other stage is relatively small.

For the orders with obvious interstage coupled vibration of the two-stage bladed disk system, the maximum vibration displacements of the first-stage and second-stage bladed disk systems are shown in Table 10, and Figures 30-35 show the interstage coupled modes. Table 11 compares the natural frequencies and mode shapes of the two-stage bladed disk system with those of the first- and second-stage bladed disk systems.

According to the comparison in Table 11, it can be seen that the vibration of the bladed disk system has a certain order: the blade vibration appears first, followed by the coupled vibration of the blade and disk. At the intersection of the frequencies of the two stages, the interstage coupled vibration of the bladed disk occurs. The natural frequencies and mode shapes of the two-stage coupled system include not only those of the first- and second-stage bladed disks, respectively, but also the coupled modes of the two stages. Therefore, the two-stage bladed disk system model should be chosen to calculate the coupled vibration modes of the two-stage system, while for modes dominated by a single-stage disk, a single-stage bladed disk calculation model can be selected.
Conclusions
The whole coupled vibration behavior of a two-stage bladed disk system was analyzed. Firstly, modal analyses of the first- and second-stage bladed disks were carried out, and the modes and mode shapes of each stage were solved, respectively. Then, the modal analysis of the two-stage bladed disk coupled system was carried out, and the coupled vibration forms of the two-stage bladed disk were analyzed. The following conclusions were drawn.
(1) For the single-stage bladed disk system, the low-order mode shape is the first-order bending vibration of the blade according to the pitch diameter. With an increase in the mode order, the vibration of the disk is excited, resulting in the coupled vibration of the blade and the disk. As the modal order continues to increase, the blade begins to transform from a bending vibration to twisting vibration. At the same time, due to the coupling of the blade and the disk, the first-order bending frequency of the blade is increased.
(2) For the two-stage bladed disk coupled system, the vibration of the bladed disk has a certain order: the blade vibration appears first, followed by the coupled vibration of the bladed disk. At the intersecting positions of the two stages' frequencies, the interstage coupled vibration of the bladed disk will occur, and the number of pitch diameters or pitch circles of the vibration is the same. The natural frequencies and mode shapes of the two-stage bladed disk coupled system include not only the natural frequencies and mode shapes of the first- and second-stage bladed disks, respectively, but also the coupled modes of the two-stage bladed disk, so the multi-stage bladed disk model is the more accurate one to analyze.
Conflicts of Interest:
The authors declare no conflict of interest.
Traditional practices during pregnancy and childbirth among mothers in Shey Bench District, South West Ethiopia
Objective: Pregnancy and childbirth are the most critical periods for the health of women and children. The objective of this study was to explore traditional practices among mothers during pregnancy and delivery in Shey Bench District, South West Ethiopia; we hope the evidence generated could benefit decision-makers and other bodies concerned with this important public health issue. Methods: A descriptive qualitative study, an ideal approach when an uncomplicated description focusing on the what, where, when, and why of an event or experience is desired, was conducted from March to May 2019 in Shey Bench District; 43 purposively selected women participated in the study. In-depth interviews and key informant interviews were conducted, and data were analyzed with Open Code 4.2 software and summarized following a content analysis approach. Findings were narrated under the major categories, and study participants' words were used as quotes. Results: Mothers were found to practice traditions mainly involving abdominal massage, the use of herbs, the prohibition of certain food types, and strenuous physical exercise during pregnancy and childbirth. As reasons, mothers reported that these traditional practices help them make labor easy and fast, alleviate discomfort, and avoid an unwanted large fetus. Health problems following traditional practices, such as vaginal bleeding and child death, were also reported. However, some study participants indicated that community members are changing their minds because of advice from health professionals. Conclusion: Although traditional practices were exercised by mothers in the belief that they are beneficial, there were reports of adverse health effects on mothers and the fetus from abdominal massage, herbal medicine, food prohibition, and strenuous physical exercise during pregnancy and childbirth. Therefore, the bodies concerned should design and implement the necessary interventions, particularly health education programs, to bring about change against harmful traditional practices.
Background
Traditional cultural practices reflect values and beliefs held by members of a community for periods often spanning generations, are followed by different population groups, and may be beneficial, harmful, or neutral. There are reports that women hold certain beliefs relating to diet, behavior, the use of medicinal herbs, and massaging of the abdomen during pregnancy and childbirth. 1 It is advisable to promote beneficial practices and discourage those which have a negative impact on mothers and the fetus. 2 Practices related to the perinatal period continue to be sustained by societies, even though they show intercultural differences and change over time within the same cultures. 3 There are women who prefer and receive regular abdominal massage concurrently with antenatal care, 4,5 and there are also cultures in which women rub their abdomens against a wooden post to facilitate delivery of the fetus. 6 A study from Ethiopia has also shown that pregnant women practice abdominal massage to get relief from pregnancy-related complications, and most pregnant women receive some kind of admiration and support from traditional birth attendants, families, and neighbors for carrying out such practices to solve their pregnancy-related problems. 7 Food taboo is a traditional practice in which nutritious and safe food types may be restricted or denied, thus making women vulnerable. 8 Studies have revealed that mothers avoid eating beans, eggs, fish, meat products, potatoes, fruits, butternut, and pumpkin, which are rich in essential micronutrients, protein, and carbohydrates. 9-11 Studies from Ethiopia have also shown that mothers observe food taboos during pregnancy. The food items avoided by pregnant mothers include linseed, coffee, tea, cabbage, porridge, wheat bread, banana, pimento, pepper, groundnut, salty food, nug, sugarcane, pumpkin, and coca drinks. The reasons behind the food taboos were fear of difficult delivery, fear of prolonged and painful labor, fear of abortion and miscarriage, a large fetus, and a feeling of indigestion. In addition, other respondents argued that foods such as milk and milk products are taboo because they stick to the head and face of the fetus. 7,12 Herbal medicine is another tradition commonly used during pregnancy. 13,14 Nausea, abdominal cramps, and the common cold were some of the common indications for using herbal medicines during pregnancy; ginger, cranberry, valerian, and raspberry were the herbs most commonly used. 15 A study from Ethiopia also reported the use of herbal medicine during pregnancy; the most commonly consumed herbal medicines included ginger, damakasse, garlic, tenaadam, and eucalyptus, but it is not known whether these commonly consumed plant species have harmful fetal effects. 16 There are areas where mothers may be advised to do strenuous physical exercise and work and to lift heavy loads during pregnancy. However, studies have shown that lifting in combination with job strain could increase the risk of poor pregnancy outcomes such as preeclampsia, gestational hypertension, gestational diabetes, birth complications, and low birth weight. 17-19 Despite some perceived potential benefits, cultural practices during pregnancy may have harmful effects on both the mother's and the baby's health, which demands a detailed understanding of the underlying reasons in order to recommend the right measures for encouraging the beneficial practices and discouraging the harmful ones. 1,3

There are a few studies from Ethiopia that have addressed such issues at different times, but most focused on a single issue and are more figure-based reports than detailed explorations of the traditional practices. However, there are no studies from the southern parts of the country, particularly the South Western region, where many culturally diverse people, including pastoralist communities, live. Hence, it is very important to reach these community members, and generating evidence through an appropriate approach, especially a qualitative study, is in demand. Therefore, this study was conducted to explore the experiences of mothers and the reasons why they exercise different traditional practices during pregnancy and childbirth.
Study area and period
The study was conducted from March to May 2019 in Shey Bench District, which is part of Bench Sheko Zone. The capital of the zone is Mizan Aman town, located around 585 km from Addis Ababa. Bench Sheko Zone is one of the zones in south western Ethiopia with culturally rich, multi-linguistic, and diverse people; hence, studying traditional practices in such a culturally rich area was considered reasonable. According to information obtained from the zonal health bureau, the zone has two town administrations and six rural districts with a total of 625,345 residents, 1 teaching hospital, 26 health centers with 904 health professionals, and 224 health posts with 567 health professionals. Shey Bench District is a semi-urban woreda with a total population of 145,569 (72,057 males and 73,512 females). This study was conducted in two kebeles: Maz Kebele, with a total population of 7811, and Kusha Kebele, with a total population of 5523.
Study design
A descriptive qualitative study was conducted; this is an ideal approach when an uncomplicated description focusing on the details of the what, where, when, and why of an event or experience is desired. Scholars have reported that the goal of a qualitative descriptive study is a comprehensive summarization, in everyday terms, of specific events experienced by individuals or groups of individuals. It is a very useful approach when researchers want to know, regarding events, who was involved, what was involved, and where things took place. 20-22 Its flexibility, allowing research questions and findings to emerge, also makes it attractive. Hence, this approach was found appropriate for understanding the traditional practices of mothers during pregnancy and childbirth in the selected multicultural area.
Sample size and sampling procedure
Applying a purposive sampling method, we selected mothers who had experience of childbirth and of using different traditional practices during pregnancy. At the beginning, we contacted health extension workers in the study area and discussed how to identify mothers who could give rich information on traditional practices during pregnancy. Following that, we conducted in-depth interviews with 34 mothers and key informant interviews with 9 women who had experience of performing traditional practices for pregnant mothers in the area. These nine women were selected purposively to learn from their experience of what cultural practices they apply, how, why, and for whom, and what consequences they had faced or witnessed. Throughout the data collection process, we considered saturation of the information in deciding on the sample size. Accordingly, we learned that the categories were well developed and no new ideas were reported after the given sample size was reached.
Source population
The source population was all reproductive-age women who had been pregnant and given birth within the previous 2 years in the study area, as well as women who had experience of performing traditional practices or acting as traditional birth attendants, as witnessed by local leaders, health extension workers, and mothers.
Study populations
Women who resided in the selected area and had given birth at least once within the 2 years prior to data collection, and women who had experience of performing traditional practices during pregnancy and childbirth and had lived in the area for more than 1 year, constituted the study population.
Inclusion and exclusion criteria
Inclusion criteria
• Women who resided in the selected area and had given birth at least once within the 2 years prior to the data collection period.
• Women who had experience of performing traditional practices during pregnancy and childbirth (as witnessed by local leaders, health extension workers, and mothers) and had lived in the area for more than 1 year.

Exclusion criteria
• Women who were mentally incapable and those who were unable to communicate were not part of the study.
Definition of traditional practices
Harmful traditional practice: any practice performed deliberately by an untrained person on the body or the psyche of a woman, for no therapeutic purpose but rather for cultural motives, and which has harmful consequences for the health and rights of the victim.

Abdominal massage: rubbing butter/oil on, and massaging, the abdominal area of a pregnant woman by non-professionals.

Food prohibition: traditionally forbidden foods, enforced against women during pregnancy and delivery.

Herbal medicine: any plants and related preparations taken during pregnancy and delivery that are assumed to have a medicinal effect.

Strenuous physical activities: any physical activities or movements (not science-based), and/or carrying or lifting heavy objects, such as fetching water and wood, in the belief that such activities benefit pregnant mothers.
Data collection procedures
An interview guide, translated into the Amharic language, was used to collect data from the study participants. The interview guide was developed based on the study objective and the findings of previous studies. 12
Ensuring trustworthiness
Throughout the interviews, data collectors approached participants in a friendly manner, developed rapport, and informed them about, and discussed in detail, the research goal and process. Interviews were audio-recorded and kept for cross-checking as needed. Debriefing and feedback from colleagues were used in managing the data. The inquiry process and findings are described in detail so that any interested party could benefit from them. Documents from the entire study process were kept, and colleagues were allowed to check the neutrality and dependability of the data. A code-recode process was carried out at separate times and checked for similarity of codes for intra- and inter-coder dependability. Confirmability was maintained by an audit trail, and participants' words were quoted in writing the findings. Throughout the study process, the researchers' experiences and thoughts regarding the study topic were clearly reflected upon.
Data analysis
Data analysis was done simultaneously with data collection. The information stored on the audio recorder was transcribed verbatim with due consideration of the field notes. Two investigators developed a codebook and discussed the coding process. After repeatedly reading the data and becoming immersed in it, coding and categorization were done using Open Code 4.2 software. Bringing the most related issues together, content analysis was applied to summarize the findings.
Study results
The findings of the study are summarized under the major categories of socio-demographic characteristics, experience of traditional practices, and types of and reasons for harmful traditional practices (abdominal massage, prohibition of food, taking different herbs, and doing strenuous physical exercise).
Socio-demographic characteristics
Thirty-four mothers participated in the in-depth interviews, and nine women were interviewed as key informants. The ages of participants ranged from 23 to 52 (the elders were key informant participants), and the majority (38) were married housewife farmers who had not attended formal education. The key informants had experience of supporting mothers during delivery, acting as traditional birth attendants.
Experience of traditional practices
Abdominal massage, prohibition of food, taking different herbs, and doing strenuous exercise were the major traditional practices reported by study participants. Almost half of the participants said that they used to practice these for themselves, and performed them for others too, during pregnancy and childbirth. However, some reported that they used to observe the practices but did not engage in them. A few of the study participants explained that they still engage in some of the traditional practices. Some participants also reported that these practices were more common formerly than in recent times because of the various advice received from health professionals.
Types and reasons for harmful traditional practices
Study participants discussed the reasons why traditional practices are exercised during pregnancy and childbirth. Accordingly, many reasons were pointed out for each of the commonly practiced traditions, discussed below under four main categories.
Category 1: abdominal massage
One of the main traditions reported was abdominal massage, and it was indicated that mothers apply it aiming to get relief from pain, to correct the fetal position, to ease and hasten labor, and to get comfort, particularly when they have sustained a fall and when they assume that the fetus has been displaced. It was reflected that abdominal massage is usually done if a mother has sustained a fall, as mothers believe massaging the abdomen helps correct the position of the fetus. Abdominal massage is also conducted to facilitate labor, and it is done by someone assumed to have experience, or even by someone without it. The process involves cleaning the abdomen, applying butter or oil, and then rubbing or massaging it with the palm for a short period until the fetus is assumed to have returned to the right position, labor hastens, or the pain is relieved. The following points are parts of the responses from the study participants regarding this practice: Abdominal massage is usually done with the intention of correcting the position of the fetus in the womb. In addition, it is believed that, if a woman has had abdominal massage, her labour could be hastened and be easy.
(A 50-year-old Key informant woman)
What I know is: if a woman sustains a fall, it will be assumed that the fetus may have left its right position or displaced to one side, and that it will be returned to its right place by massaging the abdomen. So people do that. I know a woman who sustained vaginal bleeding after she had had abdominal massage at around three months of pregnancy, but later she went to a clinic and got treatment, and the fetus was aborted at the clinic. (A 30-year-old in-depth interview participant woman) I had experience of abdominal massage during my pregnancy; it was done after I had sustained a fall. At that time people told me the fetus had displaced from its right side, and they told me massaging the abdomen would return it to its right place. Then I went to an elder woman who has experience of this, and she massaged my abdomen by applying butter; after that I felt well. (A 27-year-old in-depth interview participant woman) However, mothers also discussed that they are avoiding this practice nowadays because of the support and advice they receive from health extension workers and other health professionals. One of the study participants explained it as follows: Formerly I used to perform abdominal massage, and I used to assist labour at home, but now I don't do it. Now I advise women to visit health centers or a hospital for their health. (A 50-year-old key informant woman) One unique finding mentioned as a reason for abdominal massage was related to the intention of treating a health problem the study participants called heart displacement (locally called Bu'i): Abdominal massage is also done when people assume that there is bu'i (an assumption that the heart displaces to the lower body or abdomen), and the massage is done exhaustively with the assumption of returning the heart to its right place. (A 34-year-old in-depth interview participant woman) Most study participants agreed that such a tradition has no benefit, wished it to stop, and advised accordingly. Of course, there were also a few participants who support the practice and still recommend it. The following two responses are reflections regarding this concern: Massaging the abdomen has no benefit; I have seen it, and I got relief from the clinic. (A 34-year-old in-depth interview participant woman) If abdominal massage is done by those who are well experienced, it is good. And I wish it to continue because I saw that it helped me. (A 30-year-old in-depth interview participant woman)
Category 2: strenuous physical activities
Regarding strenuous physical activities and heavy work during pregnancy, participants explained the belief that if a mother exercises, she might give birth easily and within a short period of time. As a result, mothers used to be encouraged to walk long distances, lift heavy loads, fetch water, and collect wood outdoors. They also pointed out that heavy work is not advised during the first period of pregnancy, because it may lead to vaginal bleeding and pain; but when a woman approaches labor, doing heavy work is believed to be good, in the belief that it hastens labor and makes it easy. The following were parts of the responses from different study participants: I know that pregnant women are advised to do heavy work and to go long distances on foot. (A 42-year-old Key informant woman) When the time of delivery approaches, a pregnant mother does heavy work. It is advised because people believe that it hastens labour. (A 30-year-old in-depth interview participant woman) Doing heavy work is good for pregnant mothers because it hastens labour. Again, in our community it is also said that during a first pregnancy a woman should not go to the clinic early because her labour may take a long time, and she shouldn't stay long outside her home. (A 32-year-old in-depth interview participant woman) Very few study participants reported having witnessed women who experienced health problems such as child death/stillbirth, or even mothers giving birth alone in the field, because of engaging in heavy work, and they advised avoiding it. Summarizing their ideas, almost all study participants pointed out that such practices might have a health impact rather than the assumed benefit and advised going to a health facility to make labor easy and to secure the health of mothers and newborns: It was a long time ago, but I know one woman who delivered in the field while she was collecting wood, and it was difficult for her. (A 38-year-old in-depth interview participant woman)
Category 3: food prohibition
Different food types were mentioned that are recommended to be taken and/or avoided during pregnancy. Most of the participants explained that pregnant mothers require additional food and should be allowed to take all food items, especially meat products. However, a few discussed that there are food types believed to cause problems for mothers and the fetus, and that a pregnant mother should therefore not take them. Two of the study participants shared the following points: We advise that pregnant women should have a good diet; otherwise labour could be prolonged and mothers could be fatigued during labour. (A 42-year-old Key informant woman) I heard that a pregnant mother should not eat pepper because it leads to loss of the baby's hair. (A 28-year-old in-depth interview participant woman) It was reported that linseed (telba) is good for hastening labor, and for that reason mothers usually drink it when their time of labor approaches and labor starts. However, sugarcane and godere (a local food) are not advised for pregnant mothers on the assumption that the baby might get fat and could not be delivered on time. Yogurt was also mentioned as not good because it may become attached to the fetus and be difficult to separate from the baby's body. People also advise pregnant mothers to take linseed in liquid form, believing that it makes the baby move well in the abdomen and also helps to hasten labor: During the last period of my pregnancy, people advised me to take linseed in liquid form because they suggested it hastens labour and makes it easy. I took it when I reached the ninth month, and my labour started. (A 35-year-old in-depth interview participant woman) What I know is that sugarcane, egg, pumpkin, and godere (a local food like potato) are not good because the baby may get big, and godere may also become attached to the baby. (A 25-year-old in-depth interview participant woman)
Category 4: use of herbal medicine
Herbal medicine, or taking plants for medical purposes, was also one of the traditions reported by a few of the study participants. Roots and leaves of plants were mentioned as being used at different times during pregnancy and childbirth with the intention of treating some illnesses and preserving the health of mothers and the fetus. The herbal medicines are prepared at home, or some may be collected from persons thought to have experience with them. A few of the study participants discussed their experience of taking roots and leaves of plants but reported no history of complications related to consuming them. Based on their experience, they reported that such practices are being abandoned, since most mothers have developed the culture of visiting healthcare facilities and discussing with nearby health professionals, particularly health extension workers. Study participants shared the following points regarding the issue: When a woman reaches her labouring time, roots of plants are given to her because we believe that it makes labour easy. (A 25-year-old in-depth interview participant woman) People advise pregnant mothers to take a liquid made from different leaves because it is believed that it makes the baby move well in the abdomen and also helps to hasten the labour. (A 35-year-old in-depth interview participant woman)
Discussion
Through this study, the most common traditional practices during pregnancy, and the reasons behind practicing them, were explored among mothers. It was found that abdominal massage, food prohibition, taking herbs, and strenuous physical exercise during pregnancy and delivery were commonly reported practices. As to the reasons, mothers explained that such traditional practices help them make labor easy and fast, alleviate discomforts, and avoid an unwanted large fetus.
Study reports show that abdominal massage performed by non-professionals is dangerous for the mother and fetus. It has been reported that it may result in placental abruption, uterine rupture, fetal morbidity, fetal mortality, maternal morbidity, and maternal mortality. 4,5 In this study, abdominal massage during pregnancy was reported to be one of the traditional practices that used to be commonly applied but has become limited to a few mothers in recent times. Comparable results were reported from Ethiopian studies done in Limu Genet, Debark, and Debretabor, where some mothers reported a history of abdominal massage. 23,25,26 This might be due to closeness in socio-demographic characteristics and social norms. However, the finding of this study was somewhat different from that of a Nigerian study, in which only very few mothers described having experienced abdominal massage; 27 this might be explained by differences in the study populations' exposure to healthcare services and to information regarding such practices. As various scholars suggest that abdominal massage could result in bad outcomes for mothers and the fetus, the findings of this study imply that mothers might suffer from this practice, and it is an issue demanding action.
Use of herbal medicine, unspecified types of roots and leaves of plants, was reported by a few study participants. Studies report that herbal medicines are not necessarily safe alternatives to conventional medicines during pregnancy, because their constituents are likely to have pharmacological activity and might be toxic. 4,28 Reports from Ghana and Asian countries have also indicated that pregnant mothers have a culture of using herbal medicine. 29,30 Compared with a study conducted in Gonder, where half of the study participants reported utilization of herbal medicine, 24 our finding showed limited experience of using herbal medicine during pregnancy and childbirth. The difference could result from variation in cultural dependency and health-seeking behavior. Study participants in this study reported that they visit health centers/posts and that the use of herbal medicine is nowadays decreasing.
Restriction of some food items, such as sugarcane, yogurt, egg, pepper, pumpkin, and godere (a local food like potato), was mentioned by some study participants. Studies from Limu Genet of Oromia region, Amhara region, and Afar region of Ethiopia have also reported that one fifth of women had experience of food taboos. 23,25,31 The similarity could be explained by the comparability of the study setting and study population background in terms of healthcare-seeking behavior, access to health information, and compliance with recommended health messages. However, studies from Debretabor town of Amhara region and Shashemene of Oromia region reported that nearly half of women had nutritional taboos. 26,32 The gap might be due to differences in the time of study and in the setting. Studies from Ghana, India, Madagascar, and Kenya 8,30,33,34 have also reported different levels of food taboos among mothers during pregnancy. Similar food items usually avoided by pregnant mothers, mainly animal products, were reported in these studies. The reasons mentioned for avoiding some food items are related to various misperceptions, which demands effort to change for the better.
Experience of lifting and carrying heavy loads and/or strenuous physical activities during pregnancy was reported by a few study participants. Vigorous physical exercise that is not supported by experts and is not scientifically sound is not advisable. 35,36 This study identified that women engage in vigorous exercise because of various beliefs, such as making labor easy and fast, but in most cases it might end in a bad outcome. A similar report was made from Turkey, where a few mothers jump from a high point, assuming that it helps to facilitate labor and make it easy. 37 In this study, a few mothers discussed that mothers are advised to do heavy work at different times during their pregnancy, which implies that mothers are taking risks. Therefore, this needs attention, and strong work should be done to help them understand its impact and take corrective measures.
Generally, although there are indications of a reduction in harmful traditional practices, this is still an area that demands great attention in order to empower women and community members so that they can bring about better change in terms of avoiding harmful traditional practices during pregnancy and childbirth. It is important to improve health-seeking behavior, especially to promote antenatal care, which is the golden opportunity to address many health-related issues during pregnancy, childbirth, and even the postnatal period.
Limitations of the study
This study is not free of limitations: we did not get the chance to observe any of the listed herbal medicines, the kinds of prohibited food items, how abdominal massage is actually done, or the kinds of strenuous physical exercise. In addition, since the interviews involved health extension workers, participants might have concealed some issues they thought would not be accepted by their health extension worker, suggesting possible social desirability bias.
Conclusion
In this study, it was found that abdominal massage, use of herbal medicine, food prohibition, and strenuous physical activity during pregnancy and childbirth should not be underestimated, although they are becoming less common. Some mothers reported pregnancy-related side effects from different cultural practices during their pregnancy and childbirth period. That there is some awareness of their impact, and that there are indications that such harmful practices are decreasing, is a good lesson; nevertheless, it is important to give the issue adequate attention and implement the necessary interventions to bring about better change. Therefore, a health education program is recommended that targets the major reasons pushing mothers toward such harmful traditional practices.
Genetic Diversity and Population Structure of Serbian Barley (Hordeum vulgare L.) Collection during a 40-Year Long Breeding Period
Determination of genetic diversity and population structure of breeding material is an important prerequisite for discovering novel and valuable alleles aimed at crop improvement. This study's main objective was to characterize genetic diversity and population structure of a collection representing a 40-year long historical period of barley (Hordeum vulgare L.) breeding, using microsatellites, pedigree, and phenotypic data. The set of 90 barley genotypes was phenotyped during three growing seasons and genotyped with 338 polymorphic alleles. The indicators of genetic diversity showed differentiation changes throughout the breeding periods. The population structure discriminated the breeding material into three distinctive groups. The principal coordinate analysis grouped the genotypes according to their growth habit and row type. An analysis of phenotypic variance (ANOVA) showed that almost all investigated traits varied significantly between row types, seasons, and breeding periods. A positive effect on yield progress during the 40-year long breeding period could be partly attributed to breeding for shorter plants, which reduced lodging and thus provided higher yield stability. The breeding material revealed a considerable diversity level based on microsatellite and phenotypic data, without a tendency toward genetic erosion throughout the breeding history, and implied dynamic changes in genetic backgrounds, providing a great gene pool suitable for further barley improvement.
Introduction
Cultivated barley (Hordeum vulgare L.) is one of the most important crops, ranking as the fourth most produced cereal after wheat, maize, and rice. It is among the crops best adapted to an exceptionally wide range of environmental conditions and is grown in more than 100 countries worldwide [1]. A recent stagnation of barley production in Europe has primarily been caused by climate change and by socio-economic and agronomic factors. The unchanged barley production could be the result of a 15% decline in area, offset by moderate yield growth and increased yield variability (http://faostat.fao.org). Some of these trends across Europe may have been caused by recent climate changes. Nevertheless, it is unlikely that barley production has reached its maximum genetic yield potential. However, the frequent use of narrow genetic pools in breeding programs could aggravate this stagnation.
Plant breeding is based on the effective shuffling of genetic variation aimed at generating new and improved combinations of alleles, and assembling them in a single superior genetic background. A sufficient level of genetic diversity is a critical component in successful breeding programs. Nonetheless, modern intensive plant breeding practices led to a reduction of genetic diversity and formation of a genetic bottleneck as a consequence of relatively narrow germplasm pools used throughout breeding processes [2]. Aside from modern breeding, the long-term domestication history also greatly impacted the trend of the loss of genetic diversity [3]. This huge decrement of genetic variability could hinder breeding endeavors in coping with current and future challenges of biotic and abiotic stresses [4].
Numerous studies revealed different levels of genetic diversity in accessions from various geographic origins [5][6][7][8][9][10]. A considerably higher level of genetic diversity and the number of haplotypes was found in wild barley accessions and landraces compared to present-day cultivated barley genotypes [11][12][13]. The level of genetic diversity considering allelic richness, gene diversity, and percentage of unique alleles in cultivated barley genotypes ranged from very low in regions of Europe [14] to extremely high compared to other continents [7,15,16]. In addition, barley genotypes can be divided into groups according to the spike morphology, intended use, and seasonal growth habit, which is the result of strong selection for different target traits such as yield, malting quality, resistance to diseases, and tolerance to abiotic stresses [17,18]. The factors with the largest effects on population structure in plants were determined as mutations, human and/or environmental selection, genetic drift, mating system, and growth habit [19]. In the case of barley, these factors were manifested in a mutation of an ancestral wild-type two-rowed barley resulting in a recessive six-rowed type after domestication [20], geographical segregation and separate breeding of the two-and six-rowed types, and their adaptation to different environments leading to differentiation of the spring and winter forms [21].
The knowledge of allelic composition of parental lines could facilitate breeding for certain agroclimatic regions [10]. Among various methods based on morphological, physiological, and biochemical information in the last decades, DNA-based markers are routinely used for detection of polymorphism, marker-assisted selection, fingerprinting, diversity studies, and many other molecular and genetic analyses [22]. The level of genetic diversity can be estimated with pedigree data; however, one of its major deficiencies is a lack of reliability of pedigree information due to insufficient, faulty, or incorrect data in the available literature. Assessment of genetic diversity directly on the DNA level by estimation of the proportion of alleles identical by state [19] could be of great assistance during the transfer of desired combinations of disease resistance, quality, or yield-related traits into the existent modern genotypes [23]. After several decades of applying molecular markers in different genetic studies, microsatellites are still being used for crop improvement in breeding programs due to their high levels of polymorphisms, codominant and multiallelic nature, an unambiguous designation of alleles, relative assay simplicity, high reproducibility, and stability [21,22,24,25]. The value of simple sequence repeats (SSRs), as a powerful means for genome mapping, variety identification, and genetic analyses in barley breeding, has been highlighted in many studies [6][7][8]23,[26][27][28]. Estimation of genetic diversity by SSR markers associated with specific traits may reveal the real effects of selection during modern breeding compared to detecting polymorphism in non-coding genomic regions [29,30]. The discovery of highly sophisticated breeding tools and access to a broad genetic diversity of barley are two main cornerstones for increasing gain from selection and will probably remain the major key factors for further breeding progress.
Although the presence of valuable genetic diversity in breeding material is one of the main prerequisites for creation of superior yield varieties with high-quality characteristics, limited information is available on genetic diversity of barley genotypes from central and southeast Europe that are widely used in Serbian breeding programs, and it is mostly based on morphological and physiological traits. The use of these genetic resources in barley breeding started more than 70 years ago with collecting local landraces, followed by the introduction of foreign varieties well-adapted to the local agroecological conditions and development of modern varieties [31]. In addition to the considerable achievements during barley breeding history, better insight into genetic diversity on a phenotypic and, especially, on a molecular level is crucial for further breeding improvements.
The main aims of this study are to determine genetic diversity and population structure of a representative barley germplasm using SSR markers, pedigree, and phenotypic data, to compare genetic diversity of three main historical breeding periods, including the diversity between different groups of varieties, and to validate the suitability of the collection for further quantitative trait studies.
Materials and Methods
The barley collection of the Institute of Field and Vegetable Crops (IFVCNS) in Novi Sad, Serbia, comprises more than 700 winter and 400 spring barley varieties and elite breeding lines. From this collection, a representative core set of 90 genotypes, originating from Bulgaria, Czech Republic, Croatia, France, Germany, Hungary, Romania, and Serbia, was selected for phenotypic and molecular characterization based on row type, seasonal growth habit, and good adaptability to the environmental conditions of central and southeastern Europe. All the genetic materials were obtained from the IFVCNS collection and were adapted to wide geographical regions across the country (Table 1). This representative panel consisted of varieties released from the early 1970s until 2012. The spring barley varieties NS Vujan, NS Marko, and NS Mile were included in the field trials in 2011 and 2012 as experimental lines, before their official release in fall 2012. Based on growth habit and row type, the panel comprised 36 winter two-rowed, 35 winter six-rowed, and 19 spring two-rowed barley genotypes. Spring six-rowed genotypes were not represented in the panel: the lack of industrial demand for spring six-rowed barley in this part of southeast Europe has led to its absence from the market and, subsequently, from breeding programs and commercial production.
For molecular analyses, total genomic DNA was extracted from a young leaf seedling of each of the 90 genotypes using a modified cetyltrimethylammonium bromide (CTAB) method [32]. PCR amplifications were performed according to the protocol outlined by Röder et al. [33]. Fifty SSR markers with their primers and annealing temperatures were obtained from the GrainGenes database (Table S1). The selection of markers was based on their associations with important agronomic traits and their even distribution along all seven chromosomes: 1H (7), 2H (8), 3H (6), 4H (7), 5H (7), 6H (7), and 7H (8). PCR was performed in a reaction mixture of 10 µL containing 30 ng of template DNA, 1× PCR buffer, 2 mM MgCl2, 0.2 mM of each deoxynucleotide, 5 pmol of each fluorescently labelled forward and unlabelled reverse primer, and 1 unit of Taq polymerase (Applied Biosystems, Foster City, CA, USA). The amplification protocol included an initial denaturation step of 5 min at 94 °C, followed by 35 cycles of 30 s at 94 °C, 45 s at the annealing temperature (55, 58, 60, or 62 °C), and extension for 45 s at 72 °C, with a final extension step of 10 min at 72 °C. After PCR procedure optimization, the obtained products were determined using fragment analysis on a Genetic Analyzer 3130 (Applied Biosystems, Foster City, CA, USA) and analyzed in GeneMapper software version 4.0 (Applied Biosystems, Foster City, CA, USA). The reaction volume of 10 µL consisted of 2 µL of mixed differently labelled PCR products, 0.2 µL of GeneScan 500 LIZ as a size standard, and 7.8 µL of Hi-Di formamide. For each microsatellite and barley group, the parameters of genetic diversity were obtained in GenAlEx software 6.5 (The Australian National University, Canberra, Australia) [34], namely the number of detected alleles per locus, the number of effective alleles, Shannon's information index, the number of private alleles, polymorphic information content (PIC), observed heterozygosity, unbiased expected heterozygosity, the Wright fixation index, and allelic richness. The PIC value of each marker was used to evaluate its diversity level using the formula PIC = 1 − Σ(p_ij)², where p_ij is the frequency of the jth allele of marker i and the sum runs over all alleles j. Allelic richness was calculated as the total allele count within the group divided by the group size. The number of private alleles represented the number of alleles unique to a single group. The population structure of the 90 barley genotypes was inferred with the Bayesian statistical model implemented in the program Structure v.2.3.4 (Stanford University, Stanford, CA, USA) [35]. The algorithm was run using the admixture model with 10 runs for 2 to 10 assumed groups, using 100,000 Markov chain repetitions after a burn-in period of 100,000 iterations. The most probable number of clusters was estimated by plotting the estimated likelihood values Ln Pr(X|K) and by calculating the delta K (∆K) statistic, an ad hoc measure based on the rate of change in the log probabilities between successive assumed numbers of groups (K), developed by Evanno et al. [36]. To determine the true number of groups that best fit the data, likelihood values across multiple values of K were compared and visualized with the software Structure Harvester v.0.6.94 [37] and reported as multiple modes from Clumpak results within replicate runs for a given K [38]. A cut-off limit of 50% was used to assign each genotype to an individual cluster.
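To make the locus-level computation concrete, the following minimal Python sketch (illustrative, not from the paper) applies the PIC formula above to a vector of allele frequencies, together with two of the other listed diversity measures; the example frequencies are hypothetical.

import math

def diversity_stats(freqs):
    # freqs: allele frequencies at one locus; they must sum to 1.
    assert abs(sum(freqs) - 1.0) < 1e-9
    sum_sq = sum(p * p for p in freqs)
    return {
        "PIC": 1.0 - sum_sq,                                 # as defined in the text
        "Ne": 1.0 / sum_sq,                                  # effective number of alleles
        "I": -sum(p * math.log(p) for p in freqs if p > 0),  # Shannon's information index
    }

# Hypothetical locus with four alleles at frequencies 0.4, 0.3, 0.2, 0.1:
print(diversity_stats([0.4, 0.3, 0.2, 0.1]))
# approximately {'PIC': 0.70, 'Ne': 3.33, 'I': 1.28}

Note that the formula given in the text is the expected-heterozygosity form of PIC; GenAlEx reports equivalent quantities per locus and per group.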
In order to verify the results obtained with Structure and examine the genetic relationships among the genotypes, a principal coordinate analysis (PCoA) based on the molecular data was performed using a covariance matrix with standardized data in the program GenAlEx 6.5 (The Australian National University, Canberra, Australia) [34], considering the whole population and the row type groups individually. An analysis of molecular variance (AMOVA) was performed on the clusters obtained in the program Structure to assess population differentiation, also in GenAlEx 6.5 (The Australian National University, Canberra, Australia) [34].
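For readers unfamiliar with PCoA, the core computation is a classical (metric) multidimensional scaling of a genetic distance matrix. The sketch below (Python/NumPy, a generic textbook implementation rather than the GenAlEx code) shows the standard double-centering and eigendecomposition steps.

import numpy as np

def pcoa(D, n_axes=2):
    # D: symmetric (n x n) distance matrix.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gower matrix
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]           # sort eigenvalues descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    pos = eigval > 0                           # keep only positive eigenvalues
    coords = eigvec[:, pos] * np.sqrt(eigval[pos])
    explained = eigval[pos] / eigval[pos].sum()
    return coords[:, :n_axes], explained[:n_axes]

The fraction of variation "explained" by the first coordinates, as reported in the Results below, corresponds to the normalized positive eigenvalues returned here.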
The pedigree data were obtained from the Barley pedigree catalogue (http://genbank.vurv.cz/barley/pedigree/pedigree.asp) and from available breeders' records (Table 1). For the pedigree analysis, the coefficient of co-ancestry between genotypes was calculated using the Winkin2 program (Agriculture and Agrifood Canada, Ottawa, ON, Canada) [39], obtained from the authors upon request. The coefficient of co-ancestry ranges from 0, in the absence of any degree of relatedness, to 1, for the maximum degree of kinship. The obtained kinship matrix was transformed into a distance matrix. The Mantel test was used to determine the correlation between the SSR-based genetic similarity matrix and the pedigree matrix in the program GenAlEx 6.5 (The Australian National University, Canberra, Australia) [34].
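The Mantel statistic itself is just the Pearson correlation of the off-diagonal entries of two distance matrices, with significance assessed by jointly permuting the rows and columns of one matrix. A minimal sketch (Python/NumPy, illustrative rather than the GenAlEx implementation):

import numpy as np

def mantel(X, Y, n_perm=9999, seed=1):
    # X, Y: symmetric (n x n) distance matrices for the same n individuals.
    iu = np.triu_indices_from(X, k=1)          # upper-triangle (off-diagonal) entries
    x, y = X[iu], Y[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(X.shape[0])        # permute rows/columns of Y together
        if np.corrcoef(x, Y[np.ix_(p, p)][iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)   # one-sided permutation p-value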
The field experiment was performed at the experimental site Rimski šančevi (45°20′ N, 19°51′ E, 84 m a.s.l.), Serbia. The trials were conducted in a randomized complete block design with three replications (blocks). Sowing took place on 5, 8, and 10 October for the winter varieties and on 16, 8, and 20 March for the spring varieties, in the three growing seasons of 2010-11, 2011-12, and 2012-13, respectively. The plot size was 1 m wide and 5 m long, with 0.2 m spacing between rows. The block sizes were 215 m², 209 m², and 113 m² for the two-rowed winter, six-rowed winter, and two-rowed spring barley varieties, respectively, with 1 m between blocks. Eight morphologically and agronomically important traits were evaluated: heading date (HD), flowering time (FT), stem height (SH), spike length (SL), grain number per m² (GN), hectoliter weight (HW), thousand grain weight (TGW), and yield (YLD). Heading date and flowering time were recorded as the number of days from seedling emergence until spikes emerged from the flag leaf sheaths and until the first anthers were visible, respectively, on 50% of plants in the plot. Stem height was measured from the ground to the spike base, while spike length was measured without the awns. Multivariate analysis of variance (MANOVA), univariate analysis of phenotypic variance (ANOVA), and Tukey's multiple comparison test were performed in the statistical software R [40]. Type III ANOVA, which is not sample-size dependent, was used to address the unequal number of observations in each group. Partial eta-squared was used as the measure of effect size for MANOVA and ANOVA [41]. The Mantel test was used to determine the correlation between the SSR and phenotypic similarity matrices in the program GenAlEx 6.5 (The Australian National University, Canberra, Australia) [34].
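Partial eta-squared has a simple closed form from the ANOVA sums of squares; a one-line sketch with hypothetical values:

def partial_eta_squared(ss_effect, ss_error):
    # Proportion of variance attributable to the effect, partialling out other effects.
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares: SS_effect = 120, SS_error = 80 -> 0.6
print(partial_eta_squared(120.0, 80.0))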
Results
Out of the 50 primer pairs used to evaluate the genetic diversity of the 90 varieties, 48 microsatellites produced a clear and polymorphic band pattern covering all linkage groups, while two were monomorphic. Allelic variants with allele frequencies below 1% were excluded from further analysis. The molecular diversity parameters implied considerable variability in the barley collection. A total of 338 polymorphic alleles were detected, with an average of 6.76 alleles per locus. The number of alleles per locus ranged from 1, for loci Bmac0030 and Bmag0223, to 16, for Bmag0225 (Table S2). Chromosome 4H had the highest number of detected allelic variants. The number of effective alleles varied from 1 to 9.5, with a mean of 3.6. Shannon's information index was highest at locus AWBMS56 and lowest at locus GBM1164. Observed heterozygosity ranged from 0 to 0.067. The smallest PIC value, excluding the monomorphic loci, was 0.166, observed at locus GBM1164, while the highest PIC value (0.895) was determined at locus AWBMS56. The average PIC across all analyzed loci was 0.625, and only seven loci had a PIC value below 0.5. Wright fixation index values were high, ranging from 0.868 to 1.000. The presence of null alleles was observed at two loci (Bmag0120 and Bmag0613). Structure runs were performed for K = 2 to K = 10 based on the 50 SSR data. Ln Pr(X|K) values increased sharply until K = 4, after which the increase was slow without reaching a plateau. Clumpak identified the highest Ln Pr(X|K) value at K = 9 as the most appropriate number of clusters in this collection (Figure S1). The primary division at K = 2 was observed mainly between the winter (orange) and spring (bright blue) types (Figure 1). At K = 3, the genotypes separated according to growth habit and row type, marked with orange (W2R), bright blue (W6R), and purple (S2R). Within K = 4, the genotypes mostly originating from France, Germany, and Hungary were separated into a distinct cluster (dark green). These genotypes were additionally divided into two new clusters generally based on row type (K = 5). The next cluster was obtained from the spring genotypes, where varieties mostly from the third breeding period were singled out into a new, sixth group (K = 6). The seventh cluster (K = 7) emerged from the two-rowed winter group, comprising some varieties from the first breeding period. Several six-rowed varieties, mostly from the second breeding period, were further differentiated into a new cluster (K = 8). The ninth subcluster (K = 9) could be explained by the first, second, and third breeding periods detected in the spring varieties, although the division was imprecise and not clear-cut. The population structure results obtained with Evanno's method showed that the analyzed genotypes could be distributed into three separate clusters (Figure S2), corresponding to the classification based on seasonal growth habit and row type. Figure 1. Each variety is represented by a vertical line divided into K colored segments proportional to the likelihood of its membership in the assigned cluster, from K = 2 to K = 9. W2R: two-rowed winter genotypes; W6R: six-rowed winter genotypes; S2R: two-rowed spring genotypes; BPI: first breeding period; BPII: second breeding period; BPIII: third breeding period.
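The ∆K criterion used for the model choice above has a simple form: it compares the second-order rate of change of Ln Pr(X|K) against the between-run variability. One common formulation (a Python/NumPy sketch with hypothetical replicate-run inputs, not the Structure Harvester code) is:

import numpy as np

def delta_k(lnp):
    # lnp: dict mapping K -> array of Ln Pr(X|K) values over replicate runs.
    ks = sorted(lnp)
    dk = {}
    for k in ks[1:-1]:
        l2 = np.mean(lnp[k + 1]) - 2.0 * np.mean(lnp[k]) + np.mean(lnp[k - 1])
        dk[k] = abs(l2) / np.std(lnp[k])  # Evanno et al. (2005) ad hoc statistic
    return dk  # the K with the largest delta-K is taken as the supported cluster number

Implementations differ slightly in how the second difference is averaged over runs, which is one reason the Ln Pr(X|K) and ∆K criteria can point to different K values, as seen here.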
The PCoA also revealed genetic differentiation of the barley genotypes into three clusters, clearly divided according to row type and growth habit (Figure 2). The first two main coordinates accounted for over 35% of the total molecular variation. The first coordinate separated most of the winter two-rowed barley genotypes from the other two groups, whereas the second principal coordinate additionally split the winter six-rowed barley from the spring two-rowed genotypes. The clustering of the barley genotypes by the PCoA corresponded to the groups determined by the Structure analysis. Taking into account the different years of release of the analyzed varieties, three partly overlapping groups could be noticed, reflecting three different breeding periods (Figure 3). The first group consisted of 24 varieties released from 1973 to 1990, the second group comprised 28 varieties released from 1991 to 2004, and the third group contained 38 modern genotypes developed from 2005 to 2012. The shifts in genetic diversity throughout the breeding periods could also be observed by PCoA, which showed broader genetic diversity of the genotypes from the second and third periods in comparison with those representing the first breeding period. The first group of genotypes, released from the 1970s to 1990, displayed significantly narrower molecular variability and was separated by the first coordinate from the other groups. The genotypes from the third breeding period were more distant from the barley genotypes of the first period; the varieties from this breeding period were the most dispersed on the PCoA biplot and markedly overlapped with the second breeding period clusters.
The level of molecular diversity was compared between the groups of barley genotypes based on the historical breeding periods, row type, and growth habit (Table 2). Considering the grouping based on spike architecture and seasonal growth habit, the spring two-rowed barley manifested the highest values for all molecular diversity parameters, although this group comprised the smallest number of genotypes (Table 2). The winter two-rowed varieties were more diverse than the winter six-rowed types, taking into account all diversity parameters.
The AMOVA showed that 37% of the total molecular variance was attributed to genetic variation among populations; the main proportion of the total molecular variation (62%) was explained by variation among individuals within the groups, while only 1% of the total variance was associated with differentiation within individuals (Table 3). Pairwise Nei genetic distances between groups ranged from 0.309 (between W2R and W6R) to 0.499 (between W6R and S2R). In order to examine the correlation between the molecular and pedigree data, the genetic distance matrices based on the microsatellite and pedigree data were compared. The comparison of the SSR and pedigree distance matrices using the Mantel test showed a significant, moderately low positive correlation (r = 0.53, p < 0.001) (Figure 4a). In addition, the correlation between the molecular and phenotypic data (Figure 4b) was also positive and slightly higher (r = 0.66, p < 0.001) than the correlation between the molecular and pedigree data. In order to test whether the combination of the independent variables simultaneously explained a significant amount of variance in the dependent variables, MANOVA was performed (Table S3). The multivariate Wilks' lambda test showed significant main effects of season, row type, breeding period, and their interactions. Partial eta-squared values were used to indicate the proportion of the variation in the dependent variables associated with the main effects and their interaction. The results showed that 50.7% and 83.7% of the variance was accounted for by season and row type, respectively, while the breeding period and the interactions accounted for much less.
The ANOVA for each trait separately showed that most of the investigated traits varied significantly between seasons, row types, and breeding periods (Table 4, Tables S4-S11).
Comparison of the three groups with different row types revealed that two-rowed winter barley had earlier heading and flowering times than the six-rowed types. Furthermore, the two-rowed winter group showed shorter stems, longer spikes, a lower grain number per m², greater thousand grain weight, greater hectoliter weight, and greater yield than the six-rowed types. Most of the yield-related traits had the lowest values in the spring two-rowed barley group. Yield and grain number per m² had the highest coefficients of variation, ranging from 21.9% to 26.4% and from 19.3% to 28.9%, respectively. The smallest variation was observed for heading and flowering time, with coefficients of variation of 2.8% to 7% and 2.9% to 7.1%, respectively (Table 4).
The varieties belonging to different historical breeding periods differed significantly with respect to almost all of the investigated traits. The earliest heading and flowering times were observed in the third historical period for the six-rowed barley varieties, and in the second and third periods for the two-rowed winter barley group. The average plant height decreased significantly in the last two breeding periods compared with the first period in the two- and six-rowed winter varieties. A significant increase in spike length over the breeding periods was noticed in two-rowed winter and spring barley, while no change over time was detected for the six-rowed barley. Both two- and six-rowed winter varieties showed an increase in grain number per m² over time. Thousand grain weight increased significantly throughout the investigated historical periods, especially in the last two, for all three barley groups. For both winter and spring two-rowed varieties, there was a significant increase in hectoliter weight in the third period. No significant changes in hectoliter weight over the 40 years were observed for the six-rowed type. Yield, as one of the most important agronomic traits, improved significantly over time. This trend of gradual increase was more pronounced in the six-rowed winter types, with mean values rising from 6.7 t ha⁻¹ through 7.25 t ha⁻¹ to 7.66 t ha⁻¹. The yield was also considerably improved in the second and third periods in two-rowed winter barley, with the average value increasing from 7.09 t ha⁻¹ to 8.06 t ha⁻¹ (Table 4).
Discussion
Globally, the breeding of modern cereals has caused a rapid decrease in the level of genetic diversity over time due to focused selection on targeted genes or quantitative trait loci (QTLs) [42]. Therefore, information on the current state of genetic diversity and the extent of any genetic reduction in European germplasm could be of great importance for barley breeding, enabling effective improvement of important traits and accurate estimation of genetic relationships and diversity [43]. Gougerdchi et al. [8] emphasized the assessment of genetic diversity based on molecular markers as one of the primary and essential steps in a modern breeding strategy. Considering the significance of revealing allelic changes and population structure over time [44,45], the aim of our study was to detect changes that occurred during several decades of breeding efforts at IFVCNS. The molecular diversity parameters implied considerable variability in our barley collection. Among the 90 analyzed barley genotypes, the chosen set of 50 markers amplified an average of 6.76 alleles per locus, with a range from 1 to 16 alleles and a mean PIC of 0.62. These results were comparable with the findings of Varshney et al. [46], who reported an average PIC value of 0.58 in barley lines from six countries using 28 microsatellites. Our results also agreed with the mean PIC value of 0.57 obtained in Brazilian genotypes with 34 SSR loci [10]. In other diversity studies, mean PIC values were somewhat lower than those presented here, ranging from 0.28 to 0.46 [8,47,48], which could be due to the relatively small areas of origin of the genotypes [5]. Rajala et al. [9], however, demonstrated a satisfactory level of variability in northern European barley genotypes, thus contesting the genetic erosion implied by geographical frontiers and in line with the high level of genetic diversity found in our barley collection.
The introduction of new breeding material had great importance for improving the most important selection traits. In this study, the two-rowed spring and winter varieties had higher values of the genetic diversity parameters than the six-rowed varieties, which is in agreement with the findings of Surlan-Momirovic et al. [49] and could be explained by the use of more diverse breeding material for developing two-rowed varieties and by a more intensive germplasm exchange for two-rowed barley than for the analyzed six-rowed barley. Therefore, the introduction of novel germplasm and a more comprehensive use of genetic resources led to the enlargement of variability with new alleles in the Serbian breeding program, which was especially evident in the second and third breeding periods.
Genetic relatedness among the barley varieties was estimated using Bayesian clustering, PCoA, and analysis of molecular variance. The results of both methods outlined by Pritchard et al. [35] and by Evanno et al. [36], as well as biological factors that could influence the choice of K [50], were considered when selecting the appropriate number of clusters (in our case K = 3). To avoid underestimating the population structure by relying on a single method that could collapse the top level of the hierarchy into only two clusters, and to ensure reproducibility of the Structure results, we performed a hierarchical analysis, including Structure bar plots for multiple values of K, according to Janes et al. [51]. In our study, although the maximum value of Ln Pr(X|K) was reached for nine clusters, this result did not have full biological and agronomic justification. On the other hand, the results based on the Evanno method did not underestimate the number of groups, and the obtained three clusters best defined the studied barley collection according to growth habit and row type. The division of the genotypes into more groups was not clear and could be only partially explained by different breeding periods, row types, and countries of origin. Many studies of worldwide [52], European [53], American [54], and Nordic [9,55] barley germplasm confirmed that population structure is largely conditioned by differences in row type. Moreover, Mathies et al. [56], in a genome-wide association study of malting and kernel quality, showed that grouping of European barley according to seasonal growth habit and row number could be achieved more precisely and accurately with fewer SSR markers than with a larger number of Diversity Arrays Technology (DArT) markers, which was also confirmed in studies comparing SSRs with other marker types [57,58].
The AMOVA results supported the PCoA and Structure analyses. The partitioning of molecular variation showed that the highest variation occurred among individuals within the same group, implying differentiation by both seasonal growth habit and ear row type. Similarly, Khodayari et al. [16] reported the highest diversity (60.7%) among Iranian accessions within the same row type, while Koebner et al. [44] noted a significant part of the molecular variance attributed to the seasonal group. Malysheva et al. [45] attributed 17% and 19.5% of the variation to the differences between spring and winter types and between two-rowed and six-rowed varieties, respectively, which was similar to the variance shares observed between the groups in our study.
The moderately low correlation (r = 0.53) between the microsatellite and pedigree data was comparable to the correlation (r = 0.46) between the pedigree data of 92 Canadian barley varieties and 50 SSR markers [19]. The relatively low correlation between microsatellite and pedigree data could be a consequence of pedigree errors, which are common in breeding. Moreover, inaccurate pedigrees could also result from incomplete data, as a lack of ancestry information can prevent pedigrees from being traced back over several generations. Since breeding is a complex multistep process, incorrect pedigrees could subsequently lead to inadequate estimates of genetic parameters such as additive variance, heritability, genetic correlations, and breeding values [21]. This deficiency in pedigree data could be corrected by simultaneously genotyping parents and progeny with a dense panel of molecular markers [59]. A stronger correlation could be obtained with more markers, which would allow a more precise estimation of the actual relationships between related genotypes and identification of the genome regions inherited from a common ancestor [60]. A slightly higher correlation was determined between the molecular and phenotypic data (r = 0.66). This was considerably lower than the correlation (0.82) found between the similarity distances of 21 microsatellites and 21 morphological traits in the study of Koebner et al. [44]. It is possible that the agronomic traits used in our study had less discriminative power, which was, in turn, reflected in a lower correlation. A positive effect on the observed yield progress during the 40-year long breeding period could be partly attributed to breeding for shorter plants, which reduced lodging and thus provided higher yield stability. This is in agreement with Ortiz et al. [61], who observed a reduction in stem height of Nordic spring barley varieties of 0.20 cm per year from 1948 to 1988. The observed effect of the season on plant height is in accordance with [3,62], who demonstrated a large influence of environmental factors on the expression of stem height. The shift from the later heading and flowering varieties of the first and second breeding periods towards the earlier heading and flowering genotypes of the third period could be interpreted as a strategy to avoid drought [63], which in the Pannonian Basin and other European countries is one of the main limiting factors for agricultural production [64]. The increase in thousand grain weight of both two- and six-rowed types over the breeding periods was in accordance with the findings of Schwarz et al. [65], who detected a consistent, although not significant, improvement in thousand grain weight from 1910 to 1990. Both two- and six-rowed varieties developed during the most recent period showed improved yield-related traits, such as thousand kernel weight and grain number per plot, reflecting an enlargement of genotypic diversity, which was also confirmed by the molecular analysis.
The selected microsatellites revealed a considerable level of genetic diversity, proving suitable for the characterization of barley germplasm and its more efficient use in the barley selection process. Unlike barley breeding in some countries, which underwent a decline in genetic diversity [3,66], the molecular and phenotypic analyses in our study indicated no genetic erosion in the barley genotypes from central and southeast Europe used over the last several decades. The considerable molecular and phenotypic diversity of the analyzed barley varieties implied their great potential for further barley improvement and quantitative trait studies.
Conclusions
The introduction of novel germplasm and more comprehensive use of genetic resources could be of great importance in increasing the variability with new alleles in Serbian breeding program, which was especially evident in the second and the third breeding periods. The relatively low correlation between microsatellites and pedigree data could be a consequence of unavailable pedigree or pedigree errors, which are common in most breeding programs. Yield progress during the 40-year long breeding tradition could be partly attributed to breeding for shorter plants, which reduced lodging and thus provided higher yield adaptability and stability. The selection of earlier heading and flowering genotypes from the third period could be interpreted as a strategy to avoid drought as one of the main limiting factors for agriculture production in the Pannonian Basin and other European countries.
Supplementary Materials: The following are available online at https://www.mdpi.com/2073-4395/11/1/118/s1. Figure S1: Estimation of the number of subpopulations (∆K) calculated by Evanno's approach (2005), obtained in the Structure Harvester and Clumpak programs. Figure S2: Probability of the data (Ln) for numbers of clusters (K) ranging from 2 to 10, obtained by Structure Harvester and Clumpak. Table S1: Marker names, forward and reverse primer sequences, annealing temperatures, and repeat motifs. Table S2: Marker names, chromosome positions, size ranges of detected alleles, and basic molecular diversity parameters. Table S3: Multivariate analysis of variance (MANOVA) using the Wilks' lambda test of differences between group means for a combination of the analyzed agronomic traits.
Comparison of MRI Versus Arthroscopy in Assessment of Anterior Cruciate Ligament Injuries of the Knee Keeping Arthroscopy as Gold Standard
Introduction
Because MRI clearly displays the ACL, menisci, ligaments, and articular surfaces of the knee, it has become an essential tool for assessing ACL damage. MRI provides extensive information about the ACL's architecture and condition through the use of different imaging sequences such as T1- and T2-weighted imaging. ACL injuries can be detected with great sensitivity and specificity, 1 allowing for a precise initial diagnosis and the creation of targeted treatment programmes.
When compared with other joints, knees are the most likely to be injured in sports and car accidents. 2 Menisci, tendons, ligaments, and bones all make up the knee joint. 3 These structures are crucial in keeping the bones in their proper positions and the joints stable. 4 Internal knee joint disorders are a common health concern for young athletes. 5 They can cause damage to the menisci and ligaments, preventing the joint from functioning normally. To arrive at a correct diagnosis, it is necessary to isolate the relevant mechanisms. The severity of a knee injury can be estimated from the results of a clinical examination and initial imaging (often an X-ray). 6 Common knee injuries include tears of the meniscus and the anterior cruciate ligament (ACL). The clinical examination was once the mainstay of diagnosis. 7 However, modern diagnostic tools have improved the likelihood of a correct diagnosis. 8 The use of MRI has greatly enhanced the accuracy and non-invasiveness of diagnosing ACL and meniscal injuries. MRI allows a more in-depth understanding of the knee than is achievable with more traditional testing methods. 3 Compared with computed tomography, magnetic resonance imaging (MRI) provides a more comprehensive evaluation of the knee's soft tissues and bones. 9 Arthroscopy is another common method, since it allows in-depth examination of the knee joint and, consequently, more accurate diagnosis and treatment. Arthroscopy is the best diagnostic tool for identifying knee problems. 10,11 It is essential to keep in mind, however, that arthroscopy is an invasive procedure that calls for a hospital stay. 11 Accurate results are highly dependent on the operator's skill and experience. The study's overarching objective is to determine the relative benefits of magnetic resonance imaging (MRI) and arthroscopy for diagnosing ACL injury. The purpose of this research is to improve our knowledge of ACL injuries and provide reliable guidance for diagnosis and rehabilitation. The results will help medical personnel choose the most appropriate imaging modalities, factoring in precision, invasiveness, cost, and level of expertise. This will mean more effective treatment for patients and less waste of healthcare resources.
Methodology
This study employed a prospective cross-sectional design to compare the diagnostic accuracy of Magnetic Resonance Imaging (MRI) and Arthroscopy in assessing anterior cruciate ligament (ACL) injuries. The study was conducted at the Combined Military Hospital in Rawalpindi, Pakistan, from February to August 2019. The study received approval from the ethical review committee, and all participants provided written consent before participating in the research.
The study comprised 127 patients with symptoms suggestive of an ACL tear. Patients who presented with edema, instability, or pain in the context of a suspected ACL injury met the inclusion criteria. Only patients between the ages of 18 and 50 were enrolled; those unfit for anaesthesia or with metal implants were excluded. Patients with fractures of the femoral condyle, tibial plateau, or tibial spine, or with isolated injuries of the anterior, lateral, or posterior cruciate ligaments, were also ruled out.
Participants were scanned using a GE 1.5 Tesla MRI scanner. T1- and T2-weighted sequences were used to image the knee in the coronal and sagittal planes. The MRI scans were reported by the hospital's radiology department.
The arthroscopic and MRI findings were entered into SPSS 23 software for tabulation and analysis. Depending on whether MRI and arthroscopy agreed on the presence of an ACL tear, the results were classified as true positive (arthroscopy confirmed the MRI diagnosis of a tear), true negative (both procedures showed no ACL injury), false negative (arthroscopy revealed a tear missed on MRI), or false positive (MRI indicated a tear not confirmed on arthroscopy). Descriptive and inferential statistics were used where appropriate.
Results
Out of the 127 patients, 109 (85.8%) were male and 18 (14.2%) were female. This gender distribution can be attributed to males typically being more physically active in sports. Table I displays the frequency distribution of age groups among the patients. Table II provides descriptive statistics and the frequency distribution of the MRI and arthroscopy results, indicating that 107 patients (true positives and true negatives) had the same diagnosis on both MRI and arthroscopy. Ten patients had ACL instability that was missed on MRI but diagnosed on arthroscopy (false negatives). Conversely, ten patients had ACL instability detected on clinical evaluation and MRI, but arthroscopy did not show an ACL injury (false positives). An independent-samples t-test was performed to evaluate the gender distribution. Table III presents the results of the t-test, showing no statistical difference between the genders in terms of the MRI and arthroscopy diagnoses.
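With arthroscopy as the reference standard, these counts reduce to simple confusion-matrix arithmetic. The following minimal Python sketch (illustrative, not from the paper) shows the standard accuracy measures; because the split of the 107 concordant cases into true positives and true negatives is not reported, only the overall MRI-arthroscopy agreement is computed from the study's counts.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard measures against a gold standard (here, arthroscopy).
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Study counts: 107 concordant cases, 10 false negatives, 10 false positives.
concordant, fn, fp = 107, 10, 10
print(f"overall agreement: {concordant / (concordant + fn + fp):.1%}")  # 84.3%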
ANOVA was used to analyze the distribution according to age, as shown in Table IV. The results indicated a statistically significant difference when the findings were distributed by age group.
Discussion
Due to the complexity of the knee, MRI is commonly used and recommended by doctors for evaluating knee injuries. 3 Knee injuries are commonly diagnosed by MRI. 12 MRI scans have the benefit of not necessitating intravenous contrast dyes or needle sticks. 13 The menisci and both the anterior and posterior cruciate ligaments (ACL and PCL) can be injured, and MRI can identify these lesions. 14 However, a doctor's expertise and the MRI equipment itself can affect how reliable the results are. 7,15 Here, we examined how well arthroscopy and MRI each diagnosed ACL injuries. Men are more prone than women to knee injuries, according to a prior study by Avcu et al., who also found that the right knee is more prone to damage than the left. 16 Injuries that necessitate prompt surgical intervention are more common in younger men. 8,17 ACL tears are the most common kind of knee ligament damage, as reported by Shetty et al. 15 Hetta et al. 18 observed that 15 of the 30 patients in their study (60%) had ACL tears and that 35 patients overall had a history of trauma. In another study of 54 patients, 31 (57.5%) had a medial meniscal tear and 11 (20.3%) had an ACL tear. 19 Since measuring joint instability during a clinical evaluation of patients with knee injuries is rather straightforward, we restricted our investigation to ACL tears. In line with Berquist et al., 20 mid-substance tears were the most common form of ACL injury in our patients. ACL injuries are best detected using T2-weighted scans of the knee. 21 The lesion was checked using axial and coronal images. T2-weighted images are the gold standard for diagnosing ACL rupture, according to research by Mink et al. 22 Reported sensitivity and specificity for diagnosing ACL injuries range from 61% to 100% and from 82% to 97%, respectively. 25
One study rated MRI as 88 percent accurate, with "extremely good" interpretation. 8 A radiologist's skill in interpreting MRI scans is highly dependent on their level of education and experience. Other studies 14,27 find that MRI and arthroscopy together are the best way to assess knee health.
Arthroscopic procedures rely heavily on the knowledge and experience of the operating surgeon. 28 Because the anterior cruciate ligament (ACL) lies at an oblique angle in the knee joint, it is unusual for the full ACL to be visible in a single MRI sequence. 21 Although useful for diagnosis, arthroscopy is not a replacement for magnetic resonance imaging (MRI). 29 It is therefore crucial to provide the patient with an in-depth explanation of the surgical method before beginning the operation.
Conclusion
Non-invasive imaging with MRI has allowed for the early diagnosis of meniscal and ACL tears in the knee. Without the need for ionising radiation or invasive procedures, it provides an accurate assessment of ACL damage and soft-tissue anomalies. Being non-invasive, MRI is free of the risks and restrictions of arthroscopy, which is a surgical procedure. The posterior capsule may be difficult to examine during arthroscopy, and extra-articular knee problems may not be amenable to arthroscopic evaluation in some clinical settings. Despite its reliance on operator expertise, arthroscopy remains the gold standard for assessing ACL damage, while MRI remains the investigation of choice for evaluating internal and external knee abnormalities following a knee injury.
Preliminary studies for an integrated assessment of the hydrothermal potential of the Pechelbronn Group in the northern Upper Rhine Graben
The northern Upper Rhine Graben is due to its tectonic setting and the positive geothermal anomaly a key region for geothermal heat and power production in Europe. In this area the Upper Eocene to Lower Oligocene Pechelbronn Group reaches depths of up to 2800 m with temperatures of locally more than 130 °C. In order to assess the hydrothermal potential of the Pechelbronn Group a large dataset is compiled and evaluated. Petrophysical parameters are measured on core samples of eight boreholes (courtesy of Exxon Mobil). Additionally, 15 gamma-ray logs, 99 lithology logs as well as more than 2500 porosity and permeability measurements on cores of some of these boreholes are available. The Lower Pechelbronn Beds are composed of fluvial to lacustrine sediments, the Middle Pechelbronn Beds were deposited in a brackish to marine environment and the Upper Pechelbronn Beds consist of fluvial/alluvial to marine deposits. In between the western and eastern masterfaults of the Upper Rhine Graben several fault blocks exist, with fault orientation being sub-parallel to the graben shoulders. During the syntectonic deposition of the Pechelbronn Group these fault blocks acted as isolated depocenters, resulting in considerable thickness and depositional facies variations on the regional and local scale (few tens to several hundreds of meters). Laboratory measurements of sonic wave velocity, density, porosity, permeability, thermal conductivity and diffusivity are conducted on the core samples that are classified into lithofacies groups. Statistically evaluated petrophysical parameters are assigned to each group. The gamma-ray logs serve to verify the lithological classification and can further be used for correlation analysis or joint inversion with the petrophysical data. Well data, seismic sections, isolines and geological profiles are used to construct a geological 3-D model. It is planned to use the petrophysical, thermal and hydraulic rock properties at a later stage to parametrize the model unit and to determine, together with the temperature and thickness of the model unit, the expected flow rates and reservoir temperatures and thus the hydrothermal potential.
Introduction
Due to the increasing awareness of the anthropogenic impact on global warming and the finite supply of fossil fuels, renewable energies play an important role in the attempted reduction of greenhouse gas emissions. The debate about alternative energy resources often focuses on the supply of electricity, even though heating accounts for approximately 55 % of the annual final energy consumption in Germany (BMWi, 2017). As opposed to other renewable energy sources, geothermal energy can be extracted regardless of the season, the time of day or the weather conditions. It can therefore be used to cover the base load, for both power and heat production depending on the extraction temperature.

The Upper Rhine Graben (URG) is due to its tectonic setting and the positive geothermal anomaly a key region for geothermal heat and power production in Europe. More than 15 geothermal wells have been drilled in the Upper Rhine Graben since the 1980s (Vidal and Genter, 2018). Furthermore, it is one of the most densely populated areas of central Europe with an accordingly high heat demand, while the rural areas are used for agriculture, which could also be extended by greenhouse farming, where geothermal heating could provide a huge potential increase in productivity. Because of its hydrocarbon exploration history, an abundance of exploration wells and seismic surveys cover the URG. The oil and gas industry usually carries out well logging as well as porosity and permeability measurements on core samples (if available). The reservoir horizons for conventional oil and gas exploration are sandstone layers with high effective porosity and matrix permeability. Hydrothermal applications have very similar requirements and additionally require a minimum net thickness of approximately 20-50 m to allow for sufficiently sustainable flow rates (Kaltschmitt et al., 1999).

This study focusses on the Cenozoic graben fill of the northern URG as potential reservoir for direct heat use and seasonal heat storage by hydrothermal well doublets. More than 3400 porosity and permeability data of core samples for several Cenozoic units are available (Bär et al., 2013; Bär and Sass, 2015). The Upper Eocene to Lower Oligocene Pechelbronn Group was chosen for further analyses because of the large amount of data available from various oil and gas exploration wells and the relatively high porosity and matrix permeability of its sandstone layers (Jodocy and Stober, 2011; Kött and Kracht, 2010). With depths of 1200 to 2600 m at its top (Kött and Kracht, 2010) and temperatures of locally more than 130 °C, the Pechelbronn Group seems suitable for geothermal applications.

The aim of this study is to assess the hydrothermal potential of the Pechelbronn Group for direct heat use by means of an integrated 3-D structural-geothermal model that serves to locate potential exploration areas. The assessment is based on reservoir temperature, (net) thickness of the reservoir horizon as well as on petrophysical, thermal and hydraulic rock properties.

We present preliminary results, as the petrophysical property measurements are still being analysed and the 3-D structural model, which is the basis for the assessment of the hydrothermal potential, is not yet completed. Nevertheless, the methodology is described for the entire workflow.
Geology of the study area
The study area is located in the northern Upper Rhine Graben (Fig. 1). In most parts of the northern Upper Rhine Graben the Pechelbronn Group covers the Rotliegend discordantly, except for some locations where Eocene clays were deposited in the initial phases of taphrogenesis (e.g. Dèzes et al., 2004). The top of the Rotliegend can therefore be regarded as equivalent to the base of the Pechelbronn Group.

The Pechelbronn Group is subdivided into three formations according to lithostratigraphy. The Lower Pechelbronn Beds are composed of fluvial to lacustrine sediments, containing moderately to poorly sorted sandstones and conglomerates intercalated with silt- and claystone layers (Gaupp and Nickel, 2000). The upper part of the Lower Pechelbronn Beds was deposited in a brackish environment, indicating the following marine transgression in the Middle Pechelbronn Beds, which comprise brackish to marine claystones alternating with fine-grained calcareous sandstones (Gaupp and Nickel, 2000). The Upper Pechelbronn Beds consist of fluvial/alluvial to brackish/marine deposits. Lithology and facies show high regional variation with alternating sequences of claystone, limestone, sandstone and conglomerate (Grimm et al., 2011). According to Gaupp and Nickel (2000) and Derer et al. (2003, 2005), a braided delta rapidly advancing eastward or southeastward from the western graben shoulder caused a relatively coarse-grained clastic sedimentation in the vicinity of the proximally situated town of Eich. The distal location of the area around Königsgarten towards Stockstadt entailed brackish to lacustrine sedimentation with claystones and fine-grained calcareous sandstones (locations shown in Fig. 1).

In between the western and eastern masterfaults of the Upper Rhine Graben several fault blocks exist, with fault orientation being sub-parallel to the graben shoulders. During the deposition of the Pechelbronn Group these fault blocks acted as isolated depocenters (Derer, 2003), resulting in considerable thickness and depositional facies variations on the regional and local scale (few tens to several hundreds of meters). The differences in thickness of the Pechelbronn deposits between structural highs and lows are predominantly attributed to the thicker pelitic intervals in the depressions, whereas the conglomeratic and sandy facies of the upper part of the succession are more uniformly distributed over highs and lows (Gaupp and Nickel, 2000).
Dataset
The construction of the stratigraphic horizon of the Pechelbronn Group for the 3-D structural model is based on 99 lithological well logs (locations shown in Fig. 1), most of which reached the base of the Pechelbronn Group. The fault geometry in the geological 3-D structural model is adopted from Arndt (2012) (Fig. 1) and modified where necessary in order to fit the input data. These faults are modelled based on the tectonic map of Germany (TK1000, Zitzmann, 1981), Anderle (1974) and Derer (2003). In our first modelling step the Pechelbronn Group will comprise only one model unit and should later on be further subdivided according to the three existing formations or to lithological criteria. Additional literature and other data, which could allow to consider syntectonic basin and depositional evolution and facies distribution in the modelling process to avoid a mere interpolation between the lithological well logs, is currently being compiled and reviewed.

In order to assess the hydrothermal potential of the Pechelbronn Group, a large database with porosity and permeability data from more than 2500 core plugs of 16 oil and gas exploration wells with multiple core sections is available from the Geological Survey of Lower Saxony (Bär et al., 2013; Bär and Sass, 2015). This database also contains a petrographic classification. The samples used in this study are classified as claystone, siltstone, fine, medium and coarse sandstone as well as gravelly sandstone. Claystone and siltstone samples were merged into one unit during further data processing because of the small number of siltstone samples and the very similar porosity and permeability of clay- and siltstones. This petrographic classification and own petrographic core descriptions lead to the definition of five lithofacies groups that serve to cluster the measurement results and statistically evaluate the parameters for each group.

From the 16 wells with porosity and permeability measurements, eight cores were chosen for further analyses. From the existing core plugs, 150 were used for measurements of thermal conductivity, thermal diffusivity and sonic wave velocity (P- and S-wave velocity). The selected samples are representative of their lithofacies group in terms of lithology, porosity and permeability. The cylinder-shaped samples were drilled perpendicular to the core axis, such that the sample axis is parallel to the bedding plane. Samples have a diameter of 30 mm and lengths of 25-50 mm.
Additionally, gamma-ray logs of 15 wells were provided by Exxon Mobil and are used for correlation with lithology and the porosity and permeability data.
Methods
The workflow used for the construction of the 3-D structural model and the assessment of the hydrothermal potential is shown in Fig. 2. The structural model is built with SKUA-GOCAD™.
Thermal conductivity and diffusivity
The measurements of the thermal conductivity and thermal diffusivity were conducted both under oven-dry and fully water-saturated conditions, to be able to correct these properties to reservoir conditions. Samples were oven-dried at 60 °C for 48 h and saturated with de-ionized water in a vacuum desiccator for 48 h.

Bulk thermal conductivity and diffusivity were measured using the optical scanning method (Popov et al., 1983, 1999), which is based on the contactless heating of the samples and determination of the subsequent cooling rate. In order to minimize the influence of varying optical reflection, the samples are covered with a black coating of uniform thickness. The working surface is the plane bottom face of the cylinder-shaped samples. Further details on the measurement of the thermal diffusivity are given in Popov et al. (2016). According to the manufacturer Lippmann & Rauen, the device supports thermal conductivity measurements in the range of 0.2-25 W m⁻¹ K⁻¹ with an accuracy of ±3 %; the accuracy for thermal diffusivity measurements is ±8 %.
Assessment of the hydrothermal potential
As indicated in Fig. 2, the measured rock and reservoir properties will be used to parametrize the 3-D structural model. The assessment of the hydrothermal potential will be carried out based on this parametrization. The parameters with the highest influence on the performance and efficiency of a hydrothermal doublet are, according to Stober et al. (2016), the porosity, permeability, temperature and transmissibility. During inspection of the cores of the eight boreholes that were used for further analyses, no indication of fractures in the potential reservoir could be observed. Neither do well log files mention any evidence of fractures. Hydraulic tests (if existing) are confidential and not available to the authors. According to Kött and Kracht (2010), the aquifer horizons within the Upper and Lower Pechelbronn Beds are porous aquifers. It is therefore assumed that fractures have no significant positive impact on reservoir permeability, which is thus considered to be in the same order of magnitude as the matrix permeability. This assumption might lead to an underestimation of the reservoir permeability and can thus be seen as a conservative estimate of the hydrothermal potential.

According to Eq. (1), the geothermal power (P_th) extracted by the heat exchanger depends on the heat capacity of water (c_p), the temperature difference between production and injection (ΔT) and the mass flow (Q_m):

P_th = Q_m · c_p · ΔT.    (1)

Higher permeabilities and greater thicknesses yield higher flow rates (see Eq. 2). It is therefore convenient to use the transmissibility for the parametrization of the 3-D model.

It is commonly assumed that hydrothermal systems require a minimum transmissibility of 5 × 10⁻¹² m² (Stober et al., 2016). For borehole locations where a detailed lithology log is available, the relative thickness of each lithofacies group is known. The permeability data is evaluated separately for each lithofacies group as shown in Fig. 4. The transmissibility can then be calculated for each lithofacies group. As the thickness might vary considerably from one depocenter to another, the transmissibility cannot be interpolated over the whole study area, but only within fault blocks that display (semi-)isolated depocenters and only if enough data is available. Here it is planned to not only rely on the lithology logs of the wells, but also to use gamma-ray logs, if available, to identify the defined lithofacies types. The inter- and extrapolation in areas without available borehole data implicate higher uncertainties. The reservoir temperature from the temperature models of Arndt et al. (2011) and Rühaak et al. (2014) is used as production temperature. For the injection temperature three scenarios are assumed: 90 °C for power generation with binary power plants, 50 °C for direct heat production and 30 °C for greenhouse farming. For a given pressure difference between production and injection well (e.g. 1, 3 and 6 MPa) the flow rate can be calculated following Eq. (2) (after Van Wees et al., 2012; Mijnlieff et al., 2014):

Q_m = ρ · Q_v = ρ · 2π · Σ_i(K_i · H_i) · Δp / (μ · (ln(L / r_out) + S)),    (2)

where Q_v is the volumetric flow rate, ρ is the brine density, Δp is the pressure difference between the initial hydrostatic pressure in the aquifer and the well pressure, K_i is the permeability of the higher permeable lithofacies groups (that are used for the water extraction and injection), H_i is the thickness of these permeable layers, μ is the dynamic viscosity, L is the well distance, r_out the outer well radius and S the skin factor (the skin factor could be used to account for deviated wells; Rogers and Economides).

Figure 3. Porosity and permeability data measured on core samples of 16 boreholes (Bär et al., 2013; Bär and Sass, 2015). Values inside the boxes show the median. n: number of samples.
With known flow rate, the geothermal power can be calculated (Eq. 1). Here a maximum threshold for the drawdown needs to be included to ensure economic production.
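A minimal sketch of this calculation chain is given below, assuming the standard doublet productivity form of Eq. (2) after Van Wees et al. (2012) and illustrative fluid and reservoir values; all numerical inputs (permeability, net thickness, brine viscosity, density, heat capacity, well spacing and radius) are assumptions made here for demonstration, not results of this study.

import math

def flow_rate(k_h_sum, dp, mu, L, r_out, S=0.0):
    # Volumetric flow rate after Eq. (2); form assumed here following
    # Van Wees et al. (2012). k_h_sum = sum of K_i * H_i [m^3],
    # dp [Pa], mu [Pa s], L [m], r_out [m], S dimensionless.
    return 2.0 * math.pi * k_h_sum * dp / (mu * (math.log(L / r_out) + S))

def thermal_power(q_v, rho, c_p, t_prod, t_inj):
    # Eq. (1): P_th = Q_m * c_p * dT, with Q_m = rho * Q_v.
    return rho * q_v * c_p * (t_prod - t_inj)

# Assumed illustrative inputs (not values from this study)
k_h = 1e-13 * 30.0          # 1e-13 m^2 permeability over 30 m net thickness
mu = 3e-4                   # Pa s, hot brine
rho, c_p = 1000.0, 4200.0   # kg m^-3 and J kg^-1 K^-1
t_prod = 130.0              # deg C, production temperature from the text

for dp in (1e6, 3e6, 6e6):              # pressure differences from the text
    q_v = flow_rate(k_h, dp, mu, L=1500.0, r_out=0.1)
    for t_inj in (90.0, 50.0, 30.0):    # injection scenarios from the text
        p_th = thermal_power(q_v, rho, c_p, t_prod, t_inj)
        print(f"dp={dp/1e6:.0f} MPa, T_inj={t_inj:.0f} degC: "
              f"Q_v={q_v*1000:.1f} l/s, P_th={p_th/1e6:.2f} MW")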
Petrophysical properties
The box-whisker diagram in Fig. 3 shows the porosity and permeability data for core samples of 16 wells compiled by the Geological Survey of Lower Saxony (Bär et al., 2013; Bär and Sass, 2015), grouped according to the petrographic description in the database. There is an increase in porosity and permeability with increasing grain size from clay-/siltstone to coarse sandstone. Porosities of the gravelly sandstone samples are lower, which can be explained by their poor sorting.

Figure 4a shows the results of the thermal conductivity measurements on oven-dry and fully water-saturated samples. The medians of the dry thermal conductivity for all lithofacies groups range between 2 and 2.3 W m⁻¹ K⁻¹. The trend in the saturated thermal conductivity values clearly reflects the porosity trend between the different lithofacies groups (see Fig. 3). The diagram on the right-hand side of Fig. 4 shows the relationship of the thermal conductivity ratio (TCR) and the porosity. The TCR is calculated according to Eq. (3),

TCR = λ_sat / λ_dry,    (3)

where λ_sat and λ_dry are the thermal conductivity of the saturated and dry sample, respectively.
The coefficient of determination for a linear regression of the whole data set is 0.59. The coefficients of determination for each lithofacies group separately are given in the legend.
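The regression behind Fig. 4b can be reproduced as follows; the arrays below are hypothetical placeholders standing in for the measured values, which are not tabulated here.

import numpy as np

# Hypothetical placeholder measurements for one lithofacies group
lam_dry = np.array([2.0, 2.1, 2.3, 2.2, 2.1])        # W m^-1 K^-1
lam_sat = np.array([2.4, 2.7, 3.2, 2.9, 2.6])        # W m^-1 K^-1
porosity = np.array([0.08, 0.12, 0.20, 0.16, 0.10])  # fraction

tcr = lam_sat / lam_dry                  # Eq. (3)

# Linear regression TCR = a * porosity + b and its coefficient of determination
a, b = np.polyfit(porosity, tcr, 1)
pred = a * porosity + b
r2 = 1.0 - np.sum((tcr - pred) ** 2) / np.sum((tcr - tcr.mean()) ** 2)
print(f"TCR = {a:.2f} * phi + {b:.2f},  R^2 = {r2:.2f}")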
Correlation of gamma-ray log with lithology, porosity and permeability
Figure 5 shows the gamma-ray log together with the lithology log and the porosity and permeability data, exemplarily for one borehole. The gamma-ray log is in good agreement with the lithology and shows a significant negative correlation with the porosity and permeability. These general trends are observed for all boreholes, but quantitative conclusions (in terms of correlation coefficients) still need to be drawn taking into account all available gamma-ray logs and porosity and permeability data. In the presented example the lithology log was not depth-corrected and needed to be shifted 3 m upward in order to fit the gamma-ray log. The depth is given in MD (measured depth), but the borehole penetrates the Pechelbronn Group almost vertically, so that the true vertical thickness corresponds to the penetrated thickness. It can be seen from the figure that more than 20 m of the whole section shows permeability values of more than 10⁻¹⁴ m².
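A sketch of how such correlation coefficients could be computed is given below; the log and plug values are hypothetical, and the depth matching (including a shift such as the 3 m correction mentioned above) is reduced to a simple linear interpolation.

import numpy as np

# Hypothetical gamma-ray log and core-plug data for one well
gr_depth = np.arange(1500.0, 1600.0, 0.5)            # m (MD)
gr_api = 60.0 + 40.0 * np.sin(gr_depth / 7.0)        # API, placeholder log
plug_depth = np.array([1510.0, 1525.0, 1540.0, 1570.0]) + 3.0  # depth shift
porosity = np.array([0.18, 0.07, 0.15, 0.05])
perm = np.array([5e-14, 2e-16, 1e-14, 8e-17])        # m^2

# Resample the log onto the (shifted) plug depths
gr_at_plugs = np.interp(plug_depth, gr_depth, gr_api)

# Pearson correlation; permeability is compared on a log scale
r_phi = np.corrcoef(gr_at_plugs, porosity)[0, 1]
r_k = np.corrcoef(gr_at_plugs, np.log10(perm))[0, 1]
print(f"r(GR, porosity) = {r_phi:.2f},  r(GR, log10 k) = {r_k:.2f}")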
Discussion
The number of samples for which porosity and permeability data are available is not evenly distributed over the different lithofacies groups. Nevertheless, the amount of measurements is sufficient for each group to provide statistically evaluated parameter and uncertainty ranges that can be used for the parametrization of the model unit. The number of thermal conductivity measurements under both oven-dry and fully water-saturated conditions is much lower, and results might therefore not be statistically representative. Especially for the gravelly sandstone (which is the most heterogeneous sample group in terms of lithology, grain size and sorting) the saturated thermal conductivity range is very high. However, the results of the saturated thermal conductivity measurements clearly reflect the porosity values and are therefore assumed to be reasonable. Furthermore, given the correlations shown in Fig. 4, it is reasonable to use mean values for the dry thermal conductivity and calculate saturated bulk thermal conductivity using the measured porosity values for each lithological unit.

As a consequence of the fact that the thermal conductivity of water exceeds the thermal conductivity of air by a factor of approximately 23 (at room temperature), the thermal conductivity ratio increases with increasing porosity, as indicated in Fig. 4b. The low coefficient of determination for silt-/claystones might be caused by the (different) swelling capacity of some clay minerals. Still, the number of saturated thermal conductivity measurements for this lithofacies group is too low to allow for statistically meaningful conclusions. For the parametrization the properties have to be corrected to reservoir conditions (pressure and temperature), as suggested by Bär (2012). The assessment of the hydrothermal potential will account for uncertainties by using the upper and lower ends of the parameter ranges (e.g. Q90/Q75 and Q10/Q25) as well as the median values, resulting in an optimistic, a conservative and a realistic estimation, respectively. This statistical approach also allows for the calculation of the probability of occurrence to reach a certain geothermal potential.
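In practice, this quantile-based scenario approach can be implemented in a few lines; the permeability sample below is synthetic and only illustrates the procedure.

import numpy as np

# Synthetic permeability sample for one lithofacies group [m^2]
rng = np.random.default_rng(0)
k_samples = 10.0 ** rng.normal(-13.5, 0.6, size=200)

scenarios = {
    "conservative": np.percentile(k_samples, 10),   # Q10
    "realistic":    np.percentile(k_samples, 50),   # median
    "optimistic":   np.percentile(k_samples, 90),   # Q90
}
H = 25.0  # assumed net thickness of the permeable layers [m]

for name, k in scenarios.items():
    print(f"{name:>12}: k = {k:.2e} m^2, k*H = {k * H:.2e}")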
The correlation of gamma-ray and other borehole geophysical logs with petrophysical properties can be used to give a quite satisfactory estimation of the latter if no core samples are available (e.g. Hartmann et al., 2005; Fuchs and Förster, 2014). In order to quantify the negative correlation between gamma-ray amplitude and porosity and permeability, more boreholes are currently being analyzed. The aim is to assign porosity, permeability and thermophysical property ranges to certain gamma-ray API values.
Conclusion and outlook
Results of porosity, permeability and thermal conductivity measurements yield statistically evaluated values for five lithofacies groups. The mean property values for each lithofacies group are shown in Table 1 together with the standard deviation.
As soon as the 3-D structural model is completed and the units are parametrized with the relevant properties, the hydrothermal potential can be assessed in 3-D. If there are areas that turn out to be sufficiently thick and permeable, these areas can be studied and modelled in more detail. Provided that there is enough lithological and structural input data, the Pechelbronn Group can be further subdivided in a local-scale 3-D structural model. Additionally, a numerical model with a reasonable geological setting (probably provided by the local-scale 3-D structural model) or other more simplistic approaches of geothermal well doublet calculators (Van Wees et al., 2012) could simulate a hydrothermal doublet for direct heat generation (e.g. Kastner et al., 2015). These approaches could also be used for a sensitivity analysis to better assess the impact of over- or underestimation of each property on the performance and efficiency of such an application.
Figure 2. Flowchart illustrating the input data and steps for the construction of the 3-D structural model (highlighted in blue) and the assessment of the geothermal potential (highlighted in red). Steps in dashed-line boxes are only shown for completeness but are not further discussed in this paper.

Figure 4. (a) Thermal conductivity data measured on fully water-saturated and oven-dry core samples of eight boreholes. The values inside the boxes show the median. n: number of samples. (b) Thermal conductivity ratio (TCR; Eq. 3) against porosity. The values given in the legend are the coefficients of determination for a linear regression.

Figure 5. Exemplary lithology profile, gamma-ray log (both courtesy of Exxon Mobil), porosity and permeability data (from the Geological Survey of Lower Saxony; Bär et al., 2013; Bär and Sass, 2015) of one of the analysed wells.

Table 1. Average (median: Q50, arithmetic mean: x̄) property values, standard deviation (σ) and number of samples (n) for each lithofacies group.
Scattered $P$-spaces of weight $\omega_1$
We examine dimensional types of scattered $P$-spaces of weight $\omega_1$. Such spaces can be embedded into $\omega_2$. There are established similarities between dimensional types of scattered separable metric spaces and dimensional types of $P$-spaces of weight $\omega_1$ with Cantor--Bendixson rank less than $\omega_1$.
Introduction
A topological space is said to be a P-space whenever its G_δ subsets are open. A topological space is scattered (dispersed) if every non-empty subspace of it contains an isolated point. If X is a topological space and α is an ordinal number, then X^(α) denotes the α-th derivative of X, compare [9, p. 261] or [15, p. 64]. If X is a scattered space, then the Cantor-Bendixson rank of X is the least ordinal N(X) such that the derivative X^(N(X)) is empty, see [7, p. 34]. Thus, if X^(N(X)) = ∅ and β < N(X), then X^(β) ≠ ∅; also, if X is a scattered space of cardinality ω_1, then N(X) < ω_2.
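As a small worked example of these notions (added here for illustration; the computation is standard and not part of the original text), consider the ordinal X = ω + 1 with the order topology: every n < ω is isolated, whereas ω is a limit of the points n < ω. Hence

X^(1) = {ω},    X^(2) = ∅,

so N(X) = 2. More generally, the ordinal ω^α + 1 has Cantor-Bendixson rank α + 1.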
This paper is a continuation of [1], where we have investigated crowded P -spaces of cardinality and weight ω 1 . Here, we examine scattered Pspaces of weight ω 1 . Following the idea that some proofs on P -spaces are similar to proofs concerning (scattered) metric spaces, compare [2,Lemma 2.2.], the readers can modify our argumentation to obtain results stated in [5], and also contained in [11] and [17].
It will be convenient to use the notation from [3] and [6]. A scattered P -space is assumed to be regular and of weight ω 1 , nevertheless, we shall repeat these assumptions in the statements of facts. For brevity, we write γ ∈ Lim instead of γ < ω 2 is an infinite limit ordinal. Also, a closed and open set will be called clopen. The sum of a family of κ many homeomorphic copies of a space X we denote κ X. Basic facts about sums can be found in [3, pp. 74-76]. If topological spaces X and Y are homeomorphic, then we write X ∼ = Y . Following [4], [15, p. 130] or [9, p. 112], if X is homeomorphic to a subspace of Y , then we write X ⊂ h Y . If X ⊂ h Y and Y ⊂ h X, then we write X = h Y and say that X and Y have the same dimensional type.
The paper is organised as follows. First, we observe that any scattered space of weight ω_1 has to be of cardinality ω_1, and then we establish a lemma on embeddings of spaces with a point together with a decreasing base consisting of clopen sets, Lemma 2. In Section 3, we are concerned with properties of elementary sets, i.e. clopen sets with the last non-empty derivative of cardinality 1. Lemma 4 says that a scattered P-space of weight ω_1 can be represented as the sum of a family of elementary sets. Theorem 6 generalises a result of B. Knaster and K. Urbanik, see [8] and [17, Theorem 9], that each scattered metric space is homeomorphic to a subspace of an ordinal number with the order topology. To be more precise, dimensional types of scattered P-spaces of weight ω_1 are represented by dimensional types of subspaces of ω_2. Corollary 7 states that any scattered P-space of weight ω_1 has a scattered compactification. The notion of a stable set enables us to reduce dimensional types of scattered P-spaces with countable Cantor-Bendixson rank to those of finite ranks. In Section 6, we examine spaces J(α) for any α < ω_2; in particular, we have established that the space J(α) is maximal among elementary sets with Cantor-Bendixson rank not greater than α. Our main results are contained in Section 7. Theorem 30 and Corollary 31 are counterparts of [5, Theorem 19] and [5, Corollaries 29 and 31]. Finally, we add some remarks concerning P-spaces with uncountable Cantor-Bendixson ranks. We think that a more detailed description of such spaces requires new tools, therefore it seems to be troublesome.
Preliminaries
One can readily check the following properties of a P -space, see [1]. A regular P -space has a base consisting of clopen subsets, hence it is completely regular, [1,Proposition 1]. For a countable family of open covers, there exists an open cover which refines each member of this family. If a regular P -space is of cardinality ω 1 , then any open cover has a refinement consisting of clopen sets, [1,Lemma 14], and also a countable union of clopen sets is clopen, [1,Corollary 15].
Note that there exist P-spaces of cardinality ω_1 and of weight greater than ω_1. Indeed, let X = ω_1 + 1 be equipped with the topology in which the countable ordinal numbers are isolated points and the sets C ∪ {ω_1}, where C is a closed unbounded subset of ω_1 (briefly, a club, compare [6, Definition 8.1.]), form a base at the point ω_1 ∈ X. The intersection of countably many clubs is a club, and any base for the filter generated by the family of all clubs is of cardinality greater than ω_1, which follows from [6, Lemma 8.4.]. Therefore X is a P-space of cardinality ω_1 and of weight greater than ω_1.

Proposition 1. A scattered space of weight at most ω_1 is of cardinality at most ω_1.
Proof. If X is a scattered P-space of weight at most ω_1, then X = ⋃{X^(α) \ X^(α+1) : α < β}, where β = N(X) < ω_2. The inherited topology of each X^(α) \ X^(α+1) is discrete and of cardinality at most ω_1, hence |X| ≤ ω_1.
Suppose f : ω 1 → ω 1 is an injection. Clearly, we have the following.
The next lemma looks to be known, but for the readers convenience, we present it with a proof. Lemma 2. Let X and Y be topological spaces such that the families of clopen subsets B x = {V x α : α < ω 1 } and B y = {U y α : α < ω 1 } are decreasing bases at points x ∈ X and y ∈ Y , respectively. If : α < ω 1 }, and f : ω 1 → ω 1 is an injection, and {F α : α < ω 1 } is a family of embeddings Proof. We obtain an injection F : It remains to show that the function F is continuous at the point x ∈ X and F −1 is continuous at the point y ∈ Y .
Fix a set U y β ∋ F (x). By ( * ) there exists α < ω 1 such that f [(α, ω 1 )] ⊆ (β, ω 1 ). Therefore because of ( * * ). We have Moreover, the sets V α \ V α+1 will be called slices. Also, we have Note that if X is a P -space and x ∈ X, then there exists a P -base at point x ∈ X. Indeed, let {V α : α < ω 1 } be a base at a point x ∈ X, which consists of clopen sets. Putting W α = γ<α V γ , we obtain the family {W α : α < ω 1 } which is a desired P -base.
For the purpose of Theorem 15 we will need the following notions and Lemma 3. Let (P, ≤) be an ordered set. An antichain in P is a set A ⊆ P such that any two distinct elements x, y ∈ A are incomparable, i.e., neither x ≤ y nor y ≤ x. A nonempty C ⊆ P is a chain in P if C is linearly ordered by ≤. Now, assume that (P, ≤) is a well-ordered set. If 1 ≤ n < ω, then let ⪯ be the coordinate-wise order on the product P^n, i.e. (a_1, . . . , a_n) ⪯ (b_1, . . . , b_n) whenever a_i ≤ b_i for 0 < i ≤ n. The following variant of the Bolzano-Weierstrass theorem seems to be known; it can be deduced from [5, Lemma 28]. Lemma 3. If (P, ≤) is a well-ordered set, then any infinite subset of (P^n, ⪯) contains an infinite increasing sequence. In particular, any antichain and any decreasing sequence in (P^n, ⪯) must be finite.
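To illustrate Lemma 3 (an example added here, not part of the original argument), take P = ω and n = 2. In (ω^2, ⪯) the set

A = {(0, 2), (1, 1), (2, 0)}

is an antichain, since no two of its elements are coordinate-wise comparable, and it is finite, as Lemma 3 requires. By contrast, the infinite set {(k, 2k) : k < ω} is already an increasing sequence, because (k, 2k) ⪯ (k + 1, 2k + 2) for every k.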
On elementary sets
A clopen subset E of a P -space is elementary, whenever the derivative E (N (E)−1) is a singleton. Clearly, a singleton is an elementary set and if E is an elementary set, then N(E) is not a limit ordinal. Lemma 4. If X is a regular scattered P -space of weight ω 1 , then any open cover of X can be refined by a partition consisting of elementary sets.
Proof. Let {U γ : γ < ω 1 } be an open cover of X. If N(X) = 1, then X is a discrete space, so there is nothing to do. Assume that the hypothesis is fulfilled for each scattered P -space Y with N(Y ) < α. If N(X) = α is a limit ordinal number, then the family {X \X (γ) : γ < α} is an open cover of X. So, there exists a partition {V γ : γ < ω 1 } which refines both covers {U γ : γ < ω 1 } and {X \ X (γ) : γ < α}. By the induction hypothesis, we can assume that each V γ is the union of elementary subsets, since N(V γ ) ≤ γ < α. In the case N(X) = β + 1, the derivative X (β) is a discrete space. Let {V γ : γ < ω 1 } be a partition of X which refines {U γ : γ < ω 1 } and such that each , then, by the induction hypothesis, it is the union of a family of elementary subsets.
Proposition 5. If X is an elementary set and α < N(X), then there exists an elementary subset E ⊆ X such that N(E) = α + 1. Moreover, if α + 1 < N(X), then there exists uncountable many pairwise disjoint elementary subsets E ⊆ X such that N(E) = α + 1.
B. Knaster and K. Urbanik showed that a scattered separable metric space can be embedded in a sufficiently large countable ordinal number, see [8]. Later, R. Telgársky removed the assumption of separability, showing that each metrizable scattered space can be embedded in a sufficiently large ordinal number, see [17]. Theorem 6. Any regular scattered P -space of weight ω 1 can be embedded into ω 2 .
Proof. We proceed inductively with respect to the rank N(Y ) < ω 2 of scattered P -spaces Y . If N(Y ) = 1, then Y is discrete, hence it is homeomorphic to the family of all non-limit countable ordinals. First, we present the second step of the induction. Let Y be a scattered Pspace with N(Y ) = 2. The derived set Y (1) is discrete and closed, so (1) . Let P * be a partition which refines P. Thus each member of P * has at most one accumulation point and also |P * | ≤ ω 1 . Members of P * should be homeomorphic to We inductively assume that if Z is a P -space with N(Z) < α, then Z is homeomorphic to a subspace of an initial interval of is not a singleton, then Y is the sum of elementary sets with Cantor-Bendixson rank γ + 1. As previously, we embed these elementary sets into successive disjoint intervals of ω 2 .
Theorem 6 shows that all scattered P -spaces of weight ω 1 share topological properties of the generalised ordered spaces, compare [10]. We omit a detailed discussion of this kind and confine ourselves to a counterpart of the Knaster-Urbanik result, see [8].
Corollary 7. A regular scattered P -space of weight ω 1 has a scattered compactification of cardinality ω 1 .
Proof. Any regular scattered P -space of weight ω 1 has a homeomorphic copy contained in a initial interval of ω 2 , thus the closure of this copy is the desired compactification.
Clearly, among regular P -spaces only finite ones are compact, so any compactification of an infinite P -space is not a P -space.
Stable sets with finite Cantor-Bendixson rank
Assume that J(0) is the empty set and J(1) is a singleton; if J(n − 1) is defined, then J(n) is the P-space with J(n)^(n−1) = {x} and a P-base at x whose slices are homeomorphic to the sum of ω_1 many copies of J(n − 1). Similarly, if i(n − 1) is defined, then i(n) is the P-space with i(n)^(n−1) = {x} and a P-base at x such that slices are homeomorphic to the sum of ω many copies of i(n − 1).
Adapting the idea from [13, p. 248], we change the definition of a stable set. Namely, among the elementary sets we shall single out stable sets as follows. Let E be an elementary set such that E (n) = {g}, where n < ω. Considering E as a P -space, we say that E is a stable set, whenever there is a P -base at g ∈ E such that any two slices are homeomorphic. A singleton is a stable set. Let X be a P -space such that X (1) = {g}. If there exists a P -base at g ∈ X with countably infinite slices, then X is a stable set. By Lemma 2, such a space X is unique up to homemorphism, in fact it is i(2). The space J(2) is a stable set, but the elementary set i(2) ⊕ D is not stable, whenever D is uncountable and discrete. Obviously, J(2) and i(2) are the only stable sets in the class of all P -spaces with Cantor-Bendixson rank 2. This class consists of spaces which have three different dimensional types: i(2), i(2) ⊕ D and J(2); and moreover If E is a stable set and N(E) = n + 1, then the set E is sometimes called n-stable. By Lemma 2, we have the following.
In the class of all elementary sets with Cantor-Bendixson rank 3 there is countably many elementary sets having different dimensional types. For example, spaces i(3) ⊕ n J(2) are of different dimensional types, depending on n.
Lemma 9. For each n < ω, there exist only finitely many non-homeomorphic n-stable sets. Also, any elementary set with finite Cantor-Bendixson rank is the sum of a family of stable sets. Proof. We proceed by induction on n. If n = 1 and E is an elementary set with N(E) = 1, then there is nothing to do. Let F k be a family which consists of all, up to homeomorphism, k-stable sets. Suppose that the family F k is finite, for each k < n, and any elementary set with Cantor-Bendixson rank < n is the sum of a family of stable sets. Suppose E is an elementary set with By the induction hypothesis, assume that each slice V α \ V α+1 is the sum of a family U α of elements from {F k : k < n}. Let us define a subsequence of B as follows.
-If Y ∈ {F m : m < n} appears only in countably many families m < n} appears uncountable many times only in countably many U α , then there exists γ Y < ω 1 such that each U α contains at most countably many copies of Y , where an increasing function f : Clearly, the set V f (0) is n-stable. Since the family {F k : k < n} is finite, it follows that the family of all n-stable sets is finite. By the induction hypothesis and Lemma 4, the set E \ V f (0) is the sum of a family of stable sets.
For technical reasons, the sum 0 X is understood as the empty set.
Lemma 10. If X is a scattered P-space with finite Cantor-Bendixson rank, then there exists a partition X = κ_1 F_1 ⊕ · · · ⊕ κ_m F_m, where F_1, . . . , F_m are all stable sets with Cantor-Bendixson rank not greater than N(X) and κ_i ∈ ω ∪ {ω, ω_1}.
Proof. Let X be a scattered P -space with N(X) = n. By Lemma 4, the space X is the sum of a family F of elementary sets. By Lemma 9, each E ∈ F is the sum of a family of stable sets. Thus, X is the sum of a family of k-stable sets, where k < n. Therefore (again by Lemma 9), if F 1 , . . . , F m is a sequence of all (up to homeomorphism) k-stable sets, where k < n, then where and κ i ∈ ω ∪ {ω, ω 1 }.
Theorem 11. There are at most countably many non-homeomorphic scattered P -spaces with finite Cantor-Bendixson rank.
Proof. If X is a scattered P -space with N(X) < n, then X is homeomorphic to the sum as in Lemma 10, where N(F i ) < n for each F i . There are at most finitely many k-stable sets with k < n, hence there exist at most countably many such sums determined by the number of occurrences of k-stable sets with k < n, which suffices to finish the proof.
Corollary 12. There exist countably many dimensional types of scattered P -spaces with finite Cantor-Bendixson rank.
Dimensional type of P -spaces with finite Cantor-Bendixson rank
Some technical problems of P -spaces with countable Cantor-Bendixson rank can be reduced to studying P -spaces with finite Cantor-Bendixson rank.
Proof. If n < 2, then there is nothing to do. If N(Y ) = 2, then check that Y ⊂ h J(2), using Lemma 2. Suppose that the hypothesis is fulfilled for each k < n. Fix a P -base {W γ : γ < ω 1 } at the point y ∈ Y (n−1) and fix a P -base {V γ : γ < ω 1 } at the point x ∈ J(n) (n−1) such that V γ \ V γ+1 = ω 1 J(n − 1) for each γ < ω 1 . By Lemma 4, where subsets E µ ⊆ Y are elementary and with N(E µ ) ≤ n − 1. So, by the induction hypothesis, there exist embeddings Putting f (y) = x, we are done.
If k ≥ 2n and X (k) = {g}, then there exists Y ⊆ X such that the derivative Y (2n) is a singleton, hence J(n + 1) ⊂ h Y ⊆ X. The assumption k = 2n is minimal. Indeed, if n = 2 and k = 3, then i(4) and J(3) have incomparable dimensional types.
Theorem 15. Let (F , ⊂ h ) be an ordered set, where F is a family of scattered P -spaces of weight ω 1 with Cantor-Bendixson ranks ≤ n. Then every antichain is finite and every strictly decreasing chain is finite.
Proof. Let X be a scattered P-space with N(X) = n and let F_1, . . . , F_m be all k-stable sets with Cantor-Bendixson rank k ≤ n. Applying Lemma 10, fix a partition X = κ_1 F_1 ⊕ · · · ⊕ κ_m F_m and put ϕ(X) = (κ_1, . . . , κ_m) ∈ A^m, where A = ω ∪ {ω, ω_1}. Consider the coordinate-wise order (A^m, ⪯). Using elementary properties of the sum of spaces, we have the following implications. Condition (1) implies that if U ⊆ F is an antichain with respect to ⊂_h, then {ϕ(X) : X ∈ U} is an antichain in (A^m, ⪯). Therefore, by Lemma 3, there is no infinite antichain in F.
Suppose that (X n ) is a strictly decreasing sequence with respect to ⊂ h . Put Corollary 16. Let (F , ⊂ h ) be an ordered set, where F is a family of scattered P -spaces of weight ω 1 with finite Cantor-Bendixson rank. Then every antichain is finite and every strictly decreasing chain is finite. However, among spaces of F , there is ω-many but not ω 1 -many different dimensional types.
Proof. Let A be an antichain of scattered P -spaces with finite Cantor-Bendixson rank. If X, Y ∈ A, then 2N(X) < N(Y ) is impossible. Suppose otherwise and put n = N(X) + 1, then X ⊂ h J(n) by Proposition 13. The inequality N(Y ) ≥ 2n − 1 and Proposition 14 imply J(n) ⊂ h Y , a contradiction. Thus A has to be finite.
. is a strictly decreasing sequence of scattered P -spaces with finite Cantor-Bendixson rank. Then all spaces X n have Cantor-Bendixson rank ≤ N(X 1 ). By Theorem 15, the sequence is finite.
Observe that if m = n, then spaces m J(2) and n J(2) have different dimensional types. By Lemma 4 and 9, there is at most countably many different dimensional types among spaces with Cantor-Bendixson rank n, hence no family of dimensional types of spaces with finite Cantor-Bendixson rank can be uncountable.
Maximal elementary sets
Proposition 13 states that J(n) is maximal with respect to ⊂_h in the class of all P-spaces with Cantor-Bendixson rank ≤ n. We proceed to a definition of maximal P-spaces with infinite Cantor-Bendixson ranks. Namely, let J(ω) be the sum of the family {J(n) : n < ω}, i.e. J(ω) = ⊕{J(n) : n < ω}.
Proof. Since J(ω) = {J(n) : n ∈ ω}, using Lemma 17, we get Assume that if γ ∈ β ∩ Lim, then the hypothesis of the lemma is fulfilled. According to Lemma 17, we get In fact, for each β ∈ Lim we have
Dimensional type of P -spaces with countable and infinite Cantor-Bendixson rank
Note that J(2) cannot be embedded as a clopen subset of i(n). Indeed, if U ⊆ i(n) is a non-discrete clopen subset, then U contains a clopen homeomorphic copy of i(2), but J(2) does not contain a clopen homeomorphic copy of i(2). Consequently no i(n), for n > 1, can be homeomorphic to a clopen subset of J(ω). Analogously, no J(n), for n > 1, can be homeomorphic to a clopen subset of ⊕{i(n) : n < ω}. So, one can readily check that J(ω) is not homeomorphic to ⊕{i(n) : n < ω}. Nevertheless, we have the following.
Proposition 19. If X is a scattered P -space of weight ω 1 such that N(X) = ω, then X = h J(ω).
Corollary 21. If X and Y are elementary sets with Cantor-Bendixson rank ω + 1, both of the weight ω 1 , then X = h Y .
For each E ∈ R α , we have N(E) < ω, hence the sum of R α can be embedded into U α \ U α+1 . Sending the point y to x, we get Y ⊂ h J(ω + 1).
Corollary 23. If X is a crowded P -space of weight ω 1 and Y is a scattered P -space of weight ω 1 , then Y ⊂ h X.
Proof. If β = ω and n = 1, then there is nothing to do, as it is observed just before this theorem. Assume that ( * ) β If ω ≤ γ < β, then there exists λ < β such that γ < λ and if E is an elementary set with N(E) = λ, then J(γ) ⊂ h E.
Indeed, X is the sum of a family of elementary sets E µ such that β is the supremum of ordinal numbers N(E µ ), hence if γ < β, then one can choose an elementary set E µ ⊃ h J(γ), for each γ a different one.
The relation =_h is an equivalence relation; let [X]_h denote the equivalence class of X. If λ ∈ Lim ∪ {0}, then let P_λ be the family of equivalence classes of P-spaces X such that λ < N(X) ≤ λ + ω. Putting [X]_h <_h [Y]_h whenever X ⊂_h Y, we obtain an ordered set, which we denote (P_λ, <_h). By Theorem 24, the classes [J(λ + 1)]_h and [J(λ + ω)]_h are the least element and the greatest element of P_λ, respectively. If [X]_h ∈ P_λ, then X \ X^(λ+1) is the sum of a family of elementary sets with Cantor-Bendixson rank λ + 1. If [X]_h ∈ P_λ, then [X^(λ)]_h ∈ P_0 by Proposition 25. Note that the class [J(ω_2)]_h does not belong to any family P_λ, despite the fact that if X is a P-space with N(X) < ω_2, then [X]_h <_h [J(ω_2)]_h. A similar statement holds when ω_2 is replaced by γ ∈ Lim such that γ ≠ λ + ω for each λ ∈ Lim.
The lemma below is probably well known.
Lemma 26. If f : X → Y is a continuous injection, then we have f [X (α) ] ⊆ Y (α) , for any ordinal number α. Moreover, if X ⊂ h Y , then Proof. If f : X → Y is a continuous injection and x ∈ X (1) , then we have If α is a limit ordinal number, then It appears that Lemma 26 can be reversed.
Lemma 27. If Z is a P -space such that 0 < N(Z) ≤ ω and λ ∈ Lim, then there exists a P -space Z * such that Z * (λ) ∼ = Z. Moreover, } are disjoint copies of J(λ + 1). Equip the set Z * with a topology as follows. Each subset of the form J x is clopen in Z * and it is homeomorphic to J(λ + 1). If z ∈ Z (1) and V is a neighbourhood of z in Z, then let Let the family B z = {V * : V is a neighbourhood of z ∈ Z} constitute a base at the point z ∈ Z * . We leave the reader to check that the space Z * is as desired.
Lemma 29. Let X, Y be scattered P -spaces and λ ∈ Lim ∩ ω 1 . If Proof. Fix an embedding f : X (λ) → Y (λ) . For each x ∈ X (λ) \ X (λ+1) there exists an elementary U x ⊆ X such that U x ∩X (λ) = {x}. Without loss of generality, we can assume that sets U x are pairwise disjoint and, by Theorem 24, each U x = h J(λ + 1). Similarly, choose a family we have N(V x ) ≥ λ + 1 and U x = h J(λ + 1). Thus we can define an embedding g x : U x → V x as in Lemma 28. If F : X → Y is such that F | X (λ) = f and F | Ux = g x , for each x ∈ X (λ) \ X (λ+1) , then F is a desired embedding.
It remains to show that ψ is a desired isomorphism.
By Lemma 27, the function ψ is a surjection. Again, by Lemma 26, we have , then X ⊂ h Y , which implies injectivity of ψ. So, the proof is finished.
Corollary 31. Let F be a family of scattered P -spaces of weight ω 1 with countable Cantor-Bendixson ranks. In (F , ⊂ h ), any antichain and any strictly decreasing chain are finite.
Proof. Assume that A is an antichain in (F, ⊂_h) and X ∈ A, and N(X) = λ + n, where λ ∈ Lim and n < ω. By Theorem 24 and Proposition 22, if Y ∈ A, then N(Y) = λ + m, where m < ω and λ < ω_1. By Corollary 16 and Theorem 30, the family A has to be finite. Analogously, we proceed in the case where A is a decreasing chain.

A few remarks on P-spaces with Cantor-Bendixson rank ≥ ω_1 + 1

As we have learned, there is only one sensible way of defining an elementary set with Cantor-Bendixson rank β + 1 for β ∈ Lim, since any two such elementary sets have the same dimensional type by Proposition 25. This is not the case for elementary sets with Cantor-Bendixson rank > ω_1. Namely, let Y(ω_1) be an elementary set such that Y(ω_1)^(ω_1) = {g} and each slice V_α \ V_{α+1} = J(α).
Without loss of generality, it remains to consider the case when N(V 0 \ V 1 ) = ω 1 and N(V α \ V α+1 ) < ω 1 for α ≥ 1. Then we have which completes the proof.
One can readily check that and no two of these four spaces have the same dimensional type.
Conclusions
Compare cardinal characteristics of some classes of dimensional types with classes of non-homeomorphic spaces.
By Corollary 12 and Theorems 24 and 30, if λ < ω 1 , then there are only countably many dimensional types of P -spaces with Cantor-Bendixson ranks ≤ λ. Therefore, there is exactly ω 1 -many dimensional types of P -spaces with countable Cantor-Bendixson ranks.
There exist 2^{ω_1} many non-homeomorphic P-spaces X with N(X) = ω_1, but if λ ∈ Lim ∩ ω_1, then there are continuum many non-homeomorphic P-spaces X with N(X) = λ. Indeed, for each A ⊆ {α + 2k : α ∈ Lim ∩ λ and k < ω}, one can construct a P-space X_A (constructing such a P-space needs some extra work, which we leave to the reader) so that different subsets of λ are assigned to non-homeomorphic P-spaces. Hence, there are continuum many non-homeomorphic scattered P-spaces with countable Cantor-Bendixson rank. Similarly, one can prove that if λ = ω_1, then there exist 2^{ω_1} many non-homeomorphic scattered P-spaces.
It seems that examination of P -spaces with uncountable Cantor-Bendixson ranks needs several new ideas and extra efforts. As it has been noted earlier, the readers would easily conclude results concerning scattered separable metric spaces mimicking our argumentation. The same can be said about the so-called ω µ -additive spaces, introduced by R. Sikorski [16], compare [12, p. 1]. Indeed, consider the family of all scattered ω µ -additive spaces of weight ω µ . Replacing ω 1 by ω µ , one can adapt our results to this family.
A case report and literature review of heterotopic mesenteric ossification
Introduction and importance: Heterotopic mesenteric ossification is a benign bony tissue growth in the mesentery that mostly follows repetitive or severe abdominal injuries leading to reactive bone formation in the mesentery. Only 73 cases (51 publications) have been identified in the literature up to the beginning of 2020. Case presentation: A 45-year-old Saudi male underwent multiple laparotomies to manage complicated appendicitis, which ended with a diverting ileostomy and a colostomy as a mucus fistula. After 9 months, the patient was admitted to the General Surgery department in Al-Hada Armed Forces Hospital for an open ileostomy and colostomy reversal surgery, where several irregular bone-like tissues of hard consistency and sharp edges, with some spindle-shaped structures resembling needles, were found in the mesentery of the small intestine; histopathology revealed trabecular bone fragments, confirming the diagnosis. Clinical discussion: The majority of cases occur in mid to late adulthood with a predilection for the male gender, and usually present with bowel obstruction or an enterocutaneous fistula. Although it has no malignant potential, it may cause severe bowel obstruction that can lead to mortality; it is a rare occurrence and, therefore, is difficult to diagnose among many common abdominal disturbances. Conclusion: Here we report a rare case of heterotopic mesenteric ossification, which should be considered as one of the delayed complications of abdominal surgery or trauma. The time range for expecting the presentation of heterotopic mesenteric ossification following major abdominal trauma or surgery should be extended and continuously considered during differential diagnosis.
Introduction
Heterotopic mesenteric ossification (HMO) is a benign bony tissue growth in the mesentery that mostly follows repetitive or severe abdominal injuries leading to reactive bone formation in the mesentery [1]. It is an abdominal catastrophe, and it requires multiple abdominal surgeries to manage. There are only 73 cases (51 publications) identified in the literature up to the beginning of 2020. The pathogenesis of the HMO is currently not well recognized, it is thought to be formed by the stimulation of mesenchymal osteoprogenitor stem cells to differentiate into osteoblasts due to mechanical trauma, ischemia, or intraabdominal infection [2]. It is also assumed to be caused by implantation of bone periosteum into soft tissue [3].
The majority of cases occur mid to late adulthood with a predilection in the male gender, and usually present with bowel obstruction or an enterocutaneous fistula [4,5]. Although HMO has no malignant potential, it may cause severe bowel obstruction that can lead to mortality in already sick patients [6]. The usual time elapsed from the time of the predisposing trauma to operation ranged from 2 to 4 weeks. However, this might extend to 7 years after the initial insult [1]. Because HMO is a rare occurrence and, therefore, is difficult to diagnose among many common abdominal disturbances, here we present a case of a 45-year-old Saudi male with a typical HMO discovered 9 months after right hemicolectomy in addition to a comprehensive literature review of similar published cases since it was first described in 1983 until 2020. This work has been reported in line with the SCARE 2020 criteria [7].
Case presentation
A 45-year-old Saudi male presented to the emergency department of a local hospital in March 2018 with a typical picture of acute appendicitis; he was admitted for an open appendectomy. Intraoperatively, a perforated appendix was discovered; histopathology revealed a severely inflamed perforated appendix. After 4 days, his first operation was complicated by feculent discharge from the peritoneal drain due to a complicated cecal fistula with a septic clinical picture. He was admitted for an exploratory laparotomy, and segmental resection of the involved bowel with primary anastomosis was done. Two days after the second operation, he had an anastomotic leak with peritonitis and feculent discharge from the wound site and the peritoneal drain; he was shifted to the operating room for an exploratory laparotomy, where a right hemicolectomy was done with primary anastomosis. On the seventh day, and despite the two operative attempts, the patient had intraperitoneal dissemination of fecal material and generalized peritonitis for the third time; he was sent for an exploratory laparotomy where a diverting ileostomy and a colostomy as a mucus fistula were done.
The patient did not have any remarkable family history; he was medically free, a non-smoker, a non-drinker, and had no other significant medical history.
After 9 months, the patient was admitted to the General Surgery department in Al-Hada Armed Forces Hospital for an open ileostomy and colostomy reversal surgery. His abdominal examination revealed a normal soft and lax abdomen with right ileostomy and left colostomy openings. On admission to Al-Hada Hospital, his white blood cell count was 6.12 × 10⁹/l, mostly lymphocytes (3.27 × 10⁹/l). His hemoglobin was 146 g/l and his platelet count was 370 × 10⁹/l. C-reactive protein (CRP) was 1.5 mg/l, and the erythrocyte sedimentation rate (ESR) was 15 mm/h. Carcinoembryonic antigen (CEA) was 0.9 ng/ml.

The white blood cell count normal range is 4 to 11 × 10⁹/l, and the lymphocyte normal range is 0.1 to 1.1 × 10⁹/l. The hemoglobin normal range is 135 to 180 g/l. The platelet normal range is 150 to 400 × 10⁹/l, the C-reactive protein normal range is 0.0 to 5.0 mg/l, the erythrocyte sedimentation rate (ESR) normal range is 0.0 to 10.0 mm/h, and the carcinoembryonic antigen (CEA) normal range is 0.0 to 5.0 ng/ml.
Pre-operative abdominal computerized tomography (CT) was performed with contrast given intravenously, orally, rectally, and through the ileostomy. The axial CT view is shown in Fig. 1. The coronal and sagittal CT views are shown in Fig. 2.
Pre-operative abdominal CT confirmed a patent bowel passage, but the calcified densities and fat-stranding opacities were thought to be related to post-operative changes. Intraoperatively, laparotomy under general anesthesia showed adhesions and several irregular bone-like tissues of hard consistency and sharp edges, with some spindle-shaped needle-like structures, on the mesentery of the small intestine (Fig. 3). All the bone-like tissues were carefully removed and examined histologically (Fig. 4); the examination showed trabecular bone fragments, suggestive of heterotopic ossification. Post-operatively, the patient was advanced slowly to a normal diet and improved gradually. His last follow-up was in January 2021, at which he showed complete recovery with no complications.

Abbreviations: HMO, heterotopic mesenteric ossification; HO, heterotopic ossification; CRP, C-reactive protein; ESR, erythrocyte sedimentation rate; CEA, carcinoembryonic antigen; CT, computerized tomography; BMPs, bone morphogenic proteins.
Discussion and conclusion
Heterotopic mesenteric ossification (HMO) was first reported in the literature in 1983, when three patients developed heterotopic mesenteric ossification after abdominal surgery [8,9]. Ectopic calcification is classified histologically into dystrophic calcification (in which calcium is deposited without osteoblasts) and heterotopic ossification (which differs from dystrophic calcification by the presence of osteoblasts and lamellar bone) [2]. Before 1983, multiple reports of ossification in the abdominal wall arising from scars of previous laparotomies were published, and in 1973 a theory was proposed to explain the pathogenesis of heterotopic ossification in abdominal scars, namely the differentiation of multipotent embryonic cells [10]. Differentiation of multipotent mesenteric cells as a result of trauma or abdominal surgery would apply to our case, although to date there is no strong evidence for this theory. Another theory, introduced in 1975, held that heterotopic bone formation in laparotomy scars results from the deposition of osteogenic cells from bones adjacent to the scar [11]. Irritation of the symphysis pubis or xiphoid process during a vertical abdominal incision can lead to periosteal cell implantation, which is supported by the observation that when horizontal and vertical incisions are made in one patient, it is the vertical incision that develops calcification [12]. In our case, where the heterotopic ossification developed in the mesentery, this theory is challenged by the lack of pre-formed ossified bone around the mesentery. HMO is extremely difficult to diagnose in patients presenting with abdominal pain and discomfort because of its rarity. Abdominal CT can help identify it preoperatively; however, differentiating mesenteric heterotopic ossification from dystrophic calcification, bone neoplasms, contrast leakage, foreign material, or extra-skeletal osteosarcoma can be difficult [13]. The only way to reach a definitive diagnosis is through excision and histopathological analysis [14].
We performed an extensive literature search of the Medline and Embase databases for articles published from 1983 up to 2020. No language restrictions were applied, and the reference lists of all included studies were manually searched for other potentially eligible studies. We identified only 51 published case reports, comprising a total of 73 cases, one of whom was an 11-year-old child (Table 1). About 90% of all reported cases of mesenteric ossification were males, with a mean age of 48.38 ± 18.27 years; the most common presenting symptom was bowel obstruction (41%). About 16.4% of cases were discovered incidentally on imaging, while 13.7% were discovered during surgery. Most (80%) of the reported cases had a surgical history of laparotomy, and in 71.2% the ossification developed in the mesentery. A detailed statistical analysis of all reported cases is shown in Table 2. The current case is in line with the majority of HMO cases, with a history of abdominal surgery preceding the formation of HMO.
The time from the last surgical operation to the intraoperative discovery of HMO in the current case was 9 months. The time required for HMO to form and for its clinical symptoms to appear is not exactly known but has ranged from 2 weeks to 2 years [15]. Although HMO is rarely encountered, given the increase in cases reported in the last decade it should be considered in the differential diagnosis of patients presenting with intestinal obstruction, or when dense calcified shadows are observed on abdominal CT in patients with previous abdominal trauma or surgery. Bone morphogenic proteins (BMPs) are multifunctional cytokines of the transforming growth factor-β family released from inflammatory cells at sites of inflammation, injury, wounds, or sepsis, and they have been reported to stimulate the formation of abnormal cartilage and bone tissue [16,17]. BMP and its signalling were observed to be increased in experimental models of trauma-induced heterotopic ossification (HO); meanwhile, BMP antagonism has been shown to decrease HO expansion. Anticipated HO formation after abdominal surgical operations has been prevented by the use of anti-inflammatory agents [18]. Interestingly, rapamycin, which decreases inflammatory signalling through inhibition of mTOR activation, was reported to alleviate HO formation [19]. Moreover, the levels of both local and systemic inflammatory markers are suggested to be increased in traumatic HO, as there is a positive correlation between inflammatory cytokine levels and the likelihood of HO formation [20].
In our case, the patient was admitted with severe abdominal pain that recurred with each complication and necessitated multiple surgeries. This pain is sensed via substance P, a member of the tachykinin peptide family, which has been demonstrated to transmit nociceptive sensation via primary sensory fibres to the spine and brainstem [21]. Substance P has been shown to increase and to mediate BMP-dependent HO formation [22]. Its serum level is elevated in HO patients, and serum from neurogenic HO mice was demonstrated to induce osteogenic transformation of mesenchymal progenitor cells in vitro [23].
Mesenteric ossification can recur after surgical removal of the mesenteric bony fragments; calcium and alkaline phosphatase levels can predict recurrence, as a low calcium level together with a high alkaline phosphatase level may indicate ongoing osteogenesis and active osteoblasts [2]. Our patient had normal calcium and alkaline phosphatase levels preoperatively (Fig. 5), suggesting that mature ossified bone had already formed, which was confirmed by histopathology.
Among the HMO cases presented in the literature, only five showed elevated levels of alkaline phosphatase, of which four presented 3 weeks after the predisposing trauma or surgery, whereas the patient in the current case was admitted 9 months after the inciting operation. This indicates the wide variation in the speed of HMO pathogenesis from case to case, which might be attributed to the level of inflammation during and after the surgeries, the amount of cytokines released, and the body's ability to control and adjust the inflammatory response. Moreover, the pathogenesis of HMO might be accelerated or delayed by the post-operative management of the case, as proper anti-inflammatory treatment might prevent or delay its course. Additionally, the delayed formation of HMO encountered in the current case suggests the need for long-term management with continuous monitoring of serum inflammatory cytokines, even after the pain associated with the surgical operation has subsided, so as to keep controlling the inflammatory milieu and avoid delayed HMO formation.
Conclusion
In summary, we report a rare case of HMO, which should be considered one of the delayed complications of abdominal surgery or trauma. The time window in which HMO may present following major abdominal trauma or surgery should be regarded as extended, and HMO should be kept in the differential diagnosis, especially when there is a history of previous surgery or trauma. Diagnosis of HMO should rest mainly on the characteristic radiographic findings rather than on the level of alkaline phosphatase, which is elevated only during the active osteogenic stage. Continuous monitoring and control of inflammatory cytokines, not only in the short term post-operatively but over an extended period, may prevent or delay HMO formation.
Sources of funding
No funding was received.
Ethical approval
The study was approved by the Research Ethics Committee at Al-Hada Armed Forces Hospital (reference number 19200); the approval is available upon request from the corresponding author.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Research registration (for case reports detailing a new surgical technique or new equipment/technology)

Not applicable.

CRediT authorship contribution statement

Sara Assiri and Raad Althaqafi led the writing of the case report and literature review; Rawan Aloufi, Fawaz Althobaiti, Budur Althobaiti, and Mohammad Al Adwani assisted with the writing and revision of the manuscript. All authors read and approved the final manuscript.
Effectiveness of ReAttach Therapy in Management of Emotional Dysregulation with OCPD, PTSD, Anxiety and Stress in Young Adults
Emotional dysregulation has three major components that contribute to some of the major symptomatology of disorders such as Obsessive Compulsive Personality Disorder, Post-Traumatic Stress Disorder, anxiety and stress. These components are excessive intensity of emotions, poor processing of emotions, and negative reactivity to emotions. They overlap with, as well as stand apart from, possible manifestations of emotional dysregulation such as angry or behavioural outbursts (destroying or throwing objects), aggression towards self or others, and threats to kill oneself, especially in young adults. These patients have chronic and ongoing difficulty with the level of cooperation and social ability required for a healthy and fulfilling existence. ReAttach Therapy, through its Multiple Sensory Integration Processing by Cognitive Bias Modification, has been found to be very helpful in the effective management of maladaptive emotions. It helps develop interpersonal effectiveness, emotion regulation skills (expressing emotions effectively), behaviour control, and skills for managing distressing situations, which in turn contributes to an overall decrease in the symptomatology of the above-mentioned disorders.
Introduction
Emotional dysregulation generally refers to a condition in which a person's emotional response is poorly modulated and does not fall within the conventionally accepted range of emotive response; it may also be referred to as labile mood, marked fluctuation of mood, or mood swings. Emotions play a great role not only in our lives but also in our history, philosophy and religion. Emotions assist us in evaluating our alternatives, provide motivation to make a change, and tell us about our needs. Emotions are like 'somatic markers' which tell us what we 'want' to do (Damasio, 2005). Emotions help us link to others and constitute a shared 'theory of mind'; an inability to accurately assess the emotions of others results in awkward and dysfunctional interpersonal behaviour (Baron et al., 2009). Taylor (1984) described alexithymia as 'the inability to recognize, label, differentiate and link emotions to an event' (a personality construct characterized by the subclinical inability to identify and describe emotions in the self). Alexithymia is associated with a wide variety of psychopathology, including GAD, PTSD, substance abuse and other problems (Taylor, 1984). During stressful experiences, the intensity of emotions increases, and coping skills play an important role. Difficulty or inability in coping with the experience or in processing the emotions leads to emotional dysregulation. This dysregulation may manifest as either excessive intensification or excessive deactivation of emotions. Excessive intensification of emotions includes any rise of emotions experienced by the individual which is unwanted, intrusive and overwhelming, leading to panic, trauma, dread and terror. Excessive deactivation of emotions leads to dissociative experiences such as depersonalization, derealisation, splitting or emotional numbing. The major components which cause emotional dysregulation are: 1) Emotion sensitivity or excessive intensity of emotions: the person has heightened awareness of subtle stimuli, processes environmental information more thoroughly, and has more vivid perceptions of both positive and negative stimuli.
2) Negative affect or negative reactivity to emotions: our reaction to various stimuli, arising from a preconditioned sensitivity that triggers a perception of the stimulus as threatening and an interpretation of certain situations in a negative light. This may be why some situations, events and behaviours are upsetting and threatening for some people, but not for others who are also present at the time.
3) Poor processing of emotions: the result of inadequate or partial processing and maladaptive emotion regulation strategies. Lack of emotional regulation is generally seen as a set of symptoms common to a number of different psychological conditions but, in some specific disorders, it plays a larger role in the possible manifestation of their symptomatology, which may lead to behavioural problems and may deeply interfere with a person's social interactions and relationships. These emotional dynamics constitute significant response parameters that are influenced by emotion regulation processes (Thompson, 1990).
Aim of the study
In this multiple case study, a probable relationship has been explored between emotional regulation and the disorders under study, a relationship that is linear in nature and prevails as cross-sectional symptoms/features of emotional dysregulation. This relationship has been observed as a pattern cutting across the symptomatology of disorders such as PTSD, OCPD, anxiety, and stress, which often leads to aggressive behaviour. The study is also an observational experiment exploring whether controlling the symptoms of emotional dysregulation through ReAttach therapy alters the symptomatology of the associated disorders. This case study was carried out with the aims of: 1) identifying the probable larger role that emotional dysregulation plays in the symptomatology of major disorders such as OCPD, PTSD, anxiety and stress in young adults; and 2) evolving an effective treatment strategy for emotional dysregulation and its management with ReAttach Therapy.
Research Design and Methods
A multiple case study design was used to examine the effectiveness of ReAttach Therapy in the management of the symptomatology of emotional dysregulation.
Study population
The study was conducted based on case history and therapeutic treatment data collected from five patients.
Tools
Apart from the primary diagnosis, the patients were administered two comprehensive evaluations of the symptomatology of emotional dysregulation.
1) Core Symptoms Evaluation (ReAttach Therapy Institute): a 35-item self-report evaluation with a rating scale of 0-5, based on thoughts or problems that someone might experience and how much these thoughts and problems affect them. The evaluation is calculated on subscales of: i) risky behaviour; ii) short symptom inventory; iii) happiness; and iv) total score. It gives a comprehensive measurement of symptomatology instead of a compartmentalised, narrow, disorder-based psychopathology.

2) Difficulties in Emotion Regulation Scale - Short Form (DERS-SF): an 18-item self-report scale with a five-point response scale, calculated with subscales on: i) strategies for emotion regulation; ii) non-acceptance of emotional responses; iii) impulse control difficulties; iv) goal-directed behaviour; v) awareness of emotional dysregulation; vi) clarity about emotional dysregulation and the resultant behaviour; and total score.

The patients were also evaluated with the primary disorder-based assessments pre and post ReAttach therapy to determine the effect of the intervention on the improvement of symptomatology.
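How such instruments are scored can be made concrete with a short sketch: item responses are summed into subscale scores plus a total. The item-to-subscale mapping below is hypothetical, since the actual scoring keys of these instruments are not reproduced here.

# Illustrative scoring of a self-report instrument: sum item responses
# into subscale scores plus a total. The item-to-subscale mapping below
# is hypothetical; the real scoring keys are not given in this study.
from typing import Dict, List

def score_scale(responses: List[int], subscales: Dict[str, List[int]]) -> Dict[str, int]:
    """Sum item responses (0-based item indices) into subscale scores plus a total."""
    scores = {name: sum(responses[i] for i in items)
              for name, items in subscales.items()}
    scores["Total"] = sum(responses)
    return scores

# Example: an 18-item scale rated 1-5, with six hypothetical 3-item subscales.
subscale_map = {f"Subscale {k + 1}": list(range(3 * k, 3 * k + 3)) for k in range(6)}
example_responses = [3, 4, 2, 5, 1, 3, 2, 2, 4, 1, 3, 5, 2, 4, 3, 1, 2, 4]
print(score_scale(example_responses, subscale_map))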
Detailed reports are presented in tabular form in the results section.
Procedure
Five patients diagnosed with Obsessive Compulsive Personality Disorder, Post-Traumatic Stress Disorder, Generalized Anxiety Disorder, emotional dysregulation and stress, with an age range of 18 to 24, were taken for this study. The initial sessions were held for diagnostic interviews, self-report questionnaires and psychodiagnostic assessments to confirm the diagnoses. The patients were then administered the above-mentioned evaluations to record pre and post results for the symptomatology of emotional dysregulation. Five ReAttach sessions were used for the intervention, with mindfulness used for interaction and as adjunct or follow-up therapy; this procedure was continued for 6 weeks. For this study, only ReAttach therapy and its procedure are discussed. The therapy process was smooth for patients A, D, and E, but moments of emotional dysregulation and aversion to the therapy were evident in patient B, diagnosed with OCPD, and patient C, diagnosed with PTSD. The reasons were explored and overcome with the help of counselling and sessions of mindfulness.
ReAttach process
Words are powerful tools of thought and communication, but when applied in ReAttach therapy along with visual imagination and Multi-Sensory Integration Processing by Cognitive Bias Modification, they open vast possibilities to capture intricate relationships between specific facts, beliefs, assumptions, emotions, thoughts and memories by providing special access to cognitive structures or schemas, helping to identify and restructure the distortions inherent in them. The approach works upon the individual holistically and in a comprehensive way that linear analytic verbal techniques cannot.
ReAttach is an intervention in which people do not have to discuss their problems. ReAttach assists with the collection of facts, impressions and events, which are then processed quickly to ensure that the process does not overwhelm the participants. During ReAttach, the therapist focuses on the process and not on the content of the information. The participants are asked to listen to the thinking assignments given to them during cognitive training. The insights that follow are the participants' own insights, because they process information better (Weerkamp-Bartholomeus, 2015). Every individual has a treasure trove of personal memories, emotions, events and experiences stored in long-term memory. The challenge of ReAttach is to access these pieces of information, fragmented or hidden, stored in long-term memory and then reprocess this information in a coherent manner to reflect the following concepts: self, significant others and social.
To reprocess information, the arousal level of a patient must be regulated slightly above the level of 'falling asleep', at the alpha-theta border (7-8 Hz). This arousal level is important for transitioning from deep relaxation, visualization, creativity and learning to information acquisition from long-term memory (Kirov, Weiss, Siebner, Born, & Marshall, 2009; Molle, 2010).
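The 7-8 Hz figure denotes the border between the theta and alpha EEG frequency bands. Purely as an illustration of that concept (ReAttach itself does not prescribe EEG monitoring, and the signal below is synthetic), the relative power around that border could be estimated from an EEG trace as follows:

# Illustrative only: estimate relative power at the theta/alpha border
# (7-8 Hz) from an EEG-like signal using Welch's method. The trace is
# synthetic; the study does not involve EEG recording.
import numpy as np
from scipy.signal import welch

fs = 256                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)              # 10 s of synthetic data
# Synthetic trace: a 7.5 Hz component near the alpha-theta border plus noise.
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
band = (freqs >= 7.0) & (freqs <= 8.0)
rel_power = psd[band].sum() / psd.sum()   # fraction of total power in 7-8 Hz
print(f"Relative power at the alpha-theta border: {rel_power:.2f}")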
ReAttach procedure
For the ReAttach procedure, instructions explaining the process of the therapy sessions were given to the patients. The therapy starts by regulating the arousal level of the patients. This is achieved by tactile input, tapping the hands of the patients, along with modulation of the therapist's voice, attitude, attention and presence. In other words, it is the combination of multi-sensory inputs acting on the patients that leads to sensory integration processing. The ReAttach process combines the following steps: a) providing the essential tactile stimuli needed to stimulate the tactile sensory channel simultaneously with auditory and visual inputs; b) external arousal regulation to gain and maintain joint attention; c) stimulating multiple sensory integration processing through tapping, to teach the multi-tasking skill; d) improvement of information processing, thus promoting skill enhancement in the context of social and personal growth; and e) oxytocin, administered through physical contact, to improve the social reward system (tapping is done gently on the back of the patient's hand without overstimulating oxytocin production) (Weerkamp-Bartholomeus, 2015).
Results
The results were compiled for the comprehensive evaluations of emotional dysregulation symptomatology in tabular form.

I. Results for overall decrease in symptomatology on the CSE and in emotional dysregulation
Neural underpinnings of Emotional regulation
Human beings are uniquely qualified to employ language, rational thinking, relational processing and memory to execute deliberate, conscious emotion regulation strategies. The ability to self-regulate negative emotions in distress enhances mental and physical well-being, and loss of this capacity confers risk of psychopathology (John & Gross, 2004). A fundamental question in cognitive affective neuroscience is: which neural circuits are involved in the control of emotion? Interrelated regions of the brain may serve as our emotion regulation circuitry. The 'neural architecture' of emotion regulation can be described in a way that distinguishes between two complementary but highly interconnected neural systems: a ventral system that underlies emotional arousal, evaluation of emotional significance and motivational processes, and a dorsal system that underlies relatively effortful, executive control functions such as attention regulation and cognitive control (Critchley, 2005; Luu, Tucker, & Derryberry, 1998). The ventral system is sensitive to information that is motivationally significant and thus capitalizes on rapid and relatively automatic evaluative and regulatory processes. This system is activated under emotional conditions and is modulated by the use of cognitive emotion regulation strategies such as reappraisal. In emotion regulation research, four key structures have been emphasized: the amygdala, the insula, the striatum, and the medial orbitofrontal cortex (Fox, Morgan, Fidler, Daunhauer, & Barrett, 2013), along with the hippocampus, the anterior cingulate cortex (ACC), and the dorsolateral and ventral regions of the prefrontal cortex (PFC) (Davidson, 2000).
Ochsner & Gross (2007) hypothesized that both bottom-up (emotion as a response to environmental stimuli) and top-down (emotion emerging as a result of cognitive processes) models of emotional processing are involved in emotion regulation. When an aversive stimulus is encountered in the environment, a bottom-up emotional response ensues; thus, the amygdala, nucleus accumbens and insula become active.
These appraisal systems communicate with the cortex and hypothalamus to generate responses. A top-down emotional response may also begin with a stimulus in the environment. However, it may be a discriminative stimulus, suggesting that the individual might predict that an aversive stimulus or sensation is on its way. The stimulus in top-down processing may be a neutral one that provokes a negative response in a given context. In such cases, higher cognitive processes are involved in generating a modulated emotional response; these involve PFC appraisal systems acting through the lateral and medial PFC as well as the ACC. This shows that the modes of affective processing are interdependent and too complex to predict.
Role of Oxytocin in Emotional regulation
Oxytocin is a peptide hormone synthesized in the supra-optic and paraventricular nuclei of the hypothalamus, with direct projections into other brain areas where it acts as a neurotransmitter. It is also released into the bloodstream via the posterior pituitary gland to peripheral targets. Oxytocin is not a classical neurotransmitter, i.e. one limited to local action by crossing a synapse between an axon and a dendrite. Rather, oxytocin appears to be released from the neuronal soma, axons and dendrites, acting broadly in the nervous system as a neuromodulator. Upon release, oxytocin may flow through neural tissue by a process termed volume transmission (Neumann & Landgraf, 2012). For example, there is evidence that oxytocin from the paraventricular nucleus (PVN) of the hypothalamus can reach the central amygdala via anatomical 'expressways', allowing this molecule to quickly modulate emotional functions of the amygdala and brain stem (Stoop, 2012). In the presence of oxytocin, avoidance or fear may be replaced by approach and positive emotional states (Carter, 1998). Human studies have confirmed oxytocin's role as a social hormone, mediating many social behaviours involved in forming attachments. In healthy controls, oxytocin decreases cortisol release and anxiety in response to social stress and reduces amygdala activity in response to fearful or threatening visual images or emotional faces (Cochran, Fallon, Hill, & Frazier, 2013).
Oxytocin may mediate the buffering effects of positive relationships and modulate reactivity to stressful experiences. The study by Lane et al. provides the first evidence that oxytocin increases people's willingness to share their emotions. Importantly, oxytocin did not make people more talkative (word counts were comparable across the two groups) but instead increased the willingness to share the specific component that is responsible for the calming and bonding effects of social sharing: emotions (Lane, Luminet, Rimé, Gross, de Timary, & Mikolajczak, 2013).
Thus, the capacity to be close to and sensitive to others, which is typical of loving relationships, can be supported by oxytocin's behavioural effects. In the face of a severe challenge, oxytocin could initially support an increase in arousal and activation of the sympathetic nervous system and other components of the HPA system but, in the face of chronic stress, the anti-stress effects of oxytocin may take precedence, permitting a more passive form of coping and immobility without fear (Porges, 1998). These findings suggest that oxytocin has effects on the regulation of emotion, stress, anxiety, coping and healing.
Neurochemical effects of ReAttach and Oxytocin
During ReAttach, physical contact is provided to the patient, gently and frequently, in the form of tapping on the back of the patient's hand to activate arousal. This in turn stimulates the brain to produce the hormone oxytocin, which plays an important role in the bonding process and is a direct reward of social contact (Weerkamp-Bartholomeus, 2015). Oxytocin facilitates social bonds and trust, and counteracts stress by dampening the activity of physiological stress systems, increasing parasympathetic activity and regulating glucocorticoid receptor expression in the hippocampus. Oxytocin is implicated in the ontogenetic development of the neocortex and thus plays an important role in the construction of internal models. For example, it stimulates genetic regulation of the growth of the neocortex and the maintenance of the blood supply to the cortex, conditions that are prerequisites for the formation and sustentation of internal autobiographical models. In humans, oxytocin also facilitates the cognitive control that helps individuals bring behaviour in line with internal models, and behaviour control is a highly valued aspect of therapeutic success (Carter, 2014). Oxytocin unfolds effects at a higher level of brain functioning: specifically, while increasing familiarity and trust within a social context, oxytocin promotes the assimilation of novel emotional experiences into internal models (Tops et al., 2013). Oxytocin might facilitate the establishment of a positive therapeutic alliance and a relaxed atmosphere during therapy, which makes it easier for patients to awaken conflictual memories and tolerate the negative emotions that concomitantly resurface. The above discussion and results therefore indicate that the ReAttach intervention shows promising results in bringing down the symptomatology of emotional dysregulation, anxiety and stress, which in turn reduces the pathological symptoms associated with the respective disorders. For example, in the case of patient A, with stress and emotional dysregulation, the level of perceived stress went down from 46 to 18, with similar results on the BAI, where the anxiety index decreased from 45 to 09, which is considerable. In patient E, diagnosed with anxiety (GAD), the PSS-14 decreased from 73 to 26 between pre-test and post-test, the BAI from 45 to 16, the STAI-S from 70 to 32, and the STAI-T from 68 to 28. In patient D, diagnosed with emotional dysregulation, the DERS-SF decreased from 47 to 20, the BDI from 12 to 08, and the BAI from 34 to 12. In the case of patient B, with OCPD, the rigid overcontrol of emotion characteristic of OCPD stands apart from the excessive emotional expressivity and affective lability often present in other personality disorders. It is possible that the emotional suppression and constriction associated with the disorder reflect broader underlying emotion difficulties that may deleteriously affect functioning. This explains the eruption of evident moments of emotional dysregulation and brief aversion to the therapy (patient B with OCPD and patient C with PTSD), which was addressed during the therapeutic process. The affective results shown above indicate a decrease not only in STAI and BAI scores but also in the symptoms of OCPD.
Conclusion
A reduction in emotion regulation problems and explosive behaviour was observed in all five patients, across the various disorders. These findings might be the result of attaining a more realistic and coherent understanding of themselves and the world through better multi-sensory information processing. ReAttach therapy, through its Multi-Sensory Integration Processing by Cognitive Bias Modification, has been found to be helpful in emotional dysregulation. It facilitates social bonding, acts as a buffer against stress and anxiety, and supports affect regulation. One might think of oxytocin as the magic ingredient in Danish hygge: the cosy, contented feeling of being with trusted others.
Nickel-catalysed selective migratory hydrothiolation of alkenes and alkynes with thiols
Direct (utilizing easily available and abundant precursors) and selective (both chemo- and regio-) aliphatic C-H functionalization is an attractive means by which to streamline chemical synthesis. With many possible sites of reaction, traditional methods often need a polar directing group nearby to achieve high regio- and chemoselectivity and are often restricted to a single site of functionalization. Here we report a remote aliphatic C-H thiolation process with predictable and switchable regioselectivity through NiH-catalysed migratory hydrothiolation of two feedstock chemicals (alkenes/alkynes and thiols). This mild reaction avoids the preparation of electrophilic thiolation reagents and is highly selective for thiols over other nucleophilic groups, such as alcohols, acids, amines, and amides. Mechanistic studies show that the reaction proceeds through the formation of an RS-Bpin intermediate, and that THF as the solvent plays an important role in the regeneration of the NiH species.
Organosulfur compounds, metabolites or macromolecules essential to life, are prevalent in pharmaceuticals, natural products, and materials (Fig. 1a) [1][2][3]. They compose ~20% of all Food and Drug Administration-approved drugs [4,5]. The development of protocols for the sustainable and efficient construction of C-S bonds is important in chemical synthesis. Commonly used methods for the construction of such bonds include Michael addition, SN2-type alkylation, and the powerful transition-metal-catalyzed C-S cross-coupling [6][7][8][9][10][11]. One potential and more attractive strategy for their construction is through selective C-H functionalization [9], because this allows the use of more widely available starting materials or more concise synthetic routes. However, to achieve excellent regio- and chemoselectivity, most of these processes need a polar directing group in the vicinity, and this limits their application in organic synthesis. As an alternative, the recently emerging metal-hydride-catalyzed [12][13][14][15] remote olefin functionalization can install a functional group at a distal position in a hydrocarbon chain under mild conditions. Starting from ubiquitously available olefin-containing substrates, and using an extra hydride source, NiH-catalyzed remote hydrofunctionalization [39][40][41][42][43][44][45][46][47][48][49][50][51][52][53] with aryl/alkyl halides as electrophiles has been established as a powerful protocol for the construction of a diverse range of C-C bonds at distal, inert sp3 C-H positions (Fig. 1b) [43][44][45][46][47][48][49][50][51][52][53]. However, the electrophilic amination or thiolation reagents required to forge the more challenging carbon-heteroatom bond are generally not stable and often not commercially available, especially when bearing functional groups. Their preparation is nontrivial and time-consuming, and often involves the use of stoichiometric amounts of hazardous reagents.
To address these challenges, we asked whether the widely available unmodified nucleophilic thiols could be employed directly. Here we present the successful application of these ideas and describe an operationally trivial approach that allows direct selective sp3 C-H thiolation with a naked thiol (Fig. 1c) at a distal benzylic position, at the α-carbon of an ether, or at the terminal position of the hydrocarbon chain of an alkene. Several features of a transformation of this sort can be highlighted: (a) high chemoselectivity for the thiol group in the presence of a series of potentially reactive functional groups such as amides, acids, alcohols, and amines; (b) excellent regioselectivity among multiple sites, including a benzylic position, a carbon α to an oxygen atom, or a terminal position; (c) a regioconvergent process for the conversion, for example, of isomeric mixtures of olefins; and (d) feedstock thiols as the thiolation reagents, avoiding the preparation of electrophilic thiolation reagents.
Regiodivergent thiolation reaction design and optimization.
We began our investigation by examining the remote hydrothiolation of 4-phenyl-1-butene (1a) with benzyl mercaptan (2a). After careful evaluation of a variety of nickel sources, ligands, bases, hydride sources, and solvents (Fig. 2), we found that a reaction at 60°C employing a combination of NiI2 as the catalyst, bathocuproine (L1, 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline) as the ligand, HBpin (pinacolborane) as the hydride source, Li3PO4 as the base, and mixed tetrahydrofuran/acetonitrile (THF/CH3CN) as the solvent delivers the desired migratory benzylic thiolation product (3a) in 75% isolated yield as a single regioisomer [regioisomeric ratio (major product : all other isomers) >99:1] (Fig. 2, entry 1). Other combinations examined give diminished yields and selectivities (Fig. 2, entries 2 and 3). Ortho substituents in the bipyridine ligand are critical for the reaction, and use of a similar ligand (L2) leads to inferior yield and regioisomeric ratio (Fig. 2, entry 4). Changing the hydride source to silanes, such as dimethoxy(methyl)silane, results in none of the desired product (Fig. 2, entry 5). The addition of the base Li3PO4 improves the yield but is not essential (Fig. 2, entry 1 vs. entry 6). CsF, which we previously used in remote hydroarylation reactions [45], leads to complete failure of the reaction (Fig. 2, entry 7). Notably, control experiments show that the cyclic ether solvent is necessary for the reaction to proceed (Fig. 2, entry 8 vs. entry 9). In addition, a slightly lower yield is obtained at lower temperature (Fig. 2, entry 10). Interestingly, after a thorough re-evaluation of the reaction parameters, we were able to switch the thiolation site to the terminal position [54][55][56][57][58][59][60] to generate a very good yield of the linear thioether as a single isomer (Fig. 2, entry 11).

Fig. 2 Optimization of regiodivergent remote hydrothiolation. *Yields were determined by gas chromatography (GC) analysis using n-tetradecane as the internal standard. The yield within parentheses is the isolated yield and is an average of two runs (0.20 mmol scale). †r.r. refers to regioisomeric ratio, representing the ratio of the major product to the sum of all other isomers as determined by GC analysis. ‡The linear thioether (3A) is obtained as a single isomer; conditions B for terminal selectivity: NiI2 (5 mol%), L2 (6 mol%).
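Since r.r. is defined as the ratio of the major product to the sum of all other isomers, the bookkeeping from GC peak areas can be made explicit with a small helper; the peak areas in this minimal Python sketch are invented for illustration.

# Minimal sketch: regioisomeric ratio (r.r.) as defined in the text,
# i.e. major product : sum of all other isomers, from GC peak areas.
# The example peak areas below are invented for illustration.
def regioisomeric_ratio(areas):
    """Return (major, rest) normalized so that major + rest = 100."""
    major = max(areas)
    rest = sum(areas) - major
    total = major + rest
    return 100 * major / total, 100 * rest / total

peak_areas = [97.2, 1.1, 0.9, 0.8]       # hypothetical GC areas, one per isomer
major, rest = regioisomeric_ratio(peak_areas)
print(f"r.r. = {major:.0f}:{rest:.0f}")  # prints "r.r. = 97:3"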
Substrate scope. With the optimal conditions in hand, we sought to define the scope of the alkene component (Fig. 3). First, an array of terminal aliphatic alkenes with a variety of ortho, meta, and para substituents on the remote aryl ring (3c-3l) perform well, producing the desired benzylic thioether exclusively (Fig. 3a). Substrates containing both electron-rich (3c and 3g) and electron-deficient (3d-3f and 3h) arenes are suitable for this reaction. Structurally complex aromatic systems such as a sugar-linked aryl ring (3j) and a camphor-linked aryl ring (3k) are amenable to the migratory cross-coupling. Heteroaromatic substrates, such as those containing a pyridine-linked aryl ring (3l) or a thiophene (3m) in place of the aryl group, are also well tolerated. Unactivated internal olefins also undergo the alkene isomerization-hydrothiolation smoothly (Fig. 3b). As expected, E/Z alkene mixtures (3n-3r and 3t) react well, and high selectivity for thiolation at the benzylic position is observed regardless of the starting position of the C=C bond. For substrates with a tertiary carbon at the benzylic position, which previous reports [43,46,48] have noted as challenging, migration towards the benzylic position and subsequent thiolation to generate the S-containing tetrasubstituted carbon center is still preferred (3t). Styrenes themselves (3u-3c′) are also suitable partners under these conditions (Fig. 3c). Compounds with a variety of functional groups on the aryl ring of styrene are tolerated, including an aryl fluoride (3u), a boronic acid pinacol ester (3v), an aryl nitrile (3w), and an ester (3c′). The reaction can also be extended to α-methyl styrenes to provide exclusively the benzylic thioethers (3b′ and 3c′) with a fully substituted carbon center. It is important to highlight that alkynes, another type of easily prepared starting material, can undergo reductive remote hydrothiolation to deliver the same migratory thiolation products (3d′ and 3e′, Fig. 3d). Mechanistically, the vinylnickel intermediate formed upon hydrometallation of the alkyne is selectively captured by a proton source (the thiol), forming an alkene. Isotope labeling experiments indicated that the protons in this step come mainly from the thiol (see Supplementary Fig. 10 for details), while the alkylnickel intermediate formed upon hydrometallation of this alkene selectively engages in the NiH-catalyzed chainwalking-thiolation reaction. Finally, the current benzylic regioselectivity can also easily be extended to thiolation at the carbon α to an oxygen atom, producing the monothioacetals (3f′ and 3g′, Fig. 3e) in moderate yields as single isomers.

Fig. 3 Substrate scope of alkene component. Under each product are given yield in percent and regioisomeric ratio (r.r.). Yield refers to isolated yield of purified product (0.20 mmol scale, average of two experiments). r.r. represents the ratio of the major product to the sum of all other isomers as determined by gas chromatography (GC) analysis; ratios reported as >95:5 were determined by crude proton nuclear magnetic resonance (1H NMR) analysis. †Forty-eight hours. Me, methyl; Et, ethyl; nBu, n-butyl; nPent, n-pentyl; Cy, cyclohexyl; TBS, tert-butyldimethylsilyl.
Further investigation demonstrated the broad scope of the thiol partner (Fig. 4). In general, both aliphatic (4b-4m) and aromatic (4n-4c′) thiols are excellent reaction partners and give the corresponding benzylic thioethers in good to excellent yields and regioselectivities. An array of primary and secondary aliphatic thiols all prove to be competent substrates, delivering the desired benzylic thiolation products in good to excellent yields (4b-4l). For the sterically hindered tertiary thiol, a disulfide can be used to obtain a satisfactory yield (4m). In addition, a variety of electron-withdrawing (4o-4u and 4w) and electron-rich (4v and 4x-4b′) thiophenol derivatives are competent substrates. A variety of heterocycles frequently found in medicinally active agents, including furan (4i, 4c′) and thiophene (4j), are also compatible, and a variety of functional groups are readily accommodated, including esters (4e and 4h), an aryl fluoride (4p), an aryl chloride (4q-4s), and ethers (4u-4x). Notably, potential coupling motifs, including a primary alcohol (4f), a primary carboxylic acid (4g), a phenol (4y), a primary aniline (4z), a secondary Boc carbamate (4h), and a secondary acetyl amide (4a′), remain intact, which demonstrates both the excellent chemoselectivity of this transformation and its potential application in selective cysteine conjugation of biomolecules.
Discussion
To gain insight into the chainwalking process of olefin isomerization, olefin 1a was subjected to the standard reaction conditions in the absence of any thiol. A significant amount of other olefin isomers arising from olefin isomerization is observed within 1 h, which indicates that olefin isomerization does not depend on the presence of the thiol and also suggests that it is unrelated to the C-S coupling (Fig. 5a, above). Additionally, consistent with our previously reported results, a mixture of olefins is observed when the reaction is run to partial conversion (Fig. 5a, below), indicating that olefin isomerization proceeds with fast dissociation and re-association of the NiH species. Furthermore, the corresponding isotopic labeling experiments were carried out with deuterothiol and deuteropinacolborane, respectively (Fig. 5b). No deuterium incorporation in the desired product is noted when deuterothiol is used, indicating that the thiol is not involved in the chainwalking process. As expected, deuterium scrambling and deuterium incorporation are observed at all positions along the aliphatic chain, with the exception of the benzylic position. Mass spectrometric analysis revealed a mixture of undeuterated, monodeuterated, and polydeuterated products. This is consistent with the hypothesis that chainwalking occurs with dissociation and re-association of free NiH/NiD from the NiH/NiD-alkene complex. Finally, no migratory reaction takes place when the linear sulfide (3A) is resubjected to the standard conditions, suggesting that chainwalking precedes the C-S coupling (Fig. 5c, above). Following the detection of trace amounts of a remote hydroboration product (6), this migrated hydroboration intermediate was resubjected to the standard conditions. No thiolation product was observed, suggesting that the C-S coupling step does not proceed through the remote hydroboration intermediate (Fig. 5c, below).

Fig. 4 Substrate scope of thiol partner. Under each product is given yield in percent, and either the regioisomeric ratio (r.r.) or the diastereomeric ratio (d.r.). Yield and r.r. are as defined in the Fig. 3 legend. †5.0 equiv. HBpin was used. ‡Di-tert-butyl disulfide (0.10 mmol, 0.50 equiv.) was used. Boc, tert-butoxycarbonyl; Ac, acetyl; DMPU, 1,3-dimethyl-3,4,5,6-tetrahydro-2(1H)-pyrimidinone.
To shed light on the thiolation process, a variety of experiments were carried out. When 0.5 equiv. of a symmetrical disulfide is used instead of 1.0 equiv. of the corresponding thiol, the desired remote hydrothiolation product is obtained in comparable yield (Fig. 6a), suggesting that the disulfide might be a potential reactive intermediate derived from the thiol. Monitoring the remote hydrothiolation of a disulfide by 19F NMR (fluorine-19 nuclear magnetic resonance), however, shows that the disulfide (δ = −114.9 ppm) is first transformed into an RS-Bpin intermediate (δ = −116.8 ppm) (Fig. 6b). Significantly, analogous experiments on the corresponding thiol substrate (δ = −118.9 ppm) also reveal the generation of this RS-Bpin intermediate (δ = −116.8 ppm), with no trace of disulfide detected (Fig. 6c). Meanwhile, the generation of H2 in the standard reaction is observed by gas chromatography (GC) analysis. Overall, these results reinforce the notion that the disulfide is not the active intermediate derived from the thiol, and suggest that the in situ generated RS-Bpin is the actual thiolation reagent.
Encouraged by these results, we wondered whether a pre-generated RS-Bpin reagent could be employed directly instead of a thiol. Indeed, as shown in Fig. 7, on changing the thiolation reagent from a thiol to RS-Bpin 2A′, generated in situ from the thiol and HBpin, a competent yield of 3a is obtained. In this case, only a stoichiometric amount of the alkene is required: the desired thiolation product is still obtained in comparable yield when a 1:1 stoichiometry of alkene and RS-Bpin is used, and the yields are even better (88 and 93%, respectively) when a slight excess (1.2 equiv.) of either the alkene or the RS-Bpin is used. This finding, together with the results disclosed in Fig. 6, suggests that the reactive intermediate derived from the thiol is the RS-Bpin complex. Control experiments reveal that the solvent THF plays an important role: only a 5% yield of the desired product is observed in its absence. As shown in Fig. 8a, different reactivity is observed when THF is replaced by a variety of other ethers under the standard reaction conditions. The nature of the ether backbone plays a crucial role: only cyclic ethers bearing a β-hydride produce the desired product in reasonable yield, whereas acyclic ethers or cyclic ethers lacking a β-hydride do not. We postulated that the ether solvent might participate in the catalytic cycle. To verify this hypothesis, additional studies of the amount and consumption of the ether were carried out. As shown in Fig. 8b, both the yields and the regioselectivities improve as the amount of THF is increased. The consumption of the ether can also be observed during the reaction (Fig. 8c). Only trace amounts of product (~1% yield) are produced during the first 6 h, and the regioselectivity is poor in the first 12 h of the reaction. Subsequently, the yield of the desired product increases significantly, whereas the yield of the other regioisomer (the linear isomer) fails to increase after the first 12 h (Fig. 8c, entry 3 vs. entry 1). The origin of this apparent induction period, as well as of the initial low regioselectivity, is still under investigation. Finally, as shown in Fig. 8d, when deuterated THF-d8 is used in both the standard and the modified standard reactions, a small amount of deuterium scrambling and deuterium incorporation at all positions along the aliphatic chain of the desired product except the benzylic position is observed by 2H NMR and mass spectrometric analysis. This indicates that a small amount of NiD is involved in the chainwalking process and that this NiD comes from the deuterated THF-d8.
To probe further the role of THF, boron-11 NMR (11B NMR) experiments were carried out to trace the standard reaction. As shown in Fig. 8e, the generation of the RS-Bpin intermediate (δ = 33.6 ppm) is confirmed again by 11B NMR spectroscopic analysis. We also observe two new boron signals accompanying the consumption of THF, which match the signals of BpinOBpin (δ = 21.2 ppm) and ROBpin (δ = 22.3 ppm).
Although an in-depth mechanistic discussion must await further investigation, a description of the proposed pathway, based on the above mechanistic studies, is shown in Fig. 9. The active nickel(I) hydride species (I) [61][62][63][64][65][66], which is initially formed from a Ni(II) precursor, a ligand, and a hydride source, inserts into the alkene (1a) and initiates the relatively fast and reversible chainwalking process through iterative β-hydride elimination/migratory reinsertion. A series of isomeric alkylnickel(I) species (II, IV, …) is then accessed through this chainwalking process. Controlled by the choice of ligand, selective reaction of the benzylic alkylnickel(I) intermediate (IV) with the thiolation reagent, the RS-Bpin (2A′) generated in situ from the thiol and pinacolborane, probably through oxidative addition and sequential facile reductive elimination [67][68][69][70], then delivers the benzylic thiolation product (3a) along with LNi(I)Bpin (V). The active LNi(I)Bpin (V) species is then captured by THF to regenerate the NiH species (I), closing the catalytic cycle.

Mixtures of olefin isomers are generally more widely available than single isomers. Owing to the difficulty of isolating each pure isomer, such mixtures are substantially cheaper than the pure isomers, and their conversion in a regioconvergent process into value-added specialty chemicals is therefore of considerable interest. As expected, the robustness and utility of this catalytic system are further demonstrated by employing isomeric mixtures of olefins as starting materials; the benzylic thioethers (3s and 3o) are obtained in high yield as single regioisomers in both cases (Fig. 10a).
Finally, as shown in Fig. 2 and Fig. 10b, the benzylic regioselectivity can also be switched to the terminal site to form the anti-Markovnikov hydrothiolation products [54][55][56][57][58][59][60]. A series of terminal alkenes are effectively hydrothiolated under modified reaction conditions (8a-8d).
In summary, we have developed a NiH-catalyzed remote hydrothiolation of alkenes using thiols directly as the thiolation reagents. The transformation uses readily accessible alkenes/alkynes and thiols as starting materials and earth-abundant nickel salts as catalysts. The mild process allows the direct installation of a thioether group at a benzylic, α-ether, or terminal position with excellent regio- and chemoselectivity, as well as high functional group tolerance. Moreover, mechanistic studies reveal that the activated thiolation reagent is the RS-Bpin intermediate, and that the ether solvent plays an important role in the regeneration of the NiH species. Finally, the practical value of this transformation is highlighted by the regioconvergent conversion of unrefined isomeric mixtures of alkenes. The application of this protocol in cysteine bioconjugation, as well as an asymmetric version of the current transformation, is currently in progress and will be reported in due course.
Methods
General procedure for NiH-catalyzed remote hydrothiolation. To an oven-dried 8 mL screw-cap vial equipped with a magnetic stir bar were added NiI2 (3.2 mg, 5.0 mol%) and bathocuproine (L1, 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline) (4.0 mg, 6.0 mol%). The vial was taken into a nitrogen-filled glove box, anhydrous THF (0.40 mL) and CH3CN (0.20 mL) were added, and the mixture was stirred for 10 min, at which time the alkene (0.40 mmol, 2.0 equiv.), benzyl mercaptan (25.0 mg, 0.20 mmol, 1.0 equiv.), HBpin (pinacolborane, 100 μL, 0.70 mmol, 3.5 equiv.) and Li3PO4 (50 mg, 0.40 mmol, 2.0 equiv.) were added to the mixture in this order. The tube was sealed with a Teflon-lined screw cap, removed from the glove box, and stirred at 60°C for 24 h (at 750 rpm). After the reaction was complete, the reaction mixture was immediately filtered through a short pad of silica gel (using EtOAc in hexanes) to give the crude product. n-Tetradecane (20 μL) was added as an internal standard for GC analysis, and 1,1,2,2-tetrachloroethane (10.5 μL, 0.10 mmol) was added as an internal standard for 1H NMR analysis of the crude material. The product was purified by chromatography on silica gel for each substrate. The yields reported are the average of at least two experiments, unless otherwise indicated. See the Supplementary Information for more detailed experimental procedures and characterization data for all products.
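As an arithmetic cross-check on the quantities in this procedure, each reagent mass follows from the reaction scale, the loading, and the molar mass. A minimal sketch follows; the molar masses are standard values, not taken from the paper, and the quoted 50 mg of Li3PO4 corresponds to a small excess over the nominal 2.0 equiv.

# Minimal sketch: reagent masses from reaction scale, loading, and molar
# mass, cross-checking the quantities quoted in the general procedure.
# Molar masses are standard values, not taken from the paper.
SCALE_MMOL = 0.20   # limiting thiol, 0.20 mmol (1.0 equiv.)

def mass_mg(loading_equiv: float, molar_mass_g_per_mol: float) -> float:
    """Mass in mg for a reagent at the given equivalents relative to the scale."""
    return SCALE_MMOL * loading_equiv * molar_mass_g_per_mol

print(f"NiI2, 5 mol%:          {mass_mg(0.05, 312.50):.1f} mg")  # ~3.1 mg (3.2 mg quoted)
print(f"Bathocuproine, 6 mol%: {mass_mg(0.06, 360.45):.1f} mg")  # ~4.3 mg (4.0 mg quoted)
print(f"Li3PO4, 2.0 equiv.:    {mass_mg(2.0, 115.79):.0f} mg")   # ~46 mg (50 mg quoted)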
Platform Business in Korea: Advent and Growth of Kakao
With the rapid development of ICT technology, platform businesses are exerting a dominant influence in various fields. This type of business has a great ripple effect in that it creates value by engaging and connecting various market actors such as producers, suppliers, and business partners in the platform ecosystem, rather than directly selling independent products or services. In this vein, the current study introduces the emergence and growth process of Kakao, which holds monopolistic dominance in the Korean mobile messenger market, as an example of a platform business. The company's service began to reach Korean consumers in 2010, and it currently exercises market influence in fields as varied as commerce, entertainment, finance, and transportation, with more growth expected in the future. Based on the case investigation, the background of Kakao's success can be interpreted as a combination of factors: Korea's developed ICT environment, a large population of game users, quick service launches, a strategy of utilizing the founder's previous work experience, and service development geared towards Korean consumers.
Introduction
In recent years, platform business strategies have gained massive interest not only in academia but also in practice [1][2][3][4][5]. In this "Big Wave" era, Kakao Talk has become a national communication channel in Korea since its emergence in 2010 [6]. Within this 11-year period, Kakao has become a representative company, expanding its business to encompass entertainment, finance, and transportation. In recent years, the number of Kakao's subsidiaries reached 118, second only to SK, one of Korea's largest business groups. In terms of market capitalization, it ranked third, followed by Naver, which has dominated the Korean Internet market since the late 1990s.
The company has positioned itself as a leader in Korea's mobile ecosystem by entering various areas, such as artificial intelligence, big data, the internet of things, and blockchain. It actively participates in technology development in the era of the 4th Industrial Revolution and maintains a leading position.
As Kakao's business area has become deeply established in the daily lives of many people, related research investigating its strategies and impacts has been conducted in various ways. Existing research on Kakao covers the growth process of Kakao Talk, motivations and intentions for continuous use, SNS mobile marketing services, the quality of mobile messenger interface design, and platform business and M&A strategies. In particular, it epitomizes effective platform business strategies in the era of the fastest-changing ICT environment.
Preliminary studies analyzed Kakao Talk, related services, platforms, M&A, and strategies from various perspectives. However, research on Kakao's overall entrepreneurial growth process remains scant. Thus, this study examines the entire process from the start-up to the present and takes three stages into consideration. In addition, the environmental characteristics of the Korean market that acted as the background of Kakao's success must be analyzed to derive implications.
Advent of Kakao
Kakao's chairman, Bum-soo Kim, joined Samsung SDS in 1992 after obtaining his Master's degree in Industrial Engineering at Seoul National University. The company was responsible for the development and operation of a PC telecommunications service called "Unitel." In this process, he realized that the Internet era would change the business environment, which led him to quit a stable job in 1998. He then established an Internet-based computer game company called "Hangame" in early 1999 and attracted three million members in five months. First-mover advantages, including technological leadership, preemption of scarce resources, and lower consumer switching costs, were achieved (Lieberman and Montgomery, 1988). Soon, the number of subscribers reached 10 million owing to the Internet boom, and the company grew rapidly. Meanwhile, NaverCom, which had spun off from Samsung SDS, provided an Internet search engine with a limited number of users. To overcome the limitations of each company, Chairman Bum-soo Kim and Chairman Hae-jin Lee merged Hangame and NaverCom in July 2000, and the company changed its name to NHN in September 2001. NHN initially generated revenues from the sale of game items, and it started to gain popularity among Korean users through JiSikIn, a knowledge exchange service embedded in the Naver portal. Later, it became the No. 1 portal company in Korea as it started to earn profits from keyword-based search advertisements.
As Naver's major revenue source shifted to these areas, the role of Chairman Bum-soo Kim was reduced, which compelled him to quit and start a new business. After moving to the USA, he launched various social network services, such as "Buru.com" and "Wisia.com," which eventually failed. In this era, Apple successfully launched its first smartphone in 2008, which influenced him to switch his business to a 100% mobile-based one. Figure 1 shows the main log-in screen of Kakao Talk. Kakao initiated its business by providing a free platform, much as Hangame had. This strategy is highly similar to Google's, which provided its search engine for free for two years after its inception to engage advertisers; Facebook also secured enough subscribers over two years through the same strategy. During the two years 2010 and 2011, Kakao spent $15 million (15 billion Korean won) but did not earn any revenue. To strengthen its competitiveness in the domestic market and enter the global market, a domestic game company, Wemade, invested $92 million (92 billion Korean won) in 2012. However, the revenue model remained unclear.
With the release of "Playing Kakao Games" in July 2012, the situation began to improve. By forming strategic alliances with seven partner companies and 10 games, including Anipang have been released. At that time, social games were considered a successful business model for overseas companies, such as Facebook and MySpace. In this game space, exiting users invited their friends on the basis of offline relationships. Anipang was a simple mobile game in which people were able to establish relationships with one another by sending invitations, which is represented through a "Heart" emoji. Through this, Kakao successfully influenced middle-aged consumer groups to be interested in using a "new" device called "smartphone." In 2012, Anipang was able to generate 10 million won (10 billion won) in sales. The revenue allocation principle of the App Store:KakaoTalk:developers = 30:21:49 resulted in a 2.1 billion revenue. To secure subscribers, it is often forced to use the Kakao platform, which brought market dominance to Kakao.
With the "free installation policy" and "relationship-based invitation," the number of users reached 45 million as of 2012, which is 90% of the entire population of Korea, as shown in Figure 3. This was part of a platform envelopment strategy [7][8] that extends the scope of usage to other areas on the basis of platform power [9][10][11][12][13], and it caused a lock-in phenomenon in which users cannot escape from Kakao Talk.
Growth of Kakao
In May 2014, Kakao merged with a web portal, Daum Communications, and changed the company name to Daum Kakao. The merger was executed at a ratio of 1 (Kakao) : 1.556 (Daum), given that the mobile market was expected to grow faster in the future. The company then reinforced itself and began to provide global services in 15 languages in 230 countries.
Kakao's "fast-follower" strategy, which quickly pursued the Facebook's WhatsApp, was driven by Chairman Kim Bum-soo's quick decision making after the failure of Buru.com and Wisia.com. The background of Kakao's fast-follower strategy can also be found in Naver's case.
As a revenue source, Kakao Games was first spun off, which was a big stepping stone for growth. Concurrently, K-Cube Ventures, which had been established to invest in promising startups, was incorporated as an affiliate; it was later reorganized into a holding company (KCUBE Holdings) and a venture capital company (Kakao Ventures). Kakao Investment focuses on start-ups larger than those Kakao Ventures invests in, taking over companies whose businesses are already established to a certain extent; the companies Kakao Ventures has invested in include American augmented reality startups. Beyond these efforts, Kakao Pay, Kakao Commerce, and Kakao Mobility were established.
To promote its music business, Loen Entertainment, which used to run Melon, was incorporated as Kakao M. Recently, by acquiring numerous music-related software developers and entertainment agencies, the company has been expanding into an entertainment company.
Kakao started with "Seoul Bus," a bus arrival information app, after the merger with Daum Communications. It acquired (6.67 billion won), K-Cube Ventures (5.55 billion won), Cellit and a Global Positioning System (GPS) app in May 2015. It acquired Rock and All, which is known as an application called "Kim Gisa," for $62.6 million (62.6 billion won) and Path, Indonesia's top three social networking websites $26.3 million (26.3 billion won).
Kakao's M&A strategy peaked in 2016. On January 11, Kakao announced its acquisition of Loen Entertainment, which was famous for its music platform, Melon. The acquisition amounted to $1.8B (1.8743 trillion won), which was extremely exceptional. This was surprising considering that Daum Communications was only worth 100 billion won when it acquired Lycos. The acquisition of Loen, which Kakao announced, aimed to "secure growth engines for mobile content platforms and strengthen competitiveness." Many reasons exist for this merger. First, Kakao could enjoy many advantages, such as adding Kakao Pay as a payment method for Melon, selling music on Kakao Page, and providing exclusive contents of Loen Entertainment artists on Kakao TV. Second, Kakao attempted to lower the dependency on gaming by diversifying revenue sources to Loen, which constantly generated $60 million (60 billion Korean won) of sales yearly. Last, Loen utilized K-pop music to enter the global market, which was a good choice to leap to the platform. Figure 4 shows the snapshot of Kakao's diversified business areas. Based on its brand power and popoularity, Kakao became the No. 1 company to work for, followed by many competent Korean companies. More affiliates, such as Kakao Pay Corp., a mobile-payment service, Kakao Mobility Corp., a transportation service, and Kakao Entertainment, a music and contents company are expected to grow as next-stage cash cow sources.
Kakao has strived to expand its services from the digital world to physical services, dubbed Online-to-Offline (O2O). A golf service implemented by Kakao VX, a subsidiary of Kakao Games, is a good example.
The next focus is culture and entertainment, propelled by one of its affiliates, Kakao Entertainment. This company was formed through the merger of a music streaming service platform, Kakao M, and a content providing company, Kakao Page. Kakao Entertainment acquired a 100% stake in Antenna Music, a pioneering indie music label in Korea, and negotiated with Soo-Man Lee, one of the trailblazers in the modernization of the Korean music production industry.
Webtoons are another major pillar of Kakao's entertainment business. It recently acquired Japan's webtoon platform, Piccoma, for $600M (708 billion Korean won). In Japan, Kakao is trying to acquire additional intellectual property (IP) rights, which will become one of the revenue sources for the entire business group. With these efforts, Kakao's global market size has been steadily increasing, as shown in Figure 5.

Many reasons exist for this success. First, the Korean market has a larger share of the national economy in ICT than in other industries: ICT accounts for roughly 10% of total value added, ranking first among OECD countries [14], and 4.7% of all employment is in ICT, also ranking first among OECD countries [15]. Such a well-established ICT environment played a major role in Kakao's development of mobile technology and its rapid commercialization. Given the presence of large and small ICT service providers, including smartphone developers such as Samsung Electronics and LG Electronics, and game companies such as Netmarble, NCsoft, and Nexon, Kakao was able to quickly procure skilled ICT personnel and catch up with WhatsApp, which had launched its mobile messenger earlier. In addition, when the Internet revolution broke out 20 years ago, a company called Naver survived fierce competition in the browser and portal industries. Given that most of Kakao's founders and early members are from Naver, the presence of people with a sense of, and skills in, existing ICT services also contributed. Although Google came to dominate the search engine market in most countries, Naver remains influential in Korea. Moreover, Koreans are the top users of mobile apps among major countries; with the increasing number of mobile app users on platforms such as Kakao Talk, the use of tools to communicate with one another has also increased.
Second, in terms of growth, "Playing Kakao Games" played a significant role, which utilized an advantageous position in Korea, the fourth largest game market in the world.
Third, many messenger developers, including Daum's My People and Naver's Line, delayed their releases and gave Kakao Talk time to establish itself. In addition, although mobile carriers such as SKT or KT levied 10 to 20 Korean won per use, Kakao Talk offered its platform for free; in this period, WhatsApp maintained a pay-per-use policy until it was acquired by Facebook. Moreover, while Line expanded to Japan and Southeast Asia, Kakao was able to consolidate its position in the Korean market. It launched its Plain and Brunch services, which epitomized the "snack" culture. Its archrival Naver blocked advertising posts and provided a UX ideal for writing; Kakao, in turn, secured quality bloggers with a system that supports writers with good ideas and skills.
Fourth, the founder, Bum-soo Kim, made use of the experience he acquired at his previous workplace, Naver. After gaining a user base, Kakao provided expanded services, which is highly similar to NHN's growth strategy. Next, the company read the growth pattern of the global mobile messenger market and quickly released its own service in the domestic market; this "fast follower" strategy worked effectively. Subsequently, the company succeeded in attracting outside investment and building its platform while risking deficits. On the basis of the platform, various services, such as banking, transportation, music, content, and commerce, are provided through active M&A [16][17][18][19][20].
Finally, Kakao strictly targets Korean consumers. Its services have been launched despite criticism of being a classic "copycat business." Although well-known conglomerates, such as Samsung Electronics and Hyundai Motor, focus on overseas markets, most ICT companies in Korea are considered to run "copycat businesses," and such companies are bound to be underrated in terms of their performance. Nonetheless, Kakao quickly adapts new technologies, such as blockchain, to the domestic market. Ultimately, Kakao is considered a leader owing to its ability to create a business model that succeeded in gaining control of the domestic market, even against services ranking first in the global market.
|
2022-01-07T16:10:23.555Z
|
2022-01-01T00:00:00.000
|
{
"year": 2022,
"sha1": "e03bb6b6b74cae8f2c870408c8f278667b74e942",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2022/02/shsconf_ies2021_02001.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6258a54640ae9edbb46699785cc74e524f41bfae",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
225218935
|
pes2o/s2orc
|
v3-fos-license
|
Enhancing Protein Crystallization under a Magnetic Field
High-quality crystals are essential to ensure high-resolution structural information. Protein crystal growth is controlled by many factors, such as pH, temperature, and the ion concentration of crystallization solutions. We previously reported the development of a device dedicated to protein crystallization. In the current study, we have further modified and improved our device. Exposure to an external magnetic field leads to alignment of the crystal toward a preferred direction depending on the magnetization energy. Each material has different magnetic susceptibilities depending on the individual directions of its unit crystal cells. One of the strategies to acquire a large crystal entails controlling the nucleation rate. Furthermore, exposure of a crystal to a magnetic field may lead to new morphologies by affecting the crystal volume, shape, and quality.
Introduction
Many biological systems are composed of neighboring atoms connected by various hydrogen bonds [1,2]. It is also believed that hydrogen atoms play a crucial role in biological functions, such as enzymatic mechanisms [3,4]. Therefore, the precise localization of hydrogen atoms is more important in biological problems than in other systems, for example, in solid-state physics [1,5,6]. Until now, the X-ray diffraction technique has been used as a conventional and easily accessible method to resolve unknown crystal structures [7,8]. However, it also has limitations, because X-rays interact with the electrons of the atoms in the crystal structure. Thus, it is impossible to localize the relative position of light atoms, such as H or Li, in the presence of a relatively heavy atom in a crystal structure. Therefore, several sophisticated tools have been developed to delineate the location of hydrogen atoms and, if possible, to distinguish isotopes such as hydrogen and deuterium [9,10].
Among others, neutrons can be used to determine the relative position of a hydrogen atom in a crystal structure in the presence of a relatively heavy atom [11][12][13]. Although the advantages of neutrons outweigh the benefits of the conventional X-ray technique, a relatively large sample volume is required due to the weak interaction between neutrons and the nuclei in the crystal structure. However, obtaining a large volume of a biological sample, for example, a protein crystal, is a non-trivial task. Therefore, many efforts have been made to increase the sample size and to decrease the time required. Towards this end, many methods have been proposed, including the grid screen and incomplete factorial approaches [14][15][16], and controlling physical parameters such as the temperature [17,18], hydrodynamic field [19], electric field [20][21][22], magnetic field [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38], and electromagnetic field [27,39,40]. Despite various trials, obtaining a single-crystalline outcome remains a challenge for some proteins owing to their low crystallizability, which complicates screening. In general, experimental parameters such as temperature, pH, and pressure, which are thought to be easily accessible, are selected, and a few results are reported as successful [6,17,18,41-44]. Although temperature and pH are relatively easy to control, the control of pressure requires the design of a special device, which is complex and expensive [45,46].
A possible alternative is a magnetic field. When a crystal is exposed to a magnetic field, the crystal tends to align in the direction of the lowest magnetization energy. Each material has different magnetic susceptibilities depending on the individual direction of the unit crystal cells [19]. A large crystal can be obtained by controlling the nucleation rate. The crystal's volume, shape, and quality also can be affected when it is exposed to a magnetic field. In our and others' previous works [28,32,[34][35][36][37][38]47], as a part of these efforts, the so-called magnetic compensation was developed using biological samples and successful results have been reported. The device is composed of permanent magnets available commercially and protein crystals larger than 1.0 mm 3 were obtained. In this work, we report new results using a newly modified and improved device, and also demonstrate the excellent quality of the protein crystals obtained via X-ray diffraction.
Magnetic Device Design
Neodymium magnets were used to determine the effect of the magnetic field on the crystal growth rate, size, and quality. Although we already had a magnetic device, for which we had designed either a 24-well or a 72-well plate to match the sample plate to the device size, we designed a new device using a 96-well plate for convenience (see Figures 1 and 2). We also changed the shape of the permanent magnet from round to a rod type. Based on these modifications, we applied a vertical magnetic field to a solution containing crystals. The magnetic field strength was measured using a Gauss meter; the measured value was 200 mT (see Figure 3). Hanging/sitting drop plates with 96 wells containing a protein and precipitant solution were inserted into the device located in an incubator. The device was made compact enough for use in an incubator and allowed easy temperature control in the range of 4 to 80 °C (±0.2 °C) during sample growth. A few days later, microcrystals were observed.
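For intuition on why even a modest 200 mT field can matter, the magnetic orientation energy of a diamagnetically anisotropic crystal scales as E = Δχ·V·B²/(2μ₀). The sketch below (Python; the anisotropy value is an assumed order of magnitude for protein crystals, not a number from the paper) compares this energy with kT for different crystallite sizes, showing that nuclei are essentially unoriented while large crystals are strongly oriented.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
K_B = 1.380649e-23         # Boltzmann constant (J/K)

def orientation_energy(delta_chi, volume_m3, b_tesla):
    """Anisotropy energy E = delta_chi * V * B^2 / (2*mu0), SI units."""
    return delta_chi * volume_m3 * b_tesla**2 / (2 * MU0)

B = 0.2           # 200 mT, as measured for the device described above
DELTA_CHI = 1e-8  # assumed volume susceptibility anisotropy (order of magnitude only)
kT = K_B * 293    # thermal energy near room temperature

for size_um in (0.01, 1.0, 100.0):  # crystallite edge length in micrometers
    V = (size_um * 1e-6) ** 3
    print(f"{size_um:6.2f} um cube: E/kT = {orientation_energy(DELTA_CHI, V, B)/kT:.2e}")
```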
Hen Egg-White Lysozyme (HEWL)
Protein solutions were prepared from hen egg-white lysozyme (Sigma, L6876) powder dissolved in a 50 mM sodium acetate buffer. Four different protein concentrations (10, 20, 30, and 40 mg/mL) and three different pH values (5.0, 5.5, and 6.0) were used. All solutions were filtered through a 0.20 μm Sartorius filter. Drops were set up by mixing a 1 μL drop of each protein sample with a 1.5 μL drop from a 50 μL reservoir of 0.8 M NiCl2 solution [40] via the sitting-drop vapor diffusion method in the 96-well plate (102-0001-20, Hampton Research HR3-163, Aliso Viejo, CA, USA).
Enoyl Acyl Carrier Protein Reductase (ENR)
The full-length gene was cloned into the pET21b vector with a 6×His-tag. The ENR Gly93-to-Val (G93V) mutation was generated using the original ENR (UniProtKB P49327; Crystal Genomics Inc.). The plasmid was transformed into the Escherichia coli BL21 (DE3) strain (Novagen, California, USA), and the cells were grown at 310 K with ampicillin (50 mg/mL). Protein expression was induced by adding 0.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) for an additional 12 h at 291 K.
The pelleted cells were suspended in a buffer containing 50 mM Tris-HCl at pH 7.5, 200 mM NaCl, and 1 mM PMSF, and then lysed by sonication. The lysed cells were centrifuged at 15,000 rpm for 1 h at 277 K. The supernatant was subjected to ion-exchange chromatography on a HiTrap column (GE Healthcare, USA). The protein was eluted stepwise with 50 mM Tris-HCl at pH 7.5 and 100 mM NaCl containing 0.5 M imidazole. For further purification, the eluents were loaded onto a Superdex® 200 column (GE Healthcare, Illinois, USA), pre-equilibrated with a buffer containing 50 mM Tris-HCl (pH 7.5), 200 mM NaCl, and 2 mM DTT. All chromatographic steps were carried out at 277 K. Prior to crystallization, the purified protein was concentrated to 10 to 20 mg/mL using an Amicon Ultra centrifugal filter.
X-ray Diffraction and Analysis
After crystal growth, the crystals were observed under a digital microscope, and some crystals of good quality were selected for the X-ray diffraction experiments. The crystals were transferred to a cryoprotectant under the optimized crystallization conditions with 12% (w/v) glycerol (Sigma-Aldrich, G5516-1L) to reduce radiation damage. Datasets were collected on the BL-7A beamline at the Pohang Light Source (PLS) facility in South Korea [48]. The datasets were indexed, processed, and scaled using the HKL-2000 software package [49,50], as summarized in Table 1. To assess the effect of freezing on mosaicity, X-ray data of the HEWL crystals were also collected at room temperature (see Supplementary Materials: Table S1).
Crystal Growth and Morphology
Depending on the magnetic field applied, a difference was observed during the lysozyme crystallization process (Figure 4). Applying a magnetic field clearly enhanced the nucleation of the protein crystal in each trial. As shown in Figure 5, orthorhombic and tetragonal crystals were obtained at three different pH values (pH 5.0, 5.5, and 6.0). Two types of crystals were obtained at pH 5.0. However, as the pH increased under the magnetic conditions, the tetragonal crystals outnumbered the orthorhombic crystals. The number of hits per crystallization drop in the magnetic field was high (Figure 6 and Table 2). The crystals obtained were compared to evaluate the effect of the magnetic field on protein crystal growth (Figure 6). The solutions exposed to a magnetic field yielded a higher number of crystals than those unexposed. Crystals grown under a magnetic field also showed better morphology, larger size and volume, and were more likely to be single crystals. The improvement in size and crystal quality is very advantageous not only for X-ray diffraction but also for neutron diffraction experiments.
Comparison of Diffracting HEWL Crystals
The improved quality of the HEWL crystals probed by the X-ray diffraction technique is shown in Table 1 and Figure 7. The unit cell parameters of the lysozyme crystals grown under a magnetic field were mostly similar. For the X-ray diffraction experiments, the exposure time was 1 s and the distance between the sample and the detector was 150 mm. Based on the analysis, upon exposure to a magnetic field, HEWL crystallized in the tetragonal system P4₃2₁2 with lattice parameters a = b = 76.893 Å and c = 36.975 Å. Without a magnetic field, although the space group was the same, the lattice parameters differed slightly (a = b = 77.383 Å and c = 37.032 Å). The resolution limit and mosaicity were used to evaluate the crystal quality. Figure 7a shows the mosaicity of each crystal. The crystals grown under a magnetic field were better than those unexposed to a magnetic field. Comparing the average normalized resolution of the crystals, those grown under a magnetic field showed better results (Figure 7b). Consistent results were obtained with the data sets collected at room temperature (Supplementary Materials: Table S1).
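As a quick numerical check on how small the lattice change is, the tetragonal cell volume V = a²c can be compared for the two conditions (a minimal Python sketch using the parameters quoted above):

```python
# Tetragonal unit cell volume V = a^2 * c for HEWL (values from the text, in Angstroms).
def tetragonal_volume(a, c):
    return a * a * c

v_field = tetragonal_volume(76.893, 36.975)     # grown under the 200 mT field
v_no_field = tetragonal_volume(77.383, 37.032)  # grown without a field
print(f"with field:    {v_field:,.0f} A^3")
print(f"without field: {v_no_field:,.0f} A^3")
print(f"relative change: {100 * (v_field - v_no_field) / v_no_field:+.2f}%")
# The field-grown cell is ~1.4% smaller, i.e. slightly more compact packing.
```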
Crystal Growth and Morphology
The best quality crystal, with a size greater than 1.0 mm3, was obtained at 30 mg/mL, 0.6 M NaNO3, and 0.5 M ADA (pH 5.6) under a 200 mT magnetic field. The largest crystals conformed to a plate shape. Irrespective of the presence of the magnetic field, the solution containing NaNO3 produced only plate-shaped crystals. The protein nucleation rate was enhanced by applying a vertical magnetic field of 200 mT (see Figure 8 and Table 2). The magnetic field noticeably improved the crystal quality, shape, size, and number of crystals.

The crystals grew slightly more slowly under non-magnetic conditions than under the magnetic field conditions, and the maximum size of the crystals also decreased. The non-magnetic-field crystallization trials showed crystal formation within 2 weeks, with rough edges and flaws and significant precipitation within the drop (Figure 8). The crystal growth stopped after 3 weeks, and the precipitate persisted. Crystals were stable for several weeks to months, and no further changes were detected within the drops.

The crystals grew at the bottom of the plate in the magnetic device within 10 days and reached their maximum size within 3 weeks. The crystals were also stable for several weeks up to months under a magnetic field. Generally, crystals obtained without a magnetic field were fewer in number and noticeably smaller than those acquired in the presence of a magnetic field (see Figure 9). The magnetic field improved the size and quality of the crystals and enhanced the nucleation process of the test proteins.

Figure 8. The effect of a magnetic field on nucleation. Crystal growth (a) without and (b) with a magnetic field using a 96-well plate.
Comparison of Diffracting ENRG93V Crystals
Crystallographic information was obtained via X-ray diffraction experiments conducted with the ENR crystals, as shown in Table 1 and Figure 10. The exposure time was 1 s and the distance between the detector and the sample was 300 mm; a 2-D detector was used. The crystals obtained under both conditions showed similar lattice parameters and crystallized in the orthorhombic space group P2₁2₁2₁ (under a magnetic field: a = 108.710 Å, b = 78.546 Å, c = 119.951 Å; without a magnetic field: a = 109.016 Å, b = 78.667 Å, c = 113.883 Å). A significant deviation was found in the c-axis parameter. In the absence of a structural study, it is impossible to explain the lengthening of the c-axis alone under a magnetic field. Further crystallographic investigations are needed to explain this phenomenon.
Discussion
We constructed unique and simple magnetic devices suitable for the 96-well plates (Hampton Research) that are widely used in protein crystallization. We performed vapor diffusion experiments under a low magnetic field, using the HEWL and ENR proteins as test cases. Using a simple magnetic field device equipped with commercial magnets, we successfully obtained single crystals larger than 1.0 mm3 in volume. The crystals grown under a magnetic field were larger and showed perfect transparency, whereas the crystals grown without a magnetic field exhibited low quality and cracks (Figures 4 and 8).
The estimation of mosaicity is often convoluted with the intrinsic beam divergence (the angular deviation from a perfectly parallel beam). The mosaicity of the HEWL and ENR crystals was as low as 0.333° and 0.398°, respectively, measured from frozen crystals on the nearly parallel beam of the PAL synchrotron 7A beamline (Table 1). In general, a good quality crystal exhibits a small full width at half maximum, which is reflected in the mosaicity. The crystals grown under a magnetic field exhibited a slightly better resolution and lower mosaicity in the X-ray diffraction experiments (Figures 7 and 10, and Supplementary Materials: Table S1). Exposure to a magnetic field yields a higher resolution compared with crystals unexposed to a magnetic field. Higher resolution corresponds to diffraction at higher scattering angles. At a high angle, the scattering power of light atoms is dramatically reduced, which attenuates the I/σ(I) values. However, in the case of neutron diffraction, the scattering length is independent of the scattering angle. Therefore, neutron diffraction experiments are desirable for crystals exposed to a magnetic field.
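To make the resolution-angle relation concrete, Bragg's law (λ = 2d sin θ) gives the scattering angle at which a given resolution shell diffracts. The sketch below (Python; the 1.0 Å wavelength is an assumed typical synchrotron value, not stated in the paper) illustrates the steep increase in angle at high resolution.

```python
import math

WAVELENGTH = 1.0  # assumed X-ray wavelength in Angstroms (not given in the paper)

def two_theta_deg(d_spacing):
    """Scattering angle 2*theta for resolution d via Bragg's law: lambda = 2*d*sin(theta)."""
    return 2 * math.degrees(math.asin(WAVELENGTH / (2 * d_spacing)))

for d in (3.0, 2.0, 1.5, 1.2):  # resolution shells in Angstroms
    print(f"d = {d:.1f} A  ->  2theta = {two_theta_deg(d):5.1f} deg")
```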
The HEWL and ENR crystals grown under a magnetic field showed different effects. Tetragonal crystals derived from HEWL appeared to show more dramatic effects in a magnetic field than ENR, whereas ENR showed significant deviations in the c-axis parameter. Further crystallographic investigations are needed to explain the phenomenon, especially regarding the magnetic field orientation.
The crystals grown under a magnetic field show a high quality and are large enough for neutron diffraction experiments. The diffraction data of crystals obtained with the old device were measured using a neutron source, and a neutron diffraction experiment was planned to determine the effect of the design modification on crystal quality. Unfortunately, however, our research reactor at the Korea Atomic Energy Research Institute has been shut down, and the other facilities are not conducive to such experiments. Based on the measurements involving protein crystals (Figure 11) using our old device (Figure 1), and also based on the comparison using X-ray techniques in this study, it is tempting to expect that the crystals obtained using the modified device will deliver substantially improved results.

Figure 11. HEWL neutron diffraction pattern determined using previous devices (in BIX-4, Japan). Resolution limit, 100−3.0 Å; mosaicity, 1.0°.
Conclusions
The magnetic field conditions affect the morphology, quality, and growth rate of protein crystals. The nucleation rate was increased at a magnetic field strength of 200 mT, which ensured the growth of a relatively large single crystal within a short time.
The mechanisms presumably involve an increase in viscosity near the growing crystal, a reduction in natural convection inside the crystallization solution, and a decrease in the diffusion coefficient of the protein solution [12,47]. Experiments under a strong magnetic field have yet to be attempted; however, a strength of 200 mT appears to be adequate for enhancing nucleation. Additionally, our device is small and portable, and facilitates testing multiple samples simultaneously.
|
2020-10-28T18:33:54.321Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "dcc1137a3a3eafacf8f3b9423a3bf8e185da5a69",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4352/10/9/821/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e8cfbb7c5f20e9bc052652cd02a3a102713c8eba",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
32163170
|
pes2o/s2orc
|
v3-fos-license
|
Controlled Propagation of Concept Annotations in Textual Corpora
In this paper, we present the annotation propagation tool we designed to be used in conjunction with the BRAT rapid annotation tool. We designed two experiments to annotate a corpus of 60 files, first without using our tool, and second using our propagation tool. We evaluated the annotation time and the quality of annotations. We show that using the annotation propagation tool reduces the time spent to annotate the corpus by 31.7%, with a better quality of results.
Corpus annotation
Corpus annotation is a crucial step in developing suitable natural language processing (NLP) systems, carrying out evaluations of system outputs, or training statistical models using supervised machine-learning approaches (e.g., conditional random fields (Lafferty et al., 2001) for sequence labelling). Nevertheless, corpus annotation is a really time-consuming task. A useful way to reduce the time spent to annotate corpora consists in providing the annotators with a pre-annotated version of the corpus. Automatic pre-annotations can be made through a lexicon mapping (i.e., all existing entities found in a lexicon are automatically pre-annotated) or a system designed to annotate entities, using either a rule-based system or a machine-learning approach. The choice of the method used to pre-annotate corpora depends on the type of entity to process: regular entities such as numeric values can be formalized using rules, while more complex entities or contextual annotations would be processed using statistical approaches. Annotators working on automatic pre-annotations have to check those annotations, in order to remove non-relevant annotations and to complete missing ones. In a previous study, we demonstrated that using automatic pre-annotation based on a CRF system both reduces the time spent by humans (annotators spent about 10% less time) and improves the quality of the final annotations (we computed a gain of 6 points in κ inter-annotator agreement), in comparison with an annotation task performed on similar raw corpora (Grouin and Névéol, 2014). Another solution to reduce annotation time consists in selecting documents from a corpus, parts of documents, or parts of text (e.g., a few sentences) using a sampling process (Patton and Potok, 2006; Kantner et al., 2011), in order to annotate only a few samples of the corpora. Those samples are considered by the corpus manager as representative enough of the phenomena to annotate and study.
Annotation propagation
The basic principle of annotation propagation relies on existing annotations that will be associated with new documents, in which parts (either a single word or a whole span, depending on the type of annotation to be made) are found to be similar to previously annotated documents.
Two main objectives are expected when using annotation propagation systems: first, a reduction of the time spent by humans to annotate corpora, and second, an improvement of the final quality of the annotations made. As a consequence, human annotators can focus on unseen annotations. Existing systems designed to propagate and enrich human annotations (either semantic annotations or POS tagging) use external resources such as a deep parser (Swift et al., 2004), meta-data and ontologies (Zonta Pastorello Jr et al., 2010), as well as transformation rules and graphs (Lansdall-Welfare et al., 2012). Existing annotations can be proposed to the user through an interactive system, as done by Voutilainen (2012) for a POS tagging task. Some existing systems take advantage of several sources of distinct types to enrich and propagate annotations made by humans. Chevallet et al. (2006) designed an annotation propagation system for medical image annotation based on visual similarity. Their approach relies on concept extraction from texts in order to duplicate those concepts for images which share visual similarity. Budnik et al. (2014) proposed a multimodal system (speaker diarization and face clustering) in order to manually annotate persons in TV shows. In this paper, we present the tool we designed to automatically propagate annotations on textual corpora and the experiments we made to evaluate such propagations. Our experiments rely on the BRAT Rapid Annotation Tool, a system designed by Stenetorp et al. (2012) to annotate text corpora through a browser. Although a few functionalities are provided with this tool (e.g., keyboard shortcuts), no annotation propagation plugin exists. Since sophisticated annotation propagation tools already exist, our motivation was to produce a tool with basic functionalities so as to rapidly propagate relatively unambiguous annotations (e.g., named entities rather than POS tags). The aim of this tool is to improve both annotation quality and processing time. We originally designed this tool to manually annotate a corpus of 13,500 clinical records so as to produce a fully de-identified corpus. In this paper, we applied this tool on a corpus composed of messages from a pharmacovigilance forum. We draw similar conclusions on both corpora.
Presentation
Our corpus is composed of 60 files corresponding to messages written in French and posted on the meamedica.fr website. This website allows the users to report adverse drug reactions they experienced.
Annotations
Guidelines The annotation work we focus on relies on 16 categories of concepts covering medical treatments, clinical information, and additional information. Those annotations are then used to produce systems to automatically identify drug names and adverse drug reactions as reported by patients in messages from health forums (Morlane-Hondère et al., 2016). We used the following categories of concepts in our annotation task, following the guidelines we defined (Grouin, 2015), mainly based on the semantic types from the UMLS (Lindberg et al., 1993) and completed by categories useful for an adverse drug reaction identification task: Chemical or drug; Dosage; Concentration; Mode of administration; Anatomical part; Gene or protein; Biological process or function; Disorders; Sign or symptom; Medical procedure; Date; Duration; Frequency; Time; Weight; Job. Despite this annotation framework, our annotation propagation tool can be used for every annotation task on text data using the BRAT stand-off annotation schema. Statistics Table 1 presents the number of annotations for each category in our corpus, for a total of 651 annotations. We observe that Sign or symptom and Anatomical part constitute the two main categories of information to annotate (i.e., 53.3% of all annotations). Entities from those categories are found in all documents of the corpus, since adverse drug reactions mainly involve a problem (Sign or symptom) and a location in the body (Anatomical part). As an example, the sentence "I'm suffering from back pain" combines the anatomical part "back" with the symptom "pain". Additional information can be found, such as an intensity marker (e.g., severe back pain) or a frequency marker (e.g., chronic back pain). Longer annotations (more than 2 tokens) only concern the category Duration, composed of a temporal marker, a number, and a unit: pendant un peu plus d'1 mois ("for slightly more than one month"), cela fait déjà 7 ans ("it has been almost 7 years"), depuis plus d'un an ("for over a year now"), etc.
Annotation tool
The BRAT rapid annotation tool relies on stand-off annotations: each text file is associated with its annotation file. Annotation files are composed of three columns separated by a tabulation: (i) annotation ID, (ii) entity type, beginning and ending offset of characters for the annotated phrase, and (iii) the annotated phrase.
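For concreteness, a minimal reader for this stand-off format could look as follows (a Python sketch; the helper name read_brat_ann and the sample line are ours, and it assumes contiguous spans):

```python
def read_brat_ann(path):
    """Parse a BRAT .ann file into (id, category, start, end, phrase) tuples."""
    annotations = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.startswith("T"):  # keep text-bound annotations only
                continue
            ann_id, type_span, phrase = line.rstrip("\n").split("\t")
            # Assumes contiguous spans ("Type start end"); BRAT discontinuous
            # spans use ';'-separated fragments and are not handled here.
            category, start, end = type_span.split(" ")
            annotations.append((ann_id, category, int(start), int(end), phrase))
    return annotations

# Example line in a .ann file (illustrative):
# T1\tSignOrSymptom 23 27\tpain
```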
Propagation annotation tool
The tool we designed to propagate annotations of concepts is a Perl script which relies on two main steps: 1. First, all existing annotations are saved in a hash table, in order to keep the correspondence between entities and categories; 2. Second, for each remaining file to be annotated: • existing annotations for this file are saved (in case of automatic pre-annotations done on the whole corpus), • annotations saved from the already annotated files (first step) are searched within the file; beginning and ending character offsets are then computed for each occurrence found in the file, • and a new stand-off annotation file is produced, combining existing annotations (pre-annotation step) with new annotations (propagation step).
The user can configure two features in this tool: • The minimum size for annotations to be saved, in terms of number of characters. We defined a minimum size of 3 characters per annotation (i.e., all existing annotations of tokens composed of at least three characters will be propagated). This feature depends on the type of annotations to be propagated (propagating annotations of tokens composed of only one character would result in annotating every similar character found in the corpus); • The starting file from which annotation propagation will be performed. This feature allows the user not to propagate annotations on files already annotated, reducing the risk of over-annotation.
Additionally, the user can define whether propagations occur on full tokens (existing tokenization is kept), or whether propagations can be found within portions of text (embedded annotations, useful for inconsistent tokenization). We did not define any confidence score to determine whether an annotation must be propagated or not. We considered that only annotations which are not ambiguous can be propagated. Since we did not want to add noisy annotations, the user has to process ambiguous annotations (e.g., annotations depending on the context). In its current version, contrary to active learning approaches, our tool relies on iterative actions from the user (i.e., the user decides when to perform the propagation annotation process: either at the end of the annotation of each file, or at the end of a set of files). Figure 1 presents the general framework of the propagation annotation tool; a minimal re-sketch of its logic is given below.
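The original tool is a Perl script; the following Python sketch reproduces the same two-step logic (it reuses read_brat_ann from the sketch above; the function names, the regex-based matching, and the flag defaults are ours, not the tool's):

```python
import re

def collect_annotations(ann_files, min_len=3):
    """Step 1: save existing annotations as a phrase -> category table."""
    table = {}
    for path in ann_files:
        for _id, category, _s, _e, phrase in read_brat_ann(path):
            if len(phrase) >= min_len:  # skip very short, noise-prone phrases
                table[phrase] = category
    return table

def propagate(text, table, existing, full_tokens=True):
    """Step 2: search saved phrases in a new file and emit stand-off tuples."""
    spans = {(s, e) for _i, _c, s, e, _p in existing}  # keep pre-annotations
    new = list(existing)
    idx = len(existing) + 1
    for phrase, category in table.items():
        pattern = re.escape(phrase)
        if full_tokens:  # respect existing tokenization (no embedded matches)
            pattern = r"(?<!\w)" + pattern + r"(?!\w)"
        for m in re.finditer(pattern, text):
            if (m.start(), m.end()) not in spans:
                new.append((f"T{idx}", category, m.start(), m.end(), phrase))
                spans.add((m.start(), m.end()))
                idx += 1
    return new

def write_brat_ann(path, annotations):
    """Write stand-off annotations back in the BRAT three-column format."""
    with open(path, "w", encoding="utf-8") as f:
        for ann_id, category, start, end, phrase in annotations:
            f.write(f"{ann_id}\t{category} {start} {end}\t{phrase}\n")
```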
Design of experiments
We defined two situations of annotation performed on the corpus of 60 files (cf. section 2.1.1.) for a concept annotation task (cf. section 2.1.2.). Since no pre-annotation step was performed, human annotations were done on a raw version of the corpus: • First, human annotations were done on the whole corpus without any annotation propagation step; • Second, human annotations were done on the whole corpus using the annotation propagation tool. In this configuration, each time we completed a file, we launched the tool on the remaining files in order to optimize the human annotation work.
Both annotation situations rely on the same set of files. Nevertheless, annotations done in the first situation were not reused in the second situation. The same human annotator annotated files from the two situations during two distinct stages.
Results
Evolution of number of annotations Figure 2 presents the evolution of the total number of annotations along the human annotation process, depending on whether the annotation propagation tool was used (green line) or not (red line). Table 3 presents the time spent to annotate the corpus, the average number of files processed in one minute and the average number of annotations done in one minute, whether the propagation annotation tool was used or not. Annotation quality We manually built a gold standard by revising one set of annotations. We then computed precision, recall and F-measure for both situations of annotation, based on this gold standard, using the BRATeval evaluation script. Table 4 presents the evaluation of the quality of annotations done by the human annotator, whether the annotation propagation tool was used or not. Black font pinpoints the best results. This evaluation allows us to determine the impact of annotation propagation on an annotation task.
Category        No propagation        Propagation
                P     R     F         P     R     F
Anatomy         0.979 0.979 0.979     1.000 1.000 1.000
Chemical        0.987 0.961 0.973     1.000 1.000 1.000
Concentration   1.000 1.000 1.000     1.000 1.000 1.000

Table 4 - Evaluation of annotation quality depending on whether the annotation propagation tool was used (P = Precision, R = Recall, F = F-measure). Black font pinpoints the best results.
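The BRATeval scores above follow the usual definitions; a minimal computation from span counts looks like this (Python; the counts are illustrative, not taken from the study):

```python
def prf(tp, fp, fn):
    """Precision, recall and F-measure from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Illustrative counts: 47 correct spans, 1 spurious, 1 missed.
p, r, f = prf(tp=47, fp=1, fn=1)
print(f"P={p:.3f} R={r:.3f} F={f:.3f}")  # P=0.979 R=0.979 F=0.979
```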
Discussion
Evolution of number of annotations As presented in Figure 2, the human annotation without using the propagation tool follows a diagonal (red line). This observation shows a regular number of annotations along the annotation process. As expected, the use of the propagation tool rapidly increases the number of annotations (green line) from the first files. We also produced more annotations when using the propagation annotation tool (a total of 640 annotations) than without it (620 annotations).

Annotation time As shown in Table 3, our propagation annotation tool allowed us to annotate the corpus while reducing annotation time by about 31.7% (i.e., a gain of 13 minutes) in comparison with the same annotation task without the propagation tool. Moreover, according to Figure 3, the propagation annotation tool allows the user to keep a consistent annotation speed along the annotation process (green line), whereas without such a tool, the human annotator spends more time and loses time on a few files (either because of the high number of annotations to be done on those files, or because of fatigue and weariness while annotating the corpus).
Annotation quality According to Table 4, we achieved a better annotation quality using our propagation annotation tool: precision increases by 6.9 points, recall by 9.5 points, and F-measure by 8.3 points. Most categories benefit from this propagation processing. Nevertheless, we observed that three categories obtained lower recall values when using the propagation annotation tool: Dosage (-3.1 pts), Duration (-5.6 pts) and Mode (-5.2 pts). Those decreasing values are due to missing annotations (false negatives), which also imply a lower number of true positives.
This observation highlights the fact that the human annotator was overconfident in the propagation annotation tool and did not pay attention to new annotations that had not been observed in previous files, making it impossible to propagate them: depuis presque un an ("for almost a year"). Missing annotations also concern parts where propagations were not made due to the configuration of the tool (only annotations composed of at least three characters are propagated, see section 2.2.2.): in un seul comprimé ("a single tablet"), the dosage un was not propagated, and the human annotator thus missed the mode of administration comprimé ("tablet") since there was no existing annotation in its context. Nevertheless, since those two missing annotations occur in the same file, one cannot rule out a loss of attention of the human annotator when processing this file.
Comparison In comparison with existing annotation propagation tools, our system does not rely on external resources (e.g., ontologies, lexicons, etc.) as done by Swift et al. (2004), Zonta Pastorello Jr et al. (2010) or Lansdall-Welfare et al. (2012). Our tool only focuses on existing annotations done on previous files. This ensures both annotation consistency and annotation quality, since no out-of-domain annotations can be made.
Moreover, our tool automatically propagates annotations without any interactive system such as the one used by Voutilainen (2012). This allows a faster propagation annotation process. Nevertheless, this type of propagation is not suitable for ambiguous annotations such as part-of-speech annotations, where the context must be taken into account in order to choose the right category. In addition, this kind of automatic annotation propagation, based on identical token pairing without any interaction from the user, is not appropriate for processing overlapping annotations such as a generic entity and its more specific version (e.g., "arm" vs. "left arm"). In such a case, both generic and specific versions will be propagated, forcing the user to remove the generic annotation inside each occurrence of the specific version found in the corpus. A minimal sketch of this exact-match propagation is given below.
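To make the mechanism concrete, the following is a minimal sketch of this kind of exact-match propagation over BRAT standoff annotations. It is an illustration under our own assumptions, not the authors' implementation: the function names and file names are hypothetical, and only the three-character threshold is taken from the configuration described above.

```python
import re

MIN_LENGTH = 3  # only annotations of at least three characters are propagated

def read_ann(path):
    """Read BRAT standoff entities: lines like 'T1<TAB>Category start end<TAB>text'."""
    entities = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("T"):
                tid, meta, text = line.rstrip("\n").split("\t")
                category, start, end = meta.split(" ")
                entities.append((category, int(start), int(end), text))
    return entities

def propagate(known, raw_text):
    """Project previously seen (text, category) pairs onto a new raw file."""
    new_entities = []
    for surface, category in known:
        if len(surface) < MIN_LENGTH:
            continue  # too short: ambiguity risk, skipped by the tool (cf. section 2.2.2)
        for match in re.finditer(re.escape(surface), raw_text):
            new_entities.append((category, match.start(), match.end(), surface))
    return new_entities

# Collect annotations from already processed files, then annotate the next one.
# (A real run would accumulate over all previous files and avoid clashing T ids.)
known = {(text, cat) for cat, _, _, text in read_ann("file01.ann")}
with open("file02.txt", encoding="utf-8") as f:
    proposals = propagate(known, f.read())
with open("file02.ann", "a", encoding="utf-8") as out:
    for i, (cat, start, end, text) in enumerate(proposals, start=1):
        out.write(f"T{i}\t{cat} {start} {end}\t{text}\n")
```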
Error propagation

Finally, an automatic annotation propagation process can propagate errors (e.g., a correct span associated with a wrong category, or an annotation made where none was needed). To address this issue, we designed a second script that propagates the removal of annotations. This allows the user to rapidly correct errors made when propagating existing annotations. If ambiguous annotations must be processed, we consider that a more sophisticated annotation propagation tool must be used.
Conclusions
In this paper, we presented the tool we designed to propagate existing annotations produced through the BRAT rapid annotation tool. Our experiments revealed that the human annotator spent 31.7% less time when using the annotation propagation tool. Nevertheless, the quality of annotations decreased for a few categories, due either to overconfidence in the tool or to a loss of concentration of the human annotator. This tool can be used either to propagate annotations on the remaining files to be annotated, or in addition to pre-annotation systems, in order to manage annotations that are hard to process with rules or statistical approaches (namely, longer annotations such as addresses or hospital names). Such categories can be hard to process using rules or statistical models because of the length of the annotations, the difficulty of identifying their correct boundaries, or because elements from these categories vary too much across the corpus, making it difficult to capture a robust representation of them. As future work, we plan to make this propagation annotation process more dynamic through a better integration of our tool into the BRAT annotation tool, which would make this annotation propagation process closer to active learning approaches.
Searches for periodic neutrino emission from binary systems with 22 and 40 strings of IceCube
In this paper we present the results of searches for periodic neutrino emission from a catalog of binary systems. Such modulation, observed in the photon flux, would be caused by the geometry of these systems. In the analysis, the period is fixed by these photon observations, while the phase and duration of the neutrino emission are treated as free parameters to be fit with the data. If the emission occurs during ~20% or less of the total period, this analysis achieves better sensitivity than a time-integrated analysis. We use the IceCube data taken from May 31, 2007 to April 5, 2008 with its 22-string configuration, and from April 5, 2008 to May 20, 2009 with its 40-string configuration. No evidence for neutrino emission is found, with the strongest excess occurring for Cygnus X-3 at 2.1 sigma significance after accounting for trials. Neutrino flux upper limits for both periodic and time-integrated emission are provided.
accelerators with jets capable of accelerating cosmic rays, and can have a role as PeVatron accelerators of galactic cosmic rays. The observation of neutrino emission would be clear evidence for the presence of a hadronic component in the outflow of these sources. Four such binary systems, PSR B1259-63, LS 5039, HESS J0632+057, and LS I +61° 303, have been identified as persistent TeV γ-ray emitters (Aharonian et al. 2005a,b, 2007; Albert et al. 2006), while Cygnus X-1 is a possible candidate. The binary pulsar system PSR B1259-63 was also recently discovered to have periodic emission in GeV photons (Abdo et al. 2011). PSR B1259-63 is formed by a B2Ve star orbited by a young 48 ms pulsar (Tavani & Arons 1997), both exhibiting a strong wind. As observed in Aharonian et al. (2005a), its VHE emission could come from inverse Compton scattering on shock-accelerated leptons from the interaction zone between the pulsar and the wind from the star, though a hadronic interpretation cannot be excluded. On the other hand, the driving factor of the VHE emission in Cygnus X-1, most probably a black hole orbiting a super-giant O9.7 star (Ziółkowski 2005), could be the interaction of the black hole with the strong stellar wind of the star. However, there has been no other evidence for steady VHE emission from Cygnus X-1, though a VHE flare of about 1 h was observed (Albert et al. 2007). HESS J0632+057 was recently seen to have a periodic modulation in X-rays by Swift, with heightened TeV emission coincident with the X-ray maximum (Bongiorno et al. 2011).
LS I +61° 303 remains a mystery even after four decades of observations over a wide range of wavelengths, from radio (Gregory 2002), soft and hard X-rays (Li et al. 2011; Zhang et al. 2010; Sidoli et al. 2006; Harrison et al. 2000), to GeV (Abdo et al. 2009a) and TeV photons (Albert et al. 2009; Acciari et al. 2009). The best measurement of its period, P1 = 26.4960 ± 0.0028 d, comes from radio data (Gregory (2002) and references therein), with the orbital zero phase taken by convention at JD 2443366.775 (Gregory & Taylor 1978), but the same modulation has also been detected at other wavelengths, notably in the GeV/TeV band (Abdo et al. 2009a; Albert et al. 2009). Together with LS 5039, discovered in the TeV γ band (Aharonian et al. 2005a), LS I +61° 303 lacks strong evidence supporting the black hole or neutron star nature of the compact object. This prevents a clear classification of these systems as microquasars or pulsar systems (Paredes (2011); Paredes et al. (2000); and references therein). A discussion of the different theoretical models for these systems is presented in Bosch-Ramon & Khangulyan (2009). As with all of the binary systems above, the detection of multi-TeV neutrinos would complement the VHE photon observations and unequivocally prove the existence of hadronic acceleration.
The analysis in this paper has been performed using a likelihood method in which the underlying hypothesis is that the neutrino emission is periodically modulated due to the geometry of the X-ray binary systems. Neutrinos would be produced by a beam of hadrons accelerated by the compact object and interacting with the matter of the massive star and its atmosphere. The periodic modulation would be connected to the orbital motion of the system. This modulation is observed in photons from radio to X-ray, and in the VHE band. The analysis is designed to incorporate only minimal assumptions regarding the neutrino emission. The period is fixed to that observed in an electromagnetic band, while the phase and duration of neutrino emission are free parameters and not constrained to match the photon emission. This is to account for the fact that photons can be absorbed when the accelerator is behind the large star of the binary system while neutrino production can be enhanced if enough matter is crossed. The neutrino energy spectrum is fit with a simple power law with the index also a free parameter.
The paper is organized as follows. In Sec. 2 we describe the IceCube observatory and the data taken with two detector configurations (Achterberg et al. 2006). The analysis method is described in Sec. 3, and the expected sensitivity and discovery potential are shown. To avoid bias, the search has been performed in a blind fashion by defining cuts before looking at the true times (equivalently, the right ascension values) of the final event sample. In Sec. 4 we present the results of the search performed on a catalog of seven galactic binary stars in the Northern sky. The selected objects are considered as microquasars in Distefano et al. (2002), where their expected emission of neutrinos is calculated. While that paper is not specifically about periodic emission from these sources, the objects considered there are nonetheless promising neutrino emitters for which radio observations allow identification of jet parameters such as the Lorentz factor and the luminosity of the jet. All of the sources considered are located in the Northern Hemisphere, where IceCube is most sensitive (Abbasi et al. 2011). During the austral summer the atmosphere above the South Pole gets warmer and thinner, and the probability that pions generated in cosmic-ray air showers decay rather than interact increases (Tilav et al. 2009), causing the trigger rate, which is dominated by downward-going atmospheric muons, to vary by about ±10%. A series of event selections and higher-level event reconstructions are applied to remove these downward-going events, while retaining upward-going tracks from muons induced by neutrinos which crossed the Earth. At the final level of analysis, the remaining background of upward-going atmospheric neutrinos comes from many different directions on the other side of the Earth. Temperature effects average over a wide terrestrial region, and the seasonal modulation is only a few percent. This variation has a period of one year, much longer than any period considered in this search.
The IceCube data
The searches presented here used two data samples. The data taken with the 22-string configuration have a livetime of 275.7 days, 89% of the operation period from May 31, 2007 to April 5, 2008, or Modified Julian Date (MJD) 54251-54561. The sample is described in Abbasi et al. (2009a), and consists of 5114 events, which are mostly neutrino-induced upward-going muons with declinations from -5° to +85°. The deadtime is mainly due to test and calibration runs during and after the construction season. The livetime of the 40-string data used in the analysis is 375.7 d, which is 92% of the nominal operation period from April 5, 2008 to May 20, 2009. The handling and processing of the data to obtain the final neutrino candidate event sample are fully described in Abbasi et al. (2011). The final 40-string sample contains 36,900 atmospheric neutrino and muon events distributed over the whole sky, of which 14,121 events are upward-going (the rest are downward-going events from the Southern Sky, used for neutrino source searches strictly above PeV energies). The median angular resolution of the final sample for energies greater than 10 TeV is < 1°. The energy of each event is estimated using the density of photons along the muon track due to stochastic energy losses from pair production, bremsstrahlung and photonuclear interactions, which dominate over ionization losses for muons above 1 TeV. The energy resolution is about 0.3 in log10 of the muon energy in the detector between 10 TeV and 10^5 TeV, i.e., about a factor of 2 in energy. The estimated muon energy is a lower bound on the primary neutrino energy, since for interactions that occur outside the detector the muon loses energy over an unknown distance before reaching the detector. (Energy distributions used internally within the analysis therefore refer to the observable muon energies.) The muon neutrino flux upper limits at 90% CL for time-integrated searches (depending on declination) are between E^2 dN/dE ~ 3-20 × 10^-12 TeV cm^-2 s^-1 in the northern sky, where the sources considered in this paper are located.
The 22 and 40-string data samples used in this paper were also used to look for bursting neutrino sources in Abbasi et al. (2012), where the stability of the data taking is discussed in detail. Azimuthal geometry effects of the 22 and 40-string IceCube detectors (due to the fact that they are more elongated in one direction than in others) and the rotation of the Earth interfere constructively for source periods that match multiples of half a sidereal day, which is not the case for any of the source periods tested.
The limits in this paper were produced assuming a flux of only muon neutrinos and antineutrinos at the Earth, with simulated energies from 10^8 to 10^19 eV. For standard neutrino oscillations over astronomical distances (Athar et al. 2000), equal fluxes of all neutrino flavors at the Earth are expected from a source producing neutrinos via pion decay with a flavor ratio of νe : νμ : ντ = 1 : 2 : 0. For the assumption of equal fluxes of muon and tau neutrinos at the Earth, the resulting upper limits on the sum of both fluxes are about 1.7 times higher than if only muon neutrinos are considered (Abbasi et al. 2011). This is better than the factor of two expected from oscillations alone if no tau neutrinos were detectable, owing to the tau decay channel into muons with a branching ratio of 17.7%. In addition to this, tau leptons with energy greater than several PeV may travel far enough to be reconstructed as tracks in IceCube before decaying. For an E^-2 neutrino spectrum, the contribution of the detectable tau neutrino flux is 10% for sources at the horizon, rising to 15% for sources in the Northern hemisphere.
The main systematic uncertainties on the flux upper limits come from photon propagation in ice, absolute DOM efficiency, and uncertainties in the Earth's density profile and muon energy loss.
For an E^-2 spectrum, the estimated total uncertainty is about 16% (Abbasi et al. 2011). These uncertainties are included in the upper limit calculations following the method of Conrad et al. (2003), with the modification described in Hill (2003).
Method
The likelihood method used in this analysis was described in full detail and demonstrated in Braun et al. (2008, 2010), and applied to the 40-string data in Abbasi et al. (2012). In the likelihood ratio method, the data are modeled as a combination of signal and background populations. The probability density functions (PDFs) for signal and background consist of three terms: a space term, an energy term and a time term. The first two are implemented in the same way as in Abbasi et al. (2011). For signal, the space term characterizes the clustering of event directions around the hypothesized source location (effectively, the point spread function for the reconstructed muon, since the interaction angle between the incoming neutrino and outgoing muon is subdominant at the energies of these data samples). For background, the space term is simply estimated by time-scrambling of the real data. The energy term for background is similarly a PDF built from the energy estimates of events in the real data (selected from a declination band similar to the declination of the source being searched for). For signal, distinct energy PDFs are constructed for simulated events arising from a range of neutrino source spectra from E^-1 to E^-4. The chief purpose of the energy term in this search is not to determine the spectrum of the source (if one were detected). Rather, it is to enhance the detectability of a source if its spectrum is relatively hard (e.g. E^-2) by leveraging the difference in the energy distribution of the signal events compared to nearby background events, which are primarily atmospheric neutrinos with a soft (~E^-3.7) spectrum.
The third term in the PDF incorporates timing information. For signal, a periodic emission with a Gaussian time profile is assumed. The period is fixed to that determined by photon observations, while the phase and duration of the neutrino emission are left as free parameters. A Gaussian shape is used for the profile to provide a smooth function with the fewest assumptions about the exact time profile of the neutrino emission. The time PDF for the ith event can thus be expressed as

S_i^time = 1/(√(2π) σ_T) exp(−(ϕ_i − ϕ_0)² / (2σ_T²)),

where σ_T is the width of the Gaussian, ϕ_i is the phase of the event and ϕ_0 is the phase of the peak of the neutrino emission. The fit parameters are σ_T and ϕ_0. For background, the time term is a flat function, because in the absence of detector biases the background events are randomly distributed in time.
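As an illustration of how such a phase-folded time PDF can be evaluated, here is a minimal sketch. It is our own illustration, not IceCube code: the wrap-around handling (summing the nearest periodic images of the Gaussian) is an assumption about how one would treat events near phase 0 or 1, and the function names, variable names, and event times are ours.

```python
import numpy as np

def phase(t_mjd, t0_mjd, period_days):
    """Fold event times into orbital phase in [0, 1)."""
    return ((t_mjd - t0_mjd) / period_days) % 1.0

def time_pdf(phi, phi0, sigma_t):
    """Periodic Gaussian time PDF: sum over neighboring periodic images
    so that the density is continuous across the phase 0/1 boundary."""
    pdf = np.zeros_like(phi, dtype=float)
    for k in (-1, 0, 1):  # nearest images are enough for sigma_t << 1
        pdf += np.exp(-((phi - phi0 + k) ** 2) / (2 * sigma_t ** 2))
    return pdf / (np.sqrt(2 * np.pi) * sigma_t)

# Example: the LS I +61 303 ephemeris quoted in the text.
period, t0 = 26.4960, 2443366.775 - 2400000.5  # period [d], T0 converted to MJD
events = np.array([54300.1, 54310.5, 54326.9])  # hypothetical event times [MJD]
print(time_pdf(phase(events, t0, period), phi0=0.82, sigma_t=0.02))
```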
For each candidate source, the likelihood ratio analysis finds best-fit values for four parameters: the number of signal events, the spectral index of the signal, the peak phase of the signal and its duration. An initial estimate of the significance is made by assuming the likelihood ratio follows a χ² distribution and converting to a (pre-trial) p-value. To ensure a robust estimate of the final significance, however, this assumption is not used, and a correction for the number of trials is also included. For the final significance, the analysis is performed on time-scrambled data and the same catalog of sources. The final post-trial p-value is given by the fraction of analyses which yield a smaller (pre-trial) p-value for any of the sources in the catalog.
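The following sketch shows the shape of such a likelihood fit, under our own simplifying assumptions: we keep only the time term (dropping the space and energy terms), treat the background as flat in phase, and fit the number of signal events together with the phase and duration of the Gaussian, reusing the time_pdf helper defined above. The function names and the choice of scipy's optimizer are ours, not the analysis code.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, phis, n_total):
    """-log L for a signal-plus-background mixture in orbital phase.
    Background is flat on [0, 1); signal is the periodic Gaussian."""
    n_s, phi0, sigma_t = params
    signal = time_pdf(phis, phi0 % 1.0, sigma_t)
    background = 1.0
    mix = (n_s / n_total) * signal + (1 - n_s / n_total) * background
    return -np.sum(np.log(mix))

def fit_source(phis):
    n = len(phis)
    best = minimize(
        neg_log_likelihood, x0=[1.0, 0.5, 0.1], args=(phis, n),
        bounds=[(0, n), (0, 1), (0.02, 0.5)],  # sigma_t floor of 0.02 as in the text
        method="L-BFGS-B",
    )
    # Test statistic: 2 x log-likelihood ratio against the background-only model.
    ts = 2 * (neg_log_likelihood([0.0, 0.5, 0.1], phis, n) - best.fun)
    return best.x, ts
```

The post-trial significance would then come from repeating fit_source on many time-scrambled datasets, as described in the text.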
We calculate the sensitivity and median upper limit at 90% confidence level using the method in Feldman & Cousins (1998). The discovery potential is the flux required to achieve a p-value less than 2.87 × 10^-7 (5σ of the upper tail of a one-sided Gaussian) in 50% of trials. It should be noted that the threshold significance to claim a discovery in IceCube is set to 5σ. Fig. 1 shows the sensitivity and the discovery potential for the analysis, together with the corresponding values from the time-integrated search (Abbasi et al. 2009a). Compared to the time-integrated analysis, searching for periodicity in neutrino emission results in a better discovery potential if the duration of the emission σ_T is less than about 20% of the total period (see Fig. 1). As the time-dependent search adds two additional degrees of freedom to the analysis, the discovery potential is, on the other hand, roughly 10-15% better using the time-integrated search if neutrinos are actually emitted at a steady rate or over a large fraction of the period. For both the 22-string and 40-string analyses, if the emission has a σ_T of 1/50, the method requires about half as many events for discovery as the time-integrated search.
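The quoted discovery threshold is simply the one-sided Gaussian upper-tail probability at 5σ, which can be verified in one line (a worked check, not analysis code):

```python
from scipy.stats import norm

print(f"{norm.sf(5):.2e}")  # upper tail of a one-sided Gaussian at 5 sigma -> 2.87e-07
```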
Results
The seven predefined sources, listed in Table 1, were used for the initial search with the 22-string data from 2007-2008. The most significant outcome in this sample was for the source SS 433, with a pre-trial estimated p-value of 6%. In identical analyses of time-scrambled data, we find at least one of the seven tested sources to be more significant in 35% of the analyses.
The analysis was subsequently performed on the 40-string data from 2008-2009, which provided twice the sensitivity of the previous sample (see Fig. 1). The most significant outcome in this sample was for Cygnus X-3, with a pre-trial estimated p-value of 0.0019. To account for trials and for the fact that the likelihood ratio is not perfectly χ² distributed, the analysis is performed again on time-scrambled data. An equivalent or more significant outcome from any of the sources is found in 1.8% of scrambled samples (expressed in Gaussian standard deviations, a 2.1σ excess), so the result is compatible with random fluctuations of the background. The best-fit peak emission is found at phase ϕ0 = 0.82, with σT = 0.02. The best-fit number of source events is ns = 4.28 and the best-fit spectral index is γs = 3.75. The full results of the analysis on each source (with time-dependent and time-integrated flux upper limits) are given in Table 1. Table 2 compares the 40-string time-integrated limits to the model predictions in Distefano et al. (2002) for each source. The model predicts the neutrino flux based on the radiative luminosity associated with the jet, from radio observations in quiescent states and during flares, the durations of which are specified in Tab. 4 of that paper. The table shows limits for both the persistent and time-dependent cases, for a time window similar to the observed flare but not coincident with it (since IceCube was not active at the time of the radio observations noted in the paper). For the persistent case of SS 433 the model predicts more than 100 events during the 40-string data taking period, a flux level which is excluded by previous searches with the AMANDA detector (Achterberg et al. 2007). Distefano et al. (2002) noted that for the specific case of SS 433, the model may be biased because the source is surrounded by the diffuse nebula W50, which can affect the estimate of the radio emission used in the model for this source. For Cygnus X-3, the IceCube limits are near the prediction with the 40-string data.
The main parameters on which the neutrino flux depends in this model are: the fraction of jet kinetic energy converted to internal energy of electrons and magnetic field, η_e; the fraction of the jet luminosity carried by accelerated protons, η_p; and the fraction of proton energy in pions, f_π, which strongly depends on the maximum energy to which protons can be accelerated. We show as an example, for the case of a 3-day burst of Cygnus X-3, how the parameters are constrained by our result. We assume equipartition between the magnetic fields and the electrons and the proton component (η_p = η_e) for setting a constraint of f_π < 0.11. If equipartition does not apply, we assume f_π = f_π,peak as given in Table 2 in Distefano et al. (2002) (for Cygnus X-3, f_π,peak = 0.12) and constrain η_p to be less than 92% of η_e. In deriving these limits we have assumed that the Lorentz factor of the jet is well known from radio measurements, but in many cases there is a large uncertainty on this parameter. Hence, our limits for the parameters of this model may have different implications that we cannot disentangle: protons may not be dominant in the jet, they may lose smaller energies into pion decay than the values considered in Distefano et al. (2002), or the Lorentz factor is lower than the value indicated in Table 1 in that paper.

Table 1: Results of the analysis with the IceCube 40-string data sample from 2008-2009. T_0 is the time of zero phase for the binary systems tested; the references used for the orbital information (e.g., Brocksopp et al. 1998; Webb et al. 2000) are also included. σ_T is the standard deviation of the best-fit Gaussian, as a fraction of the period of the binary system. The value of σ_T is constrained to be larger than 0.02, to prevent the method from isolating a single event. The last columns give the 90% CL upper limits on the normalization of the flux for an E^-2 neutrino spectrum, for the time-dependent and time-integrated hypotheses. The upper limits also incorporate a 16% systematic uncertainty.

Table 2: Comparison with the model predictions of Distefano et al. (2002) for specific sources for the 40-string configuration. The neutrino energy range used to calculate the total number of events is 10^11 − 5 × 10^14 eV, comparable to what was assumed in the model. For non-persistent but flaring sources, the parameters of the model were estimated for flares observed before IceCube construction. Hence the time-dependent sensitivities are calculated averaging over a duration equal to the model flare during 40-string data taking. LS I +61° 303 is modeled as a periodically flaring source in a high state during 26% of the orbit.
Conclusions
The exploration of the GeV and TeV photon sky with the instruments on board the Fermi spacecraft and the ground-based Cherenkov telescopes has heralded the golden age of γ-ray astronomy. The connection to neutrino astronomy is clear: high energy processes which cause the observed VHE emission can be responsible for the observed high energy cosmic rays. This implies hadronic acceleration mechanisms in astrophysical sources which can result in an observable neutrino flux with giant neutrino telescopes like IceCube.
The available photon observations have made it possible to enhance the sensitivity of searches for neutrino fluxes by incorporating assumptions derived from the γ-ray data. One crucial development has been the formulation of time-dependent neutrino flux searches, postulating a connection between the time modulation of the high energy emission and the possible neutrino flux. This assumption has increased the sensitivity of these searches in comparison to their time-averaged counterparts.
This paper presented a search for neutrinos from objects with periodic photon/broadband emission. Seven X-ray binaries in the Northern Hemisphere were selected as candidate sources in analyses of IceCube 22-string and 40-string data. The most significant source in the catalog is Cygnus X-3 with a 1.8% probability after trials (2.1σ excess). Comparing the time-integrated limits for each source to model predictions from Distefano et al. (2002), we show that our limits can constrain the fraction of jet luminosity which is converted into pions and the ratio of jet energy into relativistic leptons versus relativistic hadrons, under some assumptions. For instance, for Cygnus X-3 and equipartition between electrons and protons, the fraction of proton energy in pions is limited to about 11%. All of the results in this paper are compatible with a fluctuation of the background.
Higgs mode in the d-wave superconductor Bi2Sr2CaCu2O8+x driven by an intense terahertz pulse
We investigated the terahertz (THz)-pulse driven nonlinear response in the d-wave cuprate superconductor Bi2Sr2CaCu2O8+x (Bi2212) using a THz pump near-infrared probe scheme in the time domain. We have observed an oscillatory behavior of the optical reflectivity that follows the THz electric field squared and is strongly enhanced below Tc. The corresponding third-order nonlinear effect exhibits both A1g and B1g symmetry components, which are decomposed from polarization-resolved measurements. Comparison with a BCS calculation of the nonlinear susceptibility indicates that the A1g component is associated with the Higgs mode of the d-wave order parameter.
In a superconductor the spontaneous breaking of the U(1) phase symmetry leads to two types of collective excitations of the order parameter. One is the Nambu-Goldstone mode, which is pushed up to the plasma frequency due to the Coulomb interaction, while the other is the amplitude (Higgs) mode of the order parameter in a conventional s-wave superconductor [1,2]. Being chargeless and spinless, the Higgs mode in superconductors only weakly couples to external probes, and has thus remained elusive experimentally until recently. It was initially identified in a Raman measurement in NbSe2, where the charge density wave (CDW) coexists with superconductivity and makes the mode Raman-active via its indirect coupling to the CDW order parameter [3][4][5]. Recently, the Higgs mode has been clearly observed in a more generic situation (without CDW) in the s-wave superconductor NbxTi1-xN (NbN) by ultrafast terahertz (THz) pump-THz probe spectroscopy [6]. The role of the ultrashort THz-pump pulse is to provide a non-adiabatic quench of the order parameter by instantaneously creating a population of unpaired quasiparticles (QPs) around the superconducting (SC) gap energy that triggers Higgs oscillations in the time domain [7]. The Higgs dynamics of the SC order parameter has since been theoretically studied in a variety of contexts, ranging from multiband to unconventional superconductors [8][9][10][11][12]. Specifically, in a d-wave superconductor such as the cuprates, with nodes in the gap function, the Higgs mode was theoretically shown to decay much faster than in the s-wave case because of the presence of low-energy QPs [9]. Besides, in many unconventional superconductors the coexistence with other electronic orders and/or competing interactions can significantly alter the Higgs-mode dynamics, and may lead to a rich assortment of collective modes [8,[13][14][15][16]. Thus, it is imperative to explore how the Higgs mode behaves in unconventional superconductors.
In this context, nonlinear optical effects have recently emerged as an alternative way to probe the Higgs mode [17,18]. This was demonstrated in the conventional s-wave superconductor NbN, where, remarkably, a resonance between the Higgs mode and an intense THz field with a photon energy ω below the SC gap 2Δ was shown to induce large third-harmonic generation (THG) with a resonance condition 2ω = 2Δ [17,18]. It has subsequently been pointed out that, in addition to the Higgs mode, charge density fluctuations (CDF) can also contribute to the THG signal at the same frequency [19].

Within the BCS mean-field approximation, the contribution of CDF to THG should be much larger than the Higgs-mode contribution. More recently, the contributions from the Higgs mode and CDF have been decomposed in NbN via polarization-resolved measurements. The decomposition, theoretically shown to hold even beyond the BCS approximation, has revealed that the Higgs mode actually gives a dominant contribution to the THG, far exceeding the CDF contribution [20]. Physically, the dominance of the Higgs mode in THG can be attributed to dynamical effects in the pairing, such as retardation in the phonon-mediated electron interaction, which are neglected in the BCS approximation [21]. Given this situation for conventional s-wave superconductors, the next question of great interest is what happens in d-wave superconductors.
In this Letter, we report an observation of the third-order nonlinear signal in a d-wave cuprate superconductor Bi2Sr2CaCu2O8+x (Bi2212) from THz pump-optical reflectivity probe measurements over a wide range of carrier doping. The third-order nonlinear signal, akin to a THz Kerr effect, turns out to manifest itself as an oscillatory behavior of the optical reflectivity that follows the squared THz electric field (E-field) with strong enhancement below Tc. The THz Kerr signal is here further decomposed into A1g and B1g symmetry components from polarization-resolved measurements. We then show that a comparison with BCS calculations for both Higgs-mode and CDF contributions to each symmetry component strongly indicates that the observed A1g component arises from the coupling of the d-wave order parameter to the Higgs mode.
We have performed THz pump-optical probe (TPOP) measurements, schematically illustrated in Fig. 1(a), on freshly cleaved optimally-doped (OP90, Tc ≈ 90 K) as well as overdoped (OD78, OD66 and OD52, with Tc ≈ 78, 66, 52 K, respectively) and underdoped (UD74 and UD58, with Tc ≈ 74, 58 K, respectively) Bi2212 single crystals grown with the floating-zone method. The description of the THz pulse generation is given in the Supplemental Material (SM) [22]. For the probe we used a near-infrared pulse at 800 nm, which has been widely used as a sensitive probe for investigating the dynamics of the SC state in the cuprates [28-36]. The measurements were performed as a function of both the pump and probe polarization angles θPump, θProbe as defined in Fig. 1(b). As we shall show, the polarization dependence of the pump-probe signal is crucial in discriminating the Higgs-mode and CDF contributions. The central frequency component of the THz-pump E-field is ~0.6 THz = 2.4 meV, which is much smaller than the anti-nodal SC gap energy, 2Δ0 > 20 meV, in Bi2212 for the present doping levels [37,38]. This THz pulse does not significantly deplete the SC state, as evidenced by the absence of any sign of pump-probe signal saturation up to ~350 kV/cm (see SM Fig. S2).
Let us start with the result for sample OP90. The THz pulse-induced transient reflectivity change ΔR for θPump = θProbe = 0° is shown in Fig. 1(c) at various temperatures. At 30 K, below Tc, an oscillatory behavior of ΔR/R that follows the squared THz-pump E-field |EPump(t)|² is clearly identified. This quasi-instantaneous oscillatory component is similar to the forced oscillation of the order parameter observed in the conventional s-wave superconductor NbN, which also follows |EPump(t)|² [6]. Accordingly, the maximum amplitude of ΔR/R is proportional to the square of the peak THz-pump E-field, as shown in SM Fig. S2. In addition to the oscillatory component, ΔR/R has a positive decaying component that survives up to at least ~10 ps. At 100 K, slightly above Tc, the signal consists of a much weaker oscillatory component and a decaying signal that switches sign after ~4 ps. At 300 K the decaying signal remains positive at all delays.
The amplitude of ΔR/R as a function of θProbe at a fixed delay t = 2 ps at which the oscillatory component is maximum is displayed in Fig. 1(d). The ΔR/R is essentially independent of the angle at 300 K and 100 K. At 30 K below Tc, however, it displays significant dependence on θProbe, which follows a form A + B cos(2θProbe). By contrast the ΔR/R signal at t = 4 ps does not show any polarization dependence at 30 K. Similar results were obtained when the pump polarization angle θPump is varied with a fixed θProbe = 0°, demonstrating the symmetrical roles played by the pump and probe polarization angles in the observed signal (see SM Fig. S3(a)).
The pump E-field and polarization dependences of the oscillatory component are consistent with a THz Kerr effect, in which the strong THz E-field modulates the optical reflectivity in the near-infrared (800 nm) regime [39]. This process is described by a third-order nonlinear susceptibility χ(3)(ω; ω, +Ω, −Ω) [40], where ω and Ω are the frequencies of the near-infrared pulse and the THz-pump pulse, respectively. The THz pulse-induced reflectivity change ΔR/R can be expressed in terms of χ(3) as in Eq. (1) (for details see SM), where Ei denotes the ith component of the THz-pump or probe E-field and ε1 is the real part of the dielectric constant. Assuming tetragonal symmetry for Bi2212, we can analyze the polarization dependence of χ(3)(θPump, θProbe) in terms of the irreducible representations of the D4h point group as

χ(3)(θPump, θProbe) = χ(3)_A1g + χ(3)_B1g cos2θPump cos2θProbe + χ(3)_B2g sin2θPump sin2θProbe,   (2)

where the symmetry components χ(3)_A1g, χ(3)_B1g, and χ(3)_B2g are defined in the SM. For a given θPump, the A1g and B1g signals respectively correspond to the isotropic and cos2θProbe components observed in Fig. 1(d), which can be extracted by adding or subtracting ΔR/R (θProbe = 0°) and ΔR/R (θProbe = 90°). As expected from Eq. (2), the extracted A1g signal is found to be independent of θPump, while the B1g signal follows cos2θPump (see Fig. S3).

We now compare the observations with theoretical expectations, focusing on the origin of the symmetry-dependent THz Kerr signal observed in the SC state. As in the case of the THG, both the CDF and the Higgs mode can contribute to χ(3). In Fig. 4(a), we show the diagrams [(i)-(iv)] that represent the CDF contributions to χ(3)(ω; ω, +Ω, −Ω). Diagram (iv) does not show a characteristic temperature dependence and is irrelevant to superconductivity. In the case of TPOP measurements, the probe frequency ω exceeds all the other relevant energy scales (Ω, 2Δ0, Tc, etc.), so the contributions of diagrams (i) and (ii) are suppressed (~1/ω²) compared to that of (iii). Hence the contribution relevant to the present experiment essentially comes from the frequency-independent diagram (iii).
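Returning to the experimental decomposition described above, the following sketch extracts the A1g and B1g components from two measured traces by the addition/subtraction just mentioned. It is a minimal illustration under our own naming, and the data arrays are hypothetical placeholders, not measured values.

```python
import numpy as np

# Hypothetical pump-probe traces dR/R(t) at two probe polarizations (theta_pump fixed).
dRR_0deg = np.array([0.0, 1.2e-5, 2.0e-5, 1.1e-5])   # theta_probe = 0 deg
dRR_90deg = np.array([0.0, 0.4e-5, 0.6e-5, 0.3e-5])  # theta_probe = 90 deg

# From Eq. (2): at theta_probe = 0 and 90 deg the signal is A1g + B1g and A1g - B1g
# (cos 2*theta_probe = +1 and -1), so half-sum and half-difference give the components.
A1g = 0.5 * (dRR_0deg + dRR_90deg)
B1g = 0.5 * (dRR_0deg - dRR_90deg)
print(A1g, B1g)
```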
Based on the above consideration, we indicate in Table 1 the general behavior of the symmetry decompositions for the CDF and the Higgs mode, respectively (see SM for details). While the CDF appears in all the symmetry channels, the Higgs mode selectively appears in the A1g symmetry. To quantify the magnitudes of the CDF and Higgs-mode contributions in the different symmetries, we employ the single-band tight-binding model described in the SM, in which the A1g component of the CDF contribution is about 17 times smaller than the B1g contribution. The above interpretation is also supported by a comparison with Raman results in Bi2212, which are commonly attributed to CDF [43].
First, the increase in the relative amplitude of the B1g component with doping is consistent with the strong increase in the pair-breaking peak intensity observed in B1g Raman spectra toward p = 0.22 [38,44]. Second, in underdoped Bi2212 samples both the B1g and A1g SC Raman responses vanish, leaving only a weak B2g Raman signature of the SC state [45,46]. This was interpreted as a consequence of the opening of the pseudogap (PG), which strongly suppresses the CDF response coming from anti-nodal QPs but leaves intact the nodal QPs probed in the B2g response [46]. This contrasts strongly with the dominance of the A1g oscillatory component observed here in the THz Kerr signal of UD samples, and further reinforces our assignment of this component as arising from the d-wave Higgs mode.
The precise physical origin of the dominance of the Higgs-mode contribution to the THz Kerr effect remains an open problem. This may be a general property of nonlinear susceptibilities in the SC state at THz frequencies, since the same observation was deduced from the polarization dependence of the THz THG signal in the conventional s-wave NbN [20]. A recent dynamical mean field theory (DMFT) calculation has shown, as mentioned above, that the Higgs-mode contribution can actually exceed the CDF contribution if retardation effects are considered in strongly electron-phonon-coupled superconductors [20]. An interesting question then is whether this also holds for unconventional superconductors.
In conclusion, we have studied THz pulse-induced nonequilibrium dynamics in Bi2212 through the change in the optical reflectivity. We observed an oscillatory behavior of the optical reflectivity proportional to |EPump(t)|², which we assign to a nonlinear THz Kerr effect. The signal is strongly enhanced below Tc, and the comparison of its polarization-resolved A1g and B1g components with BCS calculations indicates that the A1g component is associated with the Higgs mode of the d-wave order parameter.

Supplemental Material
Intense terahertz pulse generation
The output from a regenerative amplified Ti:sapphire laser system with 800 nm center wavelength, 4 mJ pulse energy, 100 fs pulse duration, and 1 kHz repetition rate was divided into two beams: one for the generation of the terahertz (THz) pulse, and the other for the optical-probe pulse. To generate an intense monocycle THz pulse as an oscillating driving source, we used the tilted-pulse-front method with a LiNbO3 crystal [S1] combined with the tight focusing method [S2]. The THz-pump electric field (E-field) EPump(t) was detected by the electro-optic (EO) sampling in a 380 µm GaP (110) crystal placed inside the cryostat. Figure S1 shows the waveform and power spectrum of the THz-pump E-field. The peak value of the E-field reaches ~ 350 kV/cm with a central frequency ~ 0.6 THz. The near-infrared probe pulse was focused onto a 0.24 mm diameter spot on the ab plane of the crystal and the THz-pump pulse was focused onto a 3.1 mm spot.
THz-pump E-field dependence of the reflectivity change
We examined the THz-pump E-field dependence of the near-infrared reflectivity change ΔR/R. The THz E-field strength was continuously tuned by using three wire-grid polarizers (WGPs) inserted in the optical path of the THz pulse. Only the middle WGP was rotated to tune the THz E-field strength while keeping the waveform and the polarization identical at the sample position. Figure S2 shows the reflectivity change ΔR/R at its maximum as a function of the peak THz E-field for the OP90 sample at 10 K when θPump = θProbe = 0°. ΔRMax/R does not saturate up to ~ 350 kV/cm, indicating that the superconducting (SC) state is not significantly depleted up to the strongest THz E-field studied here.
Pump polarization dependence
Here we show the THz-pump polarization dependence of ΔR/R for the OP90 sample. We used 3 WGPs to rotate the pump polarization angle θPump while keeping the E-field strength identical, at ~320 kV/cm, for all polarization angles [S3]. Figure S3(a) shows the maximum amplitude of ΔR/R at 30 K for θProbe = 0° as a function of the pump polarization angle θPump. The result can be fitted by the formula A + B cos(2θPump). This polarization-angle dependence for the pump pulse is similar to that for the probe pulse shown in Fig. 1(d). We also plot the maximum amplitude of the A1g and B1g signals against θPump in Fig. S3(b). As expected from Eq. (2) in the main text, the A1g signal is angle-independent whereas the B1g signal follows cos2θPump.
The analysis for the pump and probe polarization dependence
As we have explained in the main text, the oscillatory behavior of the observed ΔR/R signal can be described by a third-order nonlinear susceptibility χ(3)(ω; ω, +Ω, −Ω) [S4], where ω and Ω are the frequencies of the near-infrared pulse and the THz-pump pulse, respectively. The reflectivity change ΔR induced by the THz pulse can be expanded in terms of the real (ε1) and imaginary (ε2) parts of the dielectric constant as

ΔR = (∂R/∂ε1) Δε1 + (∂R/∂ε2) Δε2.   (S1)

Here the change Δε in the complex dielectric constant is connected to χ(3) as

Δε_ij = Σ_kl χ(3)_ijkl(ω; ω, +Ω, −Ω) E_k^Pump E_l^Pump,   (S2)

where E_k^Pump denotes the kth component of the THz-pump E-field, while i and j are the indices for the probe E-field. Eq. (S2) corresponds to Eq. (1) in the main text. As we shall show, when the pump photon energy is much smaller than half the SC gap energy Δ0, the instantaneous response is off-resonant and thereby dominated by the real part of χ(3). In that case we have ΔR/R ≈ (1/R)(∂R/∂ε1) Re[Δε] (Eq. (S3)). Since the experiments were performed using only polarizations parallel to the CuO2 planes, we can focus on that plane, on which we assign the axes x, y along the Cu-O bonds (Fig. 1(b) in the main text). Here we define the pump and probe E-fields as E^Pump = E^Pump (cosθPump, sinθPump) and E^Probe = E^Probe (cosθProbe, sinθProbe). The symmetry decomposition of χ(3) (Eq. (S4)) corresponds to Eq. (2) in the main text. By substituting Eq. (S4) into Eq. (S3), we can express the polarization-angle dependence of ΔR/R (Eq. (S5)) in terms of the symmetry components defined in Eq. (S6).

Figure S2. Dependence of the maximum amplitude of ΔR/R on the peak E-field of the THz-pump pulse for OP90 at 10 K for θPump = θProbe = 0°.
The B2g component for θPump = 45°

From Eq. (S6), we can obtain the B1g and B2g components of ΔR/R as given in Eq. (S7). Figure S4 shows the B1g and B2g components obtained at 30 K for OP90 with the E-field strength fixed to ~320 kV/cm. The B2g signal was not resolved within the sensitivity of our measurement.
The fitting procedure
To obtain detailed information on the temperature dependence of the symmetry-resolved components, we fitted the transient signals with a formula whose first term represents the oscillatory component, i.e., the THz Kerr signal.
Temperature dependences for UD74 and OD78
Here we show the temperature dependences of the A1g and B1g signals for the UD74 sample in Figs
Comparison of the A1g and B1g signals for UD58, OD66 and OD52
The pump-probe delay dependence of the A1g and B1g signals for the UD58, OD66 and OD52 samples at 10 K is shown in Fig. S7. In the UD58 sample, the A1g oscillatory component is much larger than the B1g component.
Numerical calculation of the nonlinear optical susceptibility
The nonlinear optical susceptibility that contributes to the THz pump-optical probe (TPOP) signal is evaluated within the mean-field treatment. Here we take a pairing Hamiltonian

H = Σ_kσ ξ_k c†_kσ c_kσ − (1/N) Σ_kk' V(k, k') c†_k↑ c†_−k↓ c_−k'↓ c_k'↑,

where c†_kσ is the creation operator for electrons with momentum k and spin σ, ξ_k is the band dispersion, N is the number of k-points, and V(k, k') is the pairing interaction. We assume a d-wave pairing interaction of the separable form V(k, k') = V u_k u_k' with V > 0 and u_k = cos k_x − cos k_y. We define the SC gap function Δ_k, which satisfies the self-consistent mean-field gap equation

Δ_k = (1/N) Σ_k' V(k, k') [Δ_k' / (2E_k')] tanh(E_k' / 2T),

where E_k = √(ξ_k² + Δ_k²) is the eigenenergy of quasiparticles, and T is the temperature. One can factor out the momentum dependence of the gap function as Δ_k = Δ u_k.
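A minimal numerical sketch of this self-consistency loop is given below. It is our own illustration, not the authors' code: with the separable interaction the gap equation reduces to 1 = (V/N) Σ_k u_k² tanh(E_k/2T)/(2E_k), which we iterate on a square-lattice grid using the tight-binding dispersion and the parameter ratios (t'/t = 0.2, t''/t = 0.1, V/t = 1) quoted later in this supplement; the grid size, the chemical potential value, the temperature, and the convergence tolerance are arbitrary choices of ours.

```python
import numpy as np

t, tp, tpp, V, T = 1.0, 0.2, 0.1, 1.0, 0.02  # hoppings, pairing strength, temperature (units of t)
mu = -0.8  # placeholder; in the paper mu is tuned so the filling is 20% hole doped

# Square-lattice k-grid and the tight-binding dispersion with t, t', t''.
k = np.linspace(-np.pi, np.pi, 256, endpoint=False)
kx, ky = np.meshgrid(k, k)
xi = (-2 * t * (np.cos(kx) + np.cos(ky))
      - 4 * tp * np.cos(kx) * np.cos(ky)
      - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)) - mu)
u = np.cos(kx) - np.cos(ky)  # d-wave form factor

# Iterate Delta = (V/N) * sum_k u_k^2 * Delta * tanh(E_k/2T) / (2 E_k), with Delta_k = Delta*u_k.
delta = 0.1
for _ in range(200):
    E = np.maximum(np.sqrt(xi**2 + (delta * u) ** 2), 1e-12)  # guard against E = 0 at nodes
    new = V * np.mean(u**2 * delta * np.tanh(E / (2 * T)) / (2 * E))
    if abs(new - delta) < 1e-10:
        break
    delta = new
print(f"self-consistent d-wave gap amplitude: Delta = {delta:.4f} t")
```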
The dynamics of the superconductor is described by the evolution of Anderson's pseudospins σ_k = (1/2) Ψ†_k τ Ψ_k, where Ψ_k = (c_k↑, c†_−k↓)^T is the Nambu spinor and τ = (τ_x, τ_y, τ_z) are the Pauli matrices. The equation of motion for the pseudospins is given by a Bloch equation,

∂σ_k/∂t = 2 b_k(t) × σ_k,

where b_k(t) = (−Δ_k'(t), −Δ_k''(t), [ξ_{k+A(t)} + ξ_{k−A(t)}]/2) is the pseudomagnetic field acting on the pseudospin, Δ_k'(t) and Δ_k''(t) are respectively the real and imaginary parts of the gap function, while A(t) = A_Pump(t) + A_Probe(t) represents the vector potential for the pump and probe lasers. If we denote the deviation of the pseudospin configuration from the equilibrium state as σ_k(t) = σ_k,eq + δσ_k(t), then δσ_k(t) is even-order in A(t).
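To illustrate the structure of this Bloch-equation dynamics, here is a schematic integrator for the pseudospins under a THz pump. Everything in it (the forward Euler stepping, the Gaussian-enveloped pump polarized along x, the neglect of the probe field, the initial gap value, and all numerical parameters) is our own minimal setup, not the authors' implementation.

```python
import numpy as np

t, tp, tpp, mu, V = 1.0, 0.2, 0.1, -0.8, 1.0   # same placeholder band as above
N = 128
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
kx, ky = np.meshgrid(k, k)
u = np.cos(kx) - np.cos(ky)

def xi(ax):
    """Band dispersion with the momentum shifted by a vector potential along x."""
    qx = kx + ax
    return (-2 * t * (np.cos(qx) + np.cos(ky)) - 4 * tp * np.cos(qx) * np.cos(ky)
            - 2 * tpp * (np.cos(2 * qx) + np.cos(2 * ky)) - mu)

# T = 0 equilibrium pseudospins: sigma_k = (Delta_k, 0, -xi_k) / (2 E_k)
delta0 = 0.05  # assumed equilibrium gap (ideally the self-consistent solution above)
E = np.maximum(np.sqrt(xi(0.0)**2 + (delta0 * u)**2), 1e-12)
sigma = np.stack([delta0 * u / (2 * E), np.zeros_like(u), -xi(0.0) / (2 * E)], axis=-1)

# Gaussian-enveloped monocycle pump A(t), polarized along x (all numbers arbitrary).
dt, steps = 0.05, 4000
time = dt * np.arange(steps)
A = 0.2 * np.exp(-((time - 40) / 15) ** 2) * np.sin(0.3 * time)

gap_trace = []
for a in A:
    delta = V * np.mean(u * (sigma[..., 0] - 1j * sigma[..., 1]))  # self-consistent gap
    b = np.stack([-delta.real * u, -delta.imag * u, 0.5 * (xi(a) + xi(-a))], axis=-1)
    sigma = sigma + dt * 2.0 * np.cross(b, sigma)  # forward Euler step (schematic)
    gap_trace.append(abs(delta))
# |Delta(t)| now carries the driven oscillations of the order parameter amplitude.
```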
The current is expressed in terms of the pseudospins as in Eq. (S14), with v_k = ∂ξ_k/∂k the group velocity. The leading pump-probe response is third-order in A(t), as in Eq. (S15). Let us assume a sinusoidal form for the pump and probe E-fields, where Ω and ω are the frequencies of the pump and probe light, respectively. We define the polarization vectors e_Pump and e_Probe for the pump and probe light through A_Pump = e_Pump A_Pump and A_Probe = e_Probe A_Probe (|e_Pump| = |e_Probe| = 1). The nonlinear current j(3)(t) that contributes to the pump-probe spectroscopy has the same time dependence as that of the probe light (∝ e^−iωt). Hence j(3)(t) must contain the product of A_Pump e^iΩt, A_Pump e^−iΩt, and A_Probe e^−iωt. The third-order nonlinear optical susceptibility χ(3) that represents the pump-probe signal is defined accordingly. For the first term in Eq. (S15), there are three possibilities: (1) A_i(t) is A_Pump e^iΩt, with σ_k^z(t) containing A_Pump e^−iΩt and A_Probe e^−iωt.
(2) A_i(t) is A_Pump e^−iΩt, with σ_k^z(t) containing A_Pump e^iΩt and A_Probe e^−iωt. (3) A_i(t) is A_Probe e^−iωt, with σ_k^z(t) containing A_Pump e^iΩt and A_Pump e^−iΩt.

The second term in Eq. (S15), on the other hand, has the unique possibility that A_i(t), A_j(t), and A_k(t) are a permutation of A_Pump e^iΩt, A_Pump e^−iΩt, and A_Probe e^−iωt. Correspondingly, we have four different diagrams for χ(3), as displayed in Fig. 4(a) in the main text.
In the case of the TPOP spectroscopy, the frequency of the probe light exceeds all the other relevant energy scales. In this situation, as we have discussed in the main text, the dominant contribution of the charge density fluctuations (CDF) to χ(3) comes from case (3) above [which corresponds to diagram (iii) in Fig. 4(a)]. The CDF contribution including the screening effect is explicitly calculated as in Eq. (S19), where χ_33(k, ν) is the dynamical charge susceptibility [S3,S16]. Within the mean-field theory, χ_33(k, ν) is evaluated with an infinitesimal positive constant added to the frequency (in practice we take a small finite value to regularize the divergence in χ_33).
To investigate the polarization-angle dependence of χ(3), we set e_Pump = (cos θ_Pump, sin θ_Pump, 0) and e_Probe = (cos θ_Probe, sin θ_Probe, 0). Assuming tetragonal symmetry for Bi2212, we can decompose the nonlinear susceptibility χ_CDF(3) into the irreducible representations of the D4h point group as in Eq. (2) in the main text; the resulting symmetry components are given by the corresponding explicit expressions. The Higgs-mode contribution to χ(3) is classified in the same way as the CDF. The relevant diagrams are those corresponding to (i)-(iii) in Fig. 4(a) in the main text, with the vertex function inserted inside the bubbles.
The dominant contribution for the TPOP spectroscopy comes from the frequency-independent term that corresponds to diagram (iii), and its polarization dependence is purely of A1g symmetry (Eq. (S24)). This sharply contrasts with the polarization dependence of the CDF contribution (see Table 1 in the main text), which allows us to discriminate the CDF and Higgs-mode contributions in TPOP spectroscopy experiments.
Within the mean-field theory, the A1g component of the Higgs-mode contribution (including the screening effect) is explicitly evaluated as in Eq. (S25), where χ_11(k, ν) and χ_31(k, ν) are the amplitude-amplitude and amplitude-charge dynamical susceptibilities, respectively; in the mean-field theory, they are calculated from the corresponding explicit expressions. To numerically evaluate these quantities for Bi2212, we employ a single-band tight-binding model with the band dispersion

ξ_k = −2t(cos k_x + cos k_y) − 4t' cos k_x cos k_y − 2t''(cos 2k_x + cos 2k_y) − μ,

where t, t' and t'' are respectively the nearest-, second-, and third-neighbor hoppings on the two-dimensional square lattice, and μ is the chemical potential. We adopt t'/t = 0.2 and t''/t = 0.1 from the literature [S17], and take V/t = 1 and a regularization constant of 0.01 (in units of t). The filling is set to 20% hole doping. The result for the CDF contribution is shown in Fig. 4(b) in the main text, while that for the Higgs-mode contribution is shown in Fig. S8 here. One can see that the A1g component grows below Tc, evidencing the correlation with superconductivity. Note that χ_Higgs(3) in Fig. S8 is normalized by its maximum value at the lowest temperature considered, while χ_CDF(3) in Fig. 4(b) is normalized by the maximum value of its B1g component at the lowest temperature considered, which is 80 times larger than that of χ_Higgs(3). In general, the magnitude of χ_Higgs(3) in the mean-field treatment is much smaller than that of χ_CDF(3). This situation is similar to the s-wave case: the Higgs-mode contribution to the THG susceptibility is suppressed by a factor of (Δ/V)² as compared to the CDF contribution [S18].
However, this is just an artifact of the mean-field approximation [S16]. For instance, if one takes into account strong correlation effects, such as the retarded phonon-mediated interaction in the s-wave case, the Higgs-mode contribution can be comparable to, or even larger than, the CDF contribution. In contrast to the relative magnitudes, the polarization-angle dependence of the CDF and Higgs-mode contributions remains almost unchanged when one goes beyond the mean-field theory. It is natural to expect a similar behavior for the case of d-wave superconductors.
Dominance of the B1g component in the CDF contribution
As we have seen in Fig. 4(b) in the main text, the B1g component dominates the CDF contribution. The argument above is valid within the mean-field theory. However, we speculate that the situation is qualitatively similar in strongly correlated systems. If one takes account of strong correlation effects, χ_33(k, 0) in Eq. (S19) is replaced with the one calculated beyond the mean-field theory. Due to the self-energy correction, the peaks in χ_33(k, 0) spread to some extent. In the underdoped regime, the self-energy effect should be significant in the anti-nodal regions (k ~ (±π, 0), (0, ±π)). Therefore, the cancellation between the first and second terms in Eq. (S21) becomes less effective but remains. This will keep the magnitude of the A1g component smaller than that of the B1g component.

Figure S8. The mean-field result for the Higgs-mode contribution to the nonlinear optical susceptibility χ(3) for the TPOP spectroscopy. χ_Higgs(3) is here normalized by its maximum value at the lowest temperature considered.
Posthospital Multidisciplinary Care for AKI Survivors: A Feasibility Pilot
Rationale & Objective: Innovative models are needed to address significant gaps in kidney care follow-up for acute kidney injury (AKI) survivors.

Study Design: This quasi-experimental pilot study reports the feasibility of the AKI in Care Transitions (ACT) program, a multidisciplinary approach to AKI survivor care based in the primary care setting.

Setting & Participants: The study included consenting adults with stage 3 AKI discharged home without dialysis.

Interventions: The ACT intervention included predischarge education from nurses and coordinated postdischarge follow-up with a primary care provider and pharmacist within 14 days. ACT was implemented in phases (Usual Care, Education, ACT).

Outcomes: The primary outcome was feasibility. Secondary outcomes included process and clinical outcomes.

Results: In total, 46 of 110 eligible adults were enrolled. Education occurred in 18/18 and 14/15 participants in the Education and ACT groups, respectively. 30-day urine protein evaluation occurred in 15%, 28%, and 87% of the Usual Care, Education, and ACT groups, respectively (P < 0.001). Cumulative incidence of provider (primary care or nephrologist) and laboratory follow-up at 14 and 30 days differed across groups (14 days: Usual Care 0%, Education 11%, ACT 73% [P < 0.01]; 30 days: 0%, 22%, and 73% [P < 0.01]). 30-day readmission rates were 23%, 44%, and 13% in the Usual Care, Education, and ACT groups, respectively (P = 0.13).

Limitations: Patients were not randomly assigned to treatment groups. The sample size limited the ability to detect some differences or perform multivariable analysis.

Conclusions: This study demonstrated the feasibility of multidisciplinary AKI survivor follow-up beginning in primary care. We observed a higher cumulative incidence of laboratory and provider follow-up in ACT participants.

Trial Registration: ClinicalTrials.gov (NCT04505891).

Plain-Language Summary: Abrupt loss of kidney function in hospitalized patients, acute kidney injury (AKI), increases the chances of long-term kidney disease and a worse health care experience for patients. One out of 3 people who experience AKI do not get the follow-up kidney care they need. We performed a pilot study to test whether a program that facilitates structured AKI follow-up in primary care, called the AKI in Care Transitions (ACT) program, was possible. ACT brings together the unique expertise of nurses, doctors, and pharmacists to look at the patient's kidney health plan from all angles. The study found that the ACT program was possible and led to more complete kidney care follow-up after discharge than the normal approach to care.
Despite these heightened risks, at least 21% of patients are unaware of their AKI diagnosis, and kidney-focused follow-up is infrequent [8,9]. Appropriate laboratory monitoring with serum creatinine (SCr) or urine protein occurs in just 54% and 14% of patients, respectively, within 6 months of discharge [9,10]. Even survivors at the highest risk for poor outcomes, such as those with AKI requiring dialysis, pre-existing CKD, or persistent AKI, are seen by an outpatient nephrologist in only 36-43% of cases [11,12]. Increasing attention to post-AKI follow-up through alternative care delivery models may help mitigate these gaps. Nephrologist follow-up is the focal point of most models. Patients involved in AKI survivor clinics directed by nephrologists demonstrate improved kidney health knowledge and adherence to best practices, such as kidney laboratory assessments [8,13]. Preliminary data also showed improvements in clinical outcomes, such as blood pressure control and reduced rehospitalization, though confirmatory research is needed [14]. This care model has promise, but concerns have been raised about feasibility and scalability. Patients report reluctance to add more doctors to their health care team and cite concerns about travel distance and issues with scheduling follow-up visits [13]. Few nephrologists are available in community and rural settings, which decreases access to AKI survivor care [15]. Accordingly, nephrologists have called for multidisciplinary care models to enhance capacity for post-AKI care delivery [16-19]. We therefore developed the AKI in Care Transitions (ACT) program, a multidisciplinary, team-based approach to AKI survivor care based in the primary care setting. This study reports the preliminary feasibility and effectiveness of the ACT program.
Setting and Participants
This prospective pilot study was conducted between April 2020 and November 2021 at Mayo Clinic in Rochester, Minnesota, a tertiary care center with a primary care practice for local area residents. The Mayo Clinic primary care program includes approximately 150,000 empaneled patients cared for at 7 full-service clinical sites and 2 express care sites in the local counties. Included individuals were adults (≥18 years) with Kidney Disease: Improving Global Outcomes (KDIGO) stage 3 AKI at any time during their hospitalization who were not discharged on dialysis or with hospice care and who received primary care in a Mayo Clinic Rochester-based clinic [20]. Recruitment was limited to patients with stage 3 AKI for feasibility in the pilot stage (Table S1). Excluded patients were non-English speakers, persons cognitively or physically unable to participate (eg, clinician-documented dementia in the electronic health record [EHR]), and persons who did not provide informed consent. Eligibility was determined using an EHR screening alert and EHR review by a study team member. This study was approved by the Institutional Review Board at Mayo Clinic (IRB 20-004204) and registered on ClinicalTrials.gov (NCT04505891).
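For illustration, the kind of rule a creatinine-based screening alert encodes can be sketched as below. This is a simplified sketch of the KDIGO serum creatinine criteria only (stage 3: SCr ≥3.0 times baseline, SCr ≥4.0 mg/dL, or initiation of kidney replacement therapy); it omits the urine output and pediatric criteria and all of the practical baseline-selection logic a production EHR alert would need, and the function and parameter names are ours, not the study's alert.

```python
def kdigo_stage_scr(baseline, current, rise_48h=0.0, on_krt=False):
    """Simplified KDIGO AKI staging from serum creatinine (mg/dL) only.
    Urine-output and pediatric eGFR criteria are omitted."""
    ratio = current / baseline
    if on_krt or ratio >= 3.0 or current >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or rise_48h >= 0.3:
        return 1
    return 0  # does not meet creatinine criteria for AKI

# Example: baseline 1.0 mg/dL rising to 3.2 mg/dL -> stage 3 (ratio >= 3.0)
print(kdigo_stage_scr(baseline=1.0, current=3.2))
```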
No formal dedicated AKI survivor clinic exists at Mayo Clinic. There are 5 inpatient nephrology consult services electively available at Mayo Clinic in Rochester, with a typical cumulative daily census of 60-90 patients. Nephrology consult teams include nurse educators whose primary role is to deliver education to hospitalized patients being discharged on dialysis. The primary care practice at Mayo Clinic in Rochester employs a team-based care model that includes physicians, advanced practice providers, nurses, and embedded clinical pharmacists who consult with patients independently or in collaboration with the primary care provider (PCP). There were no significant changes to the standard of transitional post-AKI care within Mayo Clinic or by external consensus during the study period.
In 2020, the previously described ACT program was implemented to provide support for AKI survivors transitioning between the inpatient and outpatient settings and to facilitate timely kidney care follow-up after discharge.21 Briefly, AKI survivors identified by the EHR screening alert, an embedded alert that used serum creatinine and urine output data to identify AKI, received inpatient education from nephrology nurse educators approximately 1-3 days before discharge.21 A detailed description of the provided education, including artifacts, has been previously published.21 Next, the study team coordinated transitional care, including posthospital visits with a PCP and pharmacist within 14 days after discharge. Nephrology referral was at the discretion of the inpatient nephrologists, if consulted, or the patient's PCP.
Study Groups
ACT was deployed in 3 phases, which created a natural 3-phase quasi-experimental design, with informed consent obtained for participants in each phase. The first phase (April 2020 to October 2020; the 'Usual Care' group) included AKI survivors identified by the EHR screening tool who would be candidates for ACT. Patients were passively followed during this phase, and the inpatient care team coordinated any AKI-related education and outpatient follow-up as part of their standard practice. Throughout all phases, any visit could be in-person or virtual, according to patient preference. During the second phase of implementation (October 2020 to April 2021), patients identified by the EHR screening tool were visited by a trained nephrology nurse educator who delivered targeted AKI education using videos, pamphlets, and teach-back strategies before hospital dismissal (the 'Education Alone' group). Frequency and intensity of education were individualized at the discretion of the nephrology nurse educator, but standard components were delivered to all participants. The third and final phase (April 2021 to November 2021) included patients who received the full ACT intervention (the 'ACT' group). In this phase, participants received the previously described nephrology nurse educator AKI education, and the study team coordinated outpatient kidney follow-up within 14 days of discharge (Fig 1). Follow-up included discharge orders for laboratory testing (ie, extended metabolic panel including SCr and urinalysis with microscopy or an alternative urine protein test as available) and posthospital follow-up visits with a PCP and a pharmacist, which ideally occurred back-to-back or on the same day. Pharmacists evaluated postdischarge urine protein results and used an established protocol to order a repeat assessment within 3 months if there was evidence of proteinuria. They also performed a detailed medication review and reconciliation and discussed recommendations with the provider in person, if possible, or via secure message. Recommendations were at the pharmacists' discretion and were not standardized or limited to kidney-related medications. Pharmacist recommendations in this workflow are frequently related to therapy optimization (eg, drug choice, dose change), monitoring (eg, drug levels), management of drug interactions, and optimization of patient centeredness (eg, decreased medication burden to improve adherence). The PCP reviewed the posthospital laboratory data, if available, and was encouraged to use the KAMPS (kidney function assessment, awareness and education, medication review, blood pressure monitoring, and sick day education) framework (Table S2) for secondary and tertiary prevention of AKI.22
If deemed appropriate, additional follow-up with nephrology or other specialists occurred. Clinical decision support tools were developed and embedded in the EHR during this phase.21 Two alerts developed for inpatient teams included 1) a passive notification that the patient had stage 3 AKI, with links to kidney health resources, and 2) a failsafe alert to prompt placement of dismissal orders (kidney laboratory monitoring and PCP and pharmacist visits) if those placed by the ACT study team were discontinued. Clinical decision support was also available for outpatient providers, with descriptions of the KAMPS framework and links to additional kidney care resources through a proprietary medical knowledge system.23 Comparisons were made across phases to examine the impact of each added level of intervention on care processes and outcomes.
Data Collection
Data abstracted from the EHR included demographics and select comorbid conditions documented in clinician notes. Encounter data included length of hospital and intensive care unit stay, nephrology consultation during hospitalization, and details about the AKI episode. Estimated glomerular filtration rate (eGFR) was determined using the 2021 Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) creatinine equation.24 Preadmission SCr was defined as the median SCr from 6 months to 7 days before admission or back-calculated using the MDRD (Modification of Diet in Renal Disease) Study equation, assuming an eGFR of 75 mL/min/1.73 m2.25 All data were manually collected from the EHR except laboratory data, which were electronically obtained.
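To make these two calculations concrete, the sketch below implements them in Python. The numeric constants are the published 2021 CKD-EPI creatinine and 4-variable MDRD Study coefficients, but the function names and example values are illustrative assumptions rather than the study's own code, and the MDRD race coefficient is omitted for brevity.

```python
def egfr_ckd_epi_2021(scr_mg_dl: float, age: int, female: bool) -> float:
    """Race-free 2021 CKD-EPI creatinine eGFR, in mL/min/1.73 m2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    return (142.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age
            * (1.012 if female else 1.0))

def back_calculated_scr_mdrd(age: int, female: bool, egfr: float = 75.0) -> float:
    """Solve the 4-variable MDRD Study equation for SCr at an assumed eGFR.

    MDRD: eGFR = 175 * SCr**-1.154 * age**-0.203 * (0.742 if female),
    rearranged for SCr; the race coefficient is omitted here for brevity.
    """
    sex_factor = 0.742 if female else 1.0
    return (175.0 * age ** -0.203 * sex_factor / egfr) ** (1.0 / 1.154)

# Hypothetical example: a 64-year-old woman with no preadmission SCr on file.
baseline_scr = back_calculated_scr_mdrd(64, female=True)
print(round(baseline_scr, 2))                               # imputed baseline SCr
print(round(egfr_ckd_epi_2021(1.1, 64, female=True), 1))    # eGFR at SCr 1.1 mg/dL
```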
Outcomes and Analysis
Feasibility was measured using the proportion of patients screened, approached for consent, and enrolled from among all patients identified by the EHR alert, and the proportion of participants in the ACT group who completed follow-up care (intention-to-treat). Intervention fidelity was measured using the proportion of patients who received the intervention components. We also evaluated the proportion of participants in the ACT group where clinicians interfaced with clinical decision support alerts. Process outcomes assessed in all groups included the frequency and nature of participants' completed follow-up care, including timing and provider type. The cumulative incidence of provider (PCP or nephrologist) and laboratory (SCr and urine study, including urinalysis with microscopy, urine dipstick, and urine albumin-to-creatinine ratio) follow-up was determined at 14 and 30 days. Clinical outcomes of interest were emergency department visits, hospital readmissions, and death within 90 days. Changes in eGFR between dismissal and 30 ± 15 days and 90 ± 30 days after hospitalization were calculated using outpatient SCr values. Medication data were collected from the discharge summary for the index hospitalization and at 90 days using the medication list from the nearest inpatient or outpatient encounter. Participants were followed for 90 days after hospital dismissal or until death or loss to follow-up within that timeframe.
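A minimal sketch of this composite follow-up outcome is given below, assuming for illustration that each component's first postdischarge day is known and ignoring censoring before the horizon; the helper names and toy data are hypothetical, not the study's code.

```python
from typing import Optional

def composite_followup_day(provider_day: Optional[int],
                           scr_day: Optional[int],
                           urine_day: Optional[int]) -> Optional[int]:
    """Day by which BOTH provider and laboratory follow-up are complete.

    Returns None if any component never occurred during follow-up.
    """
    if None in (provider_day, scr_day, urine_day):
        return None
    return max(provider_day, scr_day, urine_day)

def cumulative_incidence(days: list, horizon: int) -> float:
    """Proportion of the group with composite follow-up by the horizon day
    (a simple proportion; censoring is ignored in this sketch)."""
    hit = sum(1 for d in days if d is not None and d <= horizon)
    return hit / len(days)

# Toy group of 4 participants: (provider, SCr, urine) follow-up days.
group = [composite_followup_day(*p)
         for p in [(7, 7, 7), (10, 5, 40), (None, 3, 3), (12, 12, 12)]]
print(cumulative_incidence(group, 14))  # 0.5 by day 14
print(cumulative_incidence(group, 30))  # 0.5 by day 30
```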
Continuous data were described using median and interquartile range (IQR). Baseline characteristics, hospitalization, and feasibility data were reported using descriptive statistics. The 3 groups were compared using the Fisher-Freeman-Halton exact test for nominal or discrete data and the Kruskal-Wallis test for continuous data. A sensitivity analysis excluded participants discharged to a skilled nursing facility, as follow-up practices may be impacted by the discharge disposition. As care coordination was facilitated by the study team in only the ACT group, an additional sensitivity analysis compared the ACT group to a group that combined the Usual Care and Education Alone participants. All analyses were performed using SAS version 9.4 software (SAS Institute, Inc.; Cary, NC).
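For illustration, the sketch below reproduces the two group comparisons in Python rather than SAS. SciPy has no built-in Fisher-Freeman-Halton exact test for r x c tables, so its p-value is approximated here by Monte Carlo permutation of group labels using the chi-square statistic; the Kruskal-Wallis call is SciPy's standard one. The toy data echo the readmission percentages and medication medians reported in the Results, but they are illustrative, not patient-level study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, kruskal

def mc_exact_pvalue(labels, outcomes, n_perm=10_000, seed=0):
    """Monte Carlo permutation p-value for association in an r x c table,
    an approximation to the Fisher-Freeman-Halton exact test."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    outcomes = np.asarray(outcomes)
    groups = np.unique(labels)
    levels = np.unique(outcomes)

    def chi2_stat(lab):
        # Build the r x c contingency table for this labeling.
        table = np.array([[np.sum((lab == g) & (outcomes == o))
                           for o in levels] for g in groups])
        return chi2_contingency(table, correction=False)[0]

    observed = chi2_stat(labels)
    hits = sum(chi2_stat(rng.permutation(labels)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Nominal outcome (30-day readmission) across the 3 study groups; counts
# chosen to mirror the reported 23%/44%/13% rates, purely for illustration.
groups = ["Usual"] * 13 + ["Educ"] * 18 + ["ACT"] * 15
readmit = [1] * 3 + [0] * 10 + [1] * 8 + [0] * 10 + [1] * 2 + [0] * 13
print(mc_exact_pvalue(groups, readmit))

# Continuous outcome (medication count) compared with Kruskal-Wallis.
print(kruskal([12, 11, 15], [16, 13, 22], [16, 8, 21]).pvalue)
```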
Recruitment Feasibility
Most (329 out of 346) patients identified by the EHR alert were evaluated for inclusion by the study team.

Outcomes

The 14-day cumulative incidence of provider (PCP or nephrologist) and laboratory (SCr and urine study) follow-up was 0% in the Usual Care group, 11% in the Education Alone group, and 80% in the ACT group (P < 0.001; Table 3). The degree of provider and laboratory follow-up was persistently different at 30 days (0%, 22%, and 80%, respectively; P < 0.001). Time to follow-up is shown in Figure 3. Findings were consistent in sensitivity analyses excluding those who were discharged to a skilled nursing facility (Table S3) and when the ACT group was compared to the combined Usual Care and Education Alone groups (Table S4).
Participants were on a median (IQR) of 12 (11, 15) medications in the Usual Care group at hospital discharge, 16 (13, 22) in the Education Alone group, and 16 (8, 21) in the ACT group (Table 1). Two (15%) participants in the Usual Care group had a nonsteroidal anti-inflammatory drug on their medication list at hospital discharge compared to zero in the Education Alone and ACT groups (P = 0.07; Table 5). Renoprotective medications, including renin-angiotensin system inhibitors and sodium-glucose cotransporter-2 inhibitors, were newly initiated within 90 days in 3 (20%) ACT participants compared to 0 and 1 (6%) in the Usual Care and Education Alone groups, respectively (P = 0.23; Table 5).
DISCUSSION
In this prospective evaluation of a multidisciplinary model for AKI survivor care, we demonstrated that kidney health education and coordinated follow-up of AKI survivors in primary care increase timely adherence to best practices. Feasibility was evidenced by effective participant identification and recruitment practices, which yielded a 42% enrollment rate, higher than reported with nephrologist-centric programs, and by reliable delivery of the ACT intervention, with >80% of participants completing education and postdischarge laboratory and provider follow-up.
In this study, participation in the ACT program was associated with a higher rate of laboratory monitoring for kidney function assessment and provider follow-up, 2 core components of best practices for high-quality post-AKI care.22 It appeared that improved completion of timely urine protein evaluation was a key driver of the ACT program impact. Although urine protein assessment is a key prognostic indicator in AKI survivors, United States Renal Data System data indicate that it was evaluated in less than 20% of patients in the 6 months after discharge.26,27 A mixed methods study from the ACT group indicated that this is likely due to a combination of factors, including a lack of awareness, competing priorities, and opportunities to improve kidney knowledge and education among PCPs.28 An episode of AKI has been associated with a 9% increase in urine albumin-to-creatinine ratio, with greater increases following more severe AKI (eg, 24% with stage 3 AKI).29 Identifying and quantifying proteinuria is a critical risk-stratification tool following AKI. A higher urine albumin-to-creatinine ratio is associated with an increased risk of kidney disease progression and dialysis need.26,30 It also has implications beyond kidney disease, such as complications from cardiovascular disease, and thus may inform other comorbid condition-directed therapy.31 When used in the primary care setting, identification of proteinuria may identify a subset of patients for whom nephrology consultation would provide the greatest benefit. The higher rates of urine protein assessment observed in the ACT group were likely driven primarily by the active role of the ACT study team in coordinating clinical and laboratory follow-up before hospital dismissal. Urine protein test selection was driven by availability at the primary care practice site. While the urine albumin-to-creatinine ratio would be preferred for all patients, semiquantitative assessments with a urine dipstick or urinalysis with microscopy may be more feasible in certain environments. For patients with evidence of proteinuria on surveillance evaluation, a more detailed review is warranted. An established pharmacist-driven, collaborative practice protocol facilitated ordering repeat measurements for participants in the ACT group with elevated protein on initial assessment. Overall, monitoring for proteinuria may catalyze the initiation or resumption of renoprotective medications during postdischarge follow-up, as was seen in 20% of the ACT group compared to 0% and 6% in the Usual Care and Education Alone groups, respectively. ACT participants had a significantly decreased time to posthospital laboratory monitoring and PCP and pharmacist visits.
The frequency of nephrologist visits and the time to nephrologist follow-up were similar across groups. This represents an improvement relative to previously described nephrologist-centric models, which showed a median follow-up time of 15-48 days.13,14 Early posthospital follow-up may allow for more timely recognition of AKI-related complications, decreased exposure to nephrotoxins, and adjustment of diuretic and other medication doses during the dynamic arc of kidney recovery. A prior report from this study identified a median of 3 drug therapy problems per patient, 18% of which were for nephrotoxic/renoprotective medication optimization. Among these pharmacist-identified interventions for renally active therapy, 80% of optimization recommendations were acted on by providers within 7 days.32 Previous research on collaborative delivery of post-AKI care between nephrologists and PCPs found that follow-up in primary care, in advance of or in conjunction with nephrologist follow-up when indicated, was seen as instrumental in assuring care continuity and comorbidity management.28 Similar models of transitional, multidisciplinary team-based follow-up in primary care have demonstrated reductions in ED visits, rehospitalization, and costs, as well as improved self-rated health.33-36 Collaborative care delivery has been identified as desirable and necessary for the scale and spread of post-AKI care, and further research on how it may be optimized is warranted.16,37-40 By incorporating multiple disciplines, including pharmacists and nephrology nurse educators, this health care delivery model capitalizes on specialty knowledge while reducing burden on already limited provider resources. Multidisciplinary engagement, particularly pharmacist-led medication review and reconciliation, has been recommended as a foundational element of post-AKI care by nephrologists and patients.16,37-40 Medication management is one of the few modifiable determinants of patient outcomes, and pharmacist involvement in transitions of care has been associated with reduced hospital readmissions and polypharmacy.41-45 Despite these factors, their routine incorporation into post-AKI care is not well documented. In the present study, pharmacist involvement may have contributed to favorable medication use patterns, including the lack of nonsteroidal anti-inflammatory drugs and the initiation of renoprotective medications in 20% of participants in the ACT group.32 This model of care included several strengths that address known barriers to delivery of post-AKI care.
Use of an EHR-based screening tool allowed for automated identification of AKI survivors from among the entire inpatient census. Previous tactics have relied on time-consuming manual efforts, including review of cases in select hospital locations (eg, an ICU) and/or referral from an inpatient nephrology consult team.13,14,46 Such approaches may miss AKI survivors who stand to benefit significantly from kidney follow-up care. As a representative example, among those included in our study, only 54% were seen by nephrology during their hospitalization, and 46% had an ICU stay. Engagement of primary care may have contributed to a higher participation rate than observed in other nephrology-centric models.13 Patients have described reluctance to add additional specialists to their care team and long wait times for access to specialists as barriers to participating in post-AKI care.13,47 Targeted AKI education before hospital dismissal may have increased patient awareness about AKI and knowledge about the importance of kidney health follow-up, which are additional obstacles to patient participation in follow-up.8,40,48 This may have contributed to high compliance with ACT program components, including laboratory monitoring (93-100%) and provider visits (80%). Clinical decision support tools may act as important prompts to coordinate recommended follow-up care and thus contributed to the success of the ACT program. However, as only 40% of providers interfaced with these tools, more research is needed to optimize their utility. Collectively, this study provides evidence for the potential scalability and generalizability of this approach to post-AKI health care delivery.
This study is not without limitations. Patients were recruited during phased implementation of the ACT program and were not randomly assigned to treatment groups, which likely contributed to differences across groups. As the primary outcome was feasibility, the sample size was small and thus insufficient to detect differences in many clinically meaningful outcomes or to perform multivariable analysis. Given these factors, findings related to clinical outcomes should be interpreted with caution. Dismissal to a skilled nursing facility occurred at varying rates across groups and may have impacted the timing and frequency of postdischarge follow-up. Thus, a sensitivity analysis was performed excluding patients who were dismissed to a skilled nursing facility, and results were similar. There also remains a need to evaluate key patient-reported outcomes, including the effect of ACT on kidney health knowledge, which is planned but beyond the scope of this report. All participants were receiving primary care in the same region as the tertiary care center where recruitment was conducted, and all sites use a shared EHR. This minimizes the likelihood of missing follow-up or rehospitalization outside our health system and may affect generalizability of these data. It is unknown how our findings translate to patients receiving primary care at a greater distance or in practices that do not share an EHR with the discharging hospital. Additionally, this study used a proactive approach to post-AKI care coordination, with the study team facilitating recruitment and delivery of the education and arranging clinical and laboratory follow-up before hospital dismissal. Large-scale feasibility cannot be inferred from these data. Additional personnel, adaptive workflows, or automation may be necessary to facilitate scale and spread. A dedicated nurse navigator or care manager for AKI survivors would likely be of great benefit in extending the reach of programs such as ACT to more patients.14 Nevertheless, a primary care-based follow-up strategy is likely more feasible in these circumstances than a nephrologist-driven specialty clinic, as has been previously reported for AKI survivors. Finally, the gap between the number of patients screened (n=329) and approached for consent (n=110) is evidence of the heterogeneity of the AKI survivor population, challenges with electronic identification of AKI survivor candidates, and the complexity of care delivery. Although primary care-based follow-up of AKI survivors may offer significant benefits in select populations, a one-size-fits-all approach is unlikely to be successful. Follow-up pathways should be flexible to accommodate diversity in patients and clinical scenarios.
In conclusion, this pilot study demonstrated the feasibility of multidisciplinary AKI survivor follow-up beginning in primary care, with a higher 14- and 30-day cumulative incidence of laboratory and provider follow-up in ACT participants. Further studies are needed to determine the effect on important clinical and patient-centered outcomes and to identify strategies for optimizing collaborative care delivery between nephrologists and the primary care team.
Supplementary File (PDF)
Table S1: KDIGO criteria for acute kidney injury.
Table S2: KAMPS framework for components of kidney follow-up care.
Table S3: Analysis of kidney follow-up components excluding participants discharged to a skilled nursing facility.
Table S4: Analysis of kidney follow-up components by team responsible for postdischarge care coordination.
Figure 1. ACT program implementation phases. During the first phase ('Usual Care'), participants were identified by the electronic screening tool and passively followed, while the inpatient care team coordinated any education and outpatient follow-up as part of standard practice. In the second phase ('Education Alone'), standardized kidney health education was delivered to patients and caregivers before hospital dismissal. The third phase ('ACT') included standardized education and care coordination of kidney function laboratory tests and provider assessment within 14 days of discharge. During all phases, nephrology follow-up was coordinated at the discretion of the inpatient care team, consulting nephrologists, or primary care provider.
Figure 3. Time to provider (PCP or nephrologist) and laboratory (SCr and urine study) follow-up across the 3 groups. A significantly greater proportion of patients achieved provider and laboratory follow-up in the ACT group (P < 0.001).
Table 2. Participants' Completion of ACT Program Components in the Intention-to-Treat ACT Group During 90-Day Follow-Up
Table 3. Kidney Follow-Up Components
Note: Data reported as n (%) for nominal/discrete data or median (IQR) for continuous data. Abbreviations: ACT, AKI in Care Transition; PCP, primary care provider.
a Cumulative incidence of provider (PCP or nephrologist) and laboratory (SCr and urine study) follow-up.
b Includes urinalysis with microscopy, urine dipstick, and urine albumin-to-creatinine ratio. Of the 42 urine evaluations performed across the groups within 90 days, 88% were urinalyses or urine dipsticks. In cases where results revealed an elevated protein osmolality ratio or hematuria on screening evaluation with a urinalysis with microscopy or urine dipstick, a repeat assessment and urine albumin-to-creatinine ratio were recommended within 3 months of discharge.
c Telehealth visit occurred in 1 and 4 participants in the Usual Care and Education Alone groups, respectively.
d Telehealth visit occurred in 1 participant in the ACT group.
Table 4. Clinical Outcomes
Note: Data reported as n (%) for nominal/discrete data or median (IQR) for continuous data. Abbreviations: ACT, AKI in Care Transition; ED, emergency department; eGFR, estimated glomerular filtration rate.
a mL/min/1.73 m2
Table 5. Patterns of Medication Use
Point of care prehospital ultrasound in Basic Emergency Services in Portugal
Abstract
Background and Aims: The Point of Care Ultrasound and Point-of-Care Ultrasound in Resource-Limited Settings are differentiated diagnostic methods using ultrasound, essential in urgent patient screening, allowing better guidance in the diagnostic process and therapeutic approach. This study intends to observe the impact of these techniques in two Basic Emergency Services (SUB) in Portugal.
Methods: A longitudinal study was carried out in two remote locations in Portugal (SUB N and SUB S). Data were collected by trained radiographers in each location, and a total of 972 exams were considered. Imaging findings were documented by exam type, exam normality, and the resolution after the exam. χ2 and Cramer's V tests were performed to check for significant correlations between the variables.
Results: Regarding the type of echographic findings, 289 (29.7%) were considered normal, 628 (64.6%) were classified as abnormal, and 55 (5.7%) were considered inconclusive. As for the type of resolution, 58% had local resolution, 24% were referred to a hospital emergency service, and 18% were referred to ambulatory care. Regarding the Location versus Resolution after exam versus Findings variables, a stronger statistically significant association was verified for the exams considered "Abnormal" (Cramer's V = 0.414; p < 0.001). In the variables Location versus Findings versus Resolution after exam, a stronger statistical significance was verified for "Referral to Ambulatory" (Cramer's V = 0.443; p < 0.001), although Referral to Hospital (Cramer's V = 0.252; p = 0.003) and Local Resolution (Cramer's V = 0.252; p < 0.001) also had a moderate association strength.
Conclusion: Ultrasonography is a useful diagnostic tool for patient screening, having an influence on patient management in remote settings. Given the limited literature in Portugal about this matter, further research and literature will be needed to support and complement the results of this study.
| INTRODUCTION
Ultrasound is a differentiated and multidisciplinary diagnostic tool that is essential in urgent patient screening, allowing better guidance in the diagnostic process and the initial therapeutic approach in a faster and more reliable way.1 Point of Care Ultrasonography (POCUS) in the prehospital setting is a protocol used worldwide.1-3 Major medical specialties and a considerable part of emergency flowcharts include POCUS or other ultrasound protocols, both for physicians and nonmedical professionals.4-8 The Point-of-Care Ultrasound in Resource-Limited Settings (PURLS)9 is described in Ugandan hospitals by Stolz,10 as well as in remote settings, for instance, in a study by Henwood.11 Portugal, despite being a small country, has great geographic dispersion, so there was a need to create, in 2008, the so-called Basic Emergency Services (SUB) to tackle the asymmetry between urban and rural areas in health care emergency services delivery. It is in this context that ultrasound implementation as a screening tool was essential in remote settings.12-18 Ultrasound proved to be useful for patient screening, relieving hospital admissions and keeping geography from being an obstacle to quality care delivery.1,3,6,19 The literature about ultrasound in remote contexts is extensive, both for medical and nonmedical personnel, and recognizes the extremely important role of its application in extra-hospital and prehospital settings.2,6,14,15,20-27 There are remote places (as we will discuss in this study) with a lack of resources, where the use of ultrasound has shown very positive effects on patient management, without interference with the work of the other medical specialties. We highlight a study by Biegler,6 where nurses were trained to perform lung ultrasound and reports, with instructions and guidance given remotely. Another study, by Léger,3 describes that the majority of emergency units in Québec (95%) used POCUS, which was extremely useful in the clinical response, allowing good health outcomes and savings for the public treasury, namely in interhospital transfers, avoiding late diagnoses, and promoting easier access to emergency health care.
As far as Radiographers/Sonographers are concerned, the reality is no different: their progressive and fast evolution at the academic level has enhanced their ability to perform more complex imaging exams, namely ultrasonography.28-33 The European Society of Radiology report, corroborated by the European Federation of Radiographer Societies, described that there are often hospitals and clinics in Europe where specialized Radiographers perform ultrasound examinations and pre-reports, releasing radiologists for more specialized tasks.30,34 The main goal of this study is to verify whether the patient management classification (Normal, Abnormal, or Inconclusive) could influence the type of resolution (Local, Ambulatory, or Hospital emergency). This study was not intended to assess the accuracy of the diagnosis. Data collection was carried out by a single Radiographer in each location, because they were the only ones with specific and differentiated ultrasound training.
Data were collected and registered by the main investigator on a common data file built for that purpose. The Radiographers, after performing and analyzing the exams with the prescriber physician, classified the exams as "Normal," "Abnormal," or "Inconclusive" according to the Table 1 criteria and then registered the type of referral given to the patient by the Basic Emergency Center (Local, Ambulatory, or Hospital emergency).
In the first phase of this study, a descriptive analysis (percentages and frequencies) of the data was made, and the main differences between the two locations were identified.
| Ultrasound protocols covered in the study
The acquisition of echographic images followed specific and systematic protocols to ensure correct coverage of what is intended to be seen in each exam. The description of these protocols and the respective clinical indications can be seen in Table 2.
| RESULTS
The total number of exams considered in this study was 972: 610 (62.8%) from SUB N and 362 (37.2%) from SUB S. Of these, 554 (57%) patients were male and 418 (43%) were female, with an average age of 55.2 years.
In relation to the ultrasound findings (Figure 2), 289 (29.7%) were considered normal, 628 (64.6%) were classified as abnormal, and 55 (5.7%) were considered inconclusive. Regarding the type of resolution after the exam (Figure 3), a large percentage of exams (58%) ended up having a local resolution in both SUBs, clearly ahead of the 24% of referrals to hospital emergency and 18% to ambulatory care/follow-up.
The type of resolution by exam type is presented in Figure 4. To analyze the relationship between variables, the χ2 test was performed. First, only two variables were compared at a time: for instance, the variable "Type of Exam" was compared with the variable "Type of Resolution after exam," the variable "Location" with "Resolution after exam," the variable "Findings" with "Type of Resolution after exam," and the variable "Study location" with the variable "Findings," also checking the Cramer's V value and its respective significance,35 as shown in Table 3 below.
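As a sketch of how the χ2 test and Cramer's V effect size can be computed (a generic implementation, not the authors' software output), Cramer's V for an r x c table is sqrt(χ2 / (n * (min(r, c) - 1))). The example counts below are illustrative values consistent with the overall resolution percentages, not the study's actual cross-tabulation.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramer's V effect size and chi-square p-value for an r x c table."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, _ = chi2_contingency(table)
    n = table.sum()
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return v, p

# Hypothetical 2x3 table: rows = Location (SUB N, SUB S),
# columns = Resolution (Local, Hospital, Ambulatory); counts are illustrative.
example = [[360, 145, 105],
           [204,  88,  70]]
v, p = cramers_v(example)
print(f"Cramer's V = {v:.3f}, p = {p:.3g}")
```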
After this initial analysis between two variables, χ2 tests were carried out stratifying the variables by Location, Findings, and Resolution after exam (Table 3). Despite all the regional and context differences, the classification of exams reveals a lot of homogeneity, which means that these variables are independent. In other words, it was indifferent whether a given type of exam was done in SUB N or SUB S.
Regarding the inferential statistics between variables (Table 3), a statistically significant correlation was verified between the aggregate variables Exam versus Resolution after exam versus Location, which reinforces that, regardless of the location (radiographer, context limitations, and patient characteristics), there seems to be a homogeneous approach to and interpretation of the ultrasound exams.
Reinforcing this thesis, the Cramer's V value was higher for Exam versus Resolution after exam (0.317; p < 0.001), followed by Findings versus Resolution after exam for SUB N (0.320). Considering the relationship between the variables Location versus Resolution after exam versus Findings, a stronger statistically significant association was verified for the exams considered "Abnormal" (Cramer's V = 0.414; p < 0.001). This may indicate that, regardless of the location, the type of resolution is strongly influenced when the findings are considered abnormal. This means that exams considered abnormal seem to have more "weight" in a decision to manage the patient in a certain direction.
Regarding the relationship between the variables Findings versus Resolution after exam versus Location, the influence of the exam findings on the type of resolution after the exam was verified (0.263; p < 0.001). This is also valid, individually, for each of the study locations (0.276; p < 0.001 and 0.268; p < 0.001 for SUB N and SUB S, respectively).
This highlights the contribution of the exam findings to a presumptive diagnosis and the specific type of referral needed.
Although the methodology was different, this trend was already verified in the studies by Groen.12
| Limitations of the study
There were some limitations to this study that must be considered.
| CONCLUSION
Ultrasound in rural and prehospital settings with limited resources, as in SUB S and SUB N, has proved to be a very important and differentiated imaging diagnostic tool, allowing for better guidance in the diagnostic process and in the initial approach to patient management.
In this study, ultrasound proved to be a very resolutive tool in remote contexts due to the low percentage of inconclusive exams observed, both in SUB N (5.7%) and in SUB S (5.5%), a fact that allows us to predict a high utility for the diagnostic contribution of 94.3% and 94.5%, respectively. The abnormal-to-normal ratio for
CONFLICT OF INTEREST
The authors declare no conflict of interest.
TRANSPARENCY STATEMENT
The lead author Manuel José Cruz Duarte Lobo affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
DATA AVAILABILITY STATEMENT
The anonymized data that support the findings of this study are available from the corresponding author upon reasonable request.
ETHICS STATEMENT
All exams were prescribed by physicians in an emergency context. In this context, some patients were unable to sign the informed consent due to their health status. No patient or institutional data were registered, in accordance with the general data protection law. The main objective was to try to prove the importance and usefulness of these techniques in remote contexts. This study followed the ethical standards of scientific investigation, including the Declaration of Helsinki and the national general data protection legislation.
Vitamin D Signaling in Psoriasis: Pathogenesis and Therapy
Psoriasis is a systemic, chronic, immune-mediated disease that affects approximately 2-3% of the world's population. The etiology and pathophysiology of psoriasis are still unknown, but the activation of the adaptive immune system, with the main role played by T-cells, is key in psoriasis pathogenesis. The modulation of the local neuroendocrine system, with the downregulation of pro-inflammatory and the upregulation of anti-inflammatory messengers, represents a promising adjuvant treatment in psoriasis therapies. Vitamin D receptors and vitamin D-mediated signaling pathways function in the skin and are essential in maintaining skin homeostasis. The active forms of vitamin D act as powerful immunomodulators of the clinical response in psoriatic patients and represent effective and safe adjuvant treatments for psoriasis, even when high doses of vitamin D are administered. The phototherapy of psoriasis, especially UVB-based, changes the serum level of 25(OH)D, but the correlation between 25(OH)D changes and psoriasis improvement needs more clinical trials, since contradictory data have been published. Vitamin D derivatives can improve the efficacy of psoriasis phototherapy without inducing adverse side effects. Anti-psoriatic treatment could include non-calcemic CYP11A1-derived vitamin D hydroxyderivatives that would act on the VDR, act as inverse agonists on RORs, or activate alternative nuclear receptors including AhR and LXRs. In conclusion, vitamin D signaling can play an important role in the natural history of psoriasis. Selective targeting of the proper nuclear receptors could represent potential treatment options in psoriasis.
Psoriasis: An Overview of the Clinical Problem
Psoriasis is a systemic, chronic, immune-mediated disease that is characterized by raised patches on the skin and affects approximately 2-3% of the world's population [1]. The most common type of psoriasis is plaque psoriasis, which accounts for about 80-90% of cases. The other types include pustular psoriasis, which is more common in adults; guttate psoriasis, which is common in children; inverse psoriasis; and erythrodermic psoriasis. Psoriatic lesions are usually found on the scalp, skin folds, hands, feet, nails, and genitals [1]. Psoriasis usually manifests with cutaneous symptoms such as red, dry skin with raised, inflamed patches, silver scales or plaques, itch, thick, pitted nails, and swelling [1,2]. The psoriatic plaques are formed as an effect of epidermal hyperplasia resulting from enhanced proliferation and disturbed differentiation of keratinocytes [3]. These manifestations are related to the inflammatory process, since psoriasis is an immune-mediated disease that is caused by the dysfunction of the immune system that results in inflammation [4-6]. The etiology and pathophysiology of psoriasis are still unknown, but the activation of the adaptive immune system with the main role of T-cells is key in psoriasis pathogenesis [2]. It is suggested that the impaired balance between T helper Type 1 (Th1) and Type 2 (Th2) cells, as well as cytokine production, are the top causative factors of psoriasis [2,4,7-9]. In psoriatic patients, there is a shift towards the Th1 phenotype, which is characterized by the increased expression of IL-2, IFN-gamma, IL-12, and T-bet [7,8], and an attenuation of the Th2 phenotype, with decreased expression of GATA3 and IL-4 [8]. In addition, the increased expression of IL-23 results in increased levels of Th17 and Th22 lymphocytes and their cytokines (IL-6, IL-20, IL-17, and IL-22) [2,9-11]. The production of Th17 cytokines in psoriasis is also related to the impaired function of regulatory T-cells (Tregs) [9,12-15]. Apart from T-cells, the other cells that are linked to psoriasis pathogenesis are the following: innate lymphoid cells, dendritic cells, mast cells, monocytes and macrophages, neutrophils, natural killers, keratinocytes, and many others (reviewed in [3,16,17]). Some data indicate dendritic cell activation that is mediated by peptide LL-37 and self-DNA, resulting in interferon production, as a trigger for psoriasis pathogenesis [18,19]. Dendritic cells also promote the Th1 phenotype and the production of Th1 cytokines [19]. Figure 1 presents the major effector cells and signaling pathways in the immunopathogenesis of psoriasis. The immunopathogenesis of psoriasis involves a complex inflammatory cascade, which is initially triggered by innate immune cells (keratinocytes, dendritic cells, NKT cells, macrophages, fibroblasts, γδ T-cells) that are activated by external (trauma, UV, microorganisms, drugs, smoking, diet and obesity, etc.) or internal factors (stress, autoantigens, DNA/RNA AMP complex, etc.) in genetically predisposed individuals. Cytokines that are produced by innate cells activate myeloid dendritic cells to increase the production of cytokines that are involved in the differentiation of lymphocytes to the main adaptive immune cells, Th1, Th22, and Th17, which play the central role in the disease pathogenesis.
Cytokines that are produced by these cells, which include TNFα, IL-22, and IL-17A/F, lead to keratinocyte proliferation, neoangiogenesis, chemokine production, neutrophil and CD8+ cell migration to the epidermis, and a chronic inflammatory process. For this reason, biologic drugs targeting ILs such as IL-17 and IL-23, and TNFα, are the mainstay in the management of severe psoriasis [20,21].

The exposure of keratinocytes to ultraviolet B radiation initiates the photochemical transformation of 7-dehydrocholesterol (7DHC) to the pro-hormone vitamin D3 [57-59]. Its activation requires two steps: hydroxylations at C25 (by CYP2R1 and CYP27A1) and at C1α (by CYP27B1) to produce 1,25(OH)2D3. The cutaneous synthesis supplies more than 90% of the body's requirement [57,59,60]. 1,25(OH)2D3, in addition to regulating calcium homeostasis, has important pleiotropic effects affecting almost all body functions. This action is mediated through interactions with the vitamin D receptor (VDR), belonging to a subfamily of nuclear receptors [57-59,61-63]. VDR heterodimerizes with the retinoid X receptor (RXR) and functions as a ligand-activated transcription factor after binding to the promoter regions of the VDR responsive element (VDRE) to influence the expression of responsive genes [58,63,64]. Not only the expression of the VDR receptor determines the responsiveness of the cells to vitamin D, but also its polymorphisms: it was shown that the F and T alleles of Fok1 and Taq1 have been associated with increased VDR activity [65]. There is growing evidence that the induction of the transcriptional activity of VDR by 1,25(OH)2D3 does not fully explain the complexity and variety of cellular responses to this multipotent hormone. Thus, a so-called alternative, non-genomic response has been described. It was suggested that this rapid response requires a membrane receptor for 1,25(OH)2D3 and the subsequent activation of secondary messengers such as cAMP or calcium (recently reviewed in [66]). Protein disulfide isomerase (PDIA3), also known as pER57 or 1,25D3-MARRS (membrane-associated, rapid response steroid-binding), is the most studied candidate for the membrane vitamin D receptor, although a detailed mechanism of interaction between 1,25(OH)2D3 and PDIA3 is not fully understood [66-68]. Recent studies also provided evidence that mitochondria could be a direct target of 1,25(OH)2D3 [69,70]. This observation may support previous studies showing the protection of mitochondria by 1,25(OH)2D3 through the modulation of the levels of oxidative stress (e.g., mitochondrial membrane potential) and the expression of genes that are involved in the response to reactive oxygen species [71-73]. Interestingly, it seems that the mitochondrial localization of VDR protects mitochondria from oxidative and nitrosative stress [72]. On the other hand, in vitro results suggested that the mitoprotective effects may depend on the concentration of active analogs of vitamin D and the time of incubation, or are cell-type-specific [71,74].
The mitoprotective effect of vitamin D and its analogs was also observed in skin cells that were subjected to ultraviolet light [70,73,75-79]. Interestingly, the mitochondria could also be targets for the anticancer and/or anti-inflammatory activities of vitamin D and its analogs, as well as derivatives of lumisterol or other related steroidal analogs [76]. The malfunction of mitochondria and the excessive production of ROS contribute to the inflammation that is characteristic of psoriasis [80]. Furthermore, the impairment of the mitochondrial-induced apoptotic pathway may also result in hyperproliferation of keratinocytes [81]. Thus, it seems that the direct, non-genomic impact of 1,25(OH)2D3 on cellular processes, including mitochondrial function, may contribute to the anti-psoriatic activities of this powerful hormone [66].
Our previous study showed that vitamin D, in addition to activation by 25- and 1α-hydroxylations, can also be activated by the rate-limiting enzyme of steroidogenesis, CYP11A1 [82-85], with the generation of 20(OH)D3 as the first and main product of this pathway. 20(OH)D3 can be further hydroxylated by other enzymes, together with downstream metabolites [83,86-88]. This pathway functions in vivo in humans and animals and can act on a local and systemic level [82,83,86,89]. 20(OH)D3 and its metabolites without an OH at C1α can act as biased agonists on the VDR, as indicated by the lack of calcemic effects and the poor activation of CYP24A1 [86,90,91], and by studies on ligand-induced VDR translocation to the nucleus and molecular modeling [86,92-94], as well as crystallography using the ligand-binding domain of the VDR [95,96]. In addition, novel pathways of lumisterol [97,98] and tachysterol [99] activation have been discovered.
Recently, we also showed that 20(OH)D3, 20,23(OH)2D3, and their metabolites generated by the alternative pathway can act as inverse agonists on the retinoic acid-related orphan receptors RORα and RORγ [100-102], which belong to the ROR subfamily of nuclear receptors and play a crucial role in a variety of physiological processes, including immune functions [103-105]. In addition, lumisterol hydroxyderivatives act as inverse agonists on RORs [97]. Importantly, the aryl hydrocarbon receptor (AhR) was identified as an alternative receptor for vitamin D hydroxyderivatives [102,106,107]. In addition, hydroxyderivatives of vitamin D and lumisterol compounds act as ligands on liver X receptors (LXR) α and β [102,108]. Thus, there is more than one bioactive form of vitamin D, and several nuclear receptors in addition to the VDR are activated by these compounds [36]. Furthermore, it has been documented that lumisterol, a photoderivative of vitamin D3, can be activated to biologically active hydroxyderivatives that act on LXRs and RORs to exert their phenotypic effects [97,105,108,109]. Vitamin D and lumisterol hydroxyderivatives can also interact with SARS-CoV-2 replication machinery enzymes [110] and with angiotensin-converting enzyme 2 (ACE2) and TMPRSS2 [111], and their protective role in COVID-19 has been discussed [112].
Vitamin D and Epidermal Keratinocytes
Keratinocytes express VDR and RORs and can produce and metabolize 1,25(OH)2D3 [101,113]. Skin cells also express the CYP enzymes that metabolize the vitamin D3 pro-hormone to its biologically active form [45,75,114-117] (Figure 2).

VDR Expression and VDR-Mediated Signaling

Hosomi et al. [122] first reported the stimulatory effects of 1,25(OH)2D3 on keratinocyte differentiation: inhibiting DNA synthesis, thus decreasing the number of cells; increasing the density and the size, with differentiation into squamous and enucleated cells; and stimulating the formation of a cornified envelope. These findings were confirmed by others [123-125]. The effects of VDR-mediated pathways depend on coactivators and corepressors [126,127]. Vitamin D, acting through VDR and the DRIP205, SRC2, and SRC3 coactivators, can stimulate the keratinocyte differentiation markers: the expression of involucrin, loricrin, filaggrin, and keratins, and transglutaminase activity [128]. Bikle et al. [127] reported that the main coactivators are the vitamin D interacting protein (DRIP) complex, which is involved mainly in the proliferation of keratinocytes, and the steroid receptor coactivator (SRC) complexes, which are involved in keratinocyte differentiation. They also found that DRIP205 plays a role in the regulation of β-catenin pathways, including cyclin D1 and Gli1 expression, and that SRC3 can regulate lipid synthesis and the permeability barrier formation that is related to differentiation. The effects of vitamin D on keratinocytes could be dependent on other culture conditions. Gniadecki et al. reported that in cultures with 1,25(OH)2D3 at concentrations from 10−11 to 10−6 M and 0.15 mM calcium, in the absence or with low levels (0.1 ng/mL) of epidermal growth factor, the keratinocyte cell cycle was blocked in the late G1 phase, while in cultures of keratinocytes with 1,25(OH)2D3 at concentrations of 10−11 to 10−9 M and a high extracellular calcium concentration (1.8 mM), stimulated cell growth was observed (increasing the proportion of cells entering the S phase) [129]. 20(OH)D3 inhibits proliferation, causes G1/G0 and G2/M arrest, stimulates differentiation of keratinocytes [125], and inhibits NFκB [130]. Correspondingly, 20S(OH)D3 acts through VDR (stimulating its expression), inhibits growth, and inhibits DNA synthesis. It also stimulates the expression of involucrin, a differentiation marker for keratinocytes [125,131]. Other CYP11A1-derived vitamin D derivatives show similar anti-proliferative and pro-differentiation properties [88,93,102,132-135]. Similarly, vitamin D derivatives (resulting from CYP11A1 activity) protect from UV-induced DNA damage by the activation of Nrf2 and p53 defense mechanisms [73,77,134,136] and have shown anti-tumor activity against epidermal cancers [75,137]. Vitamin D derivatives have also shown antiproliferative activity on melanocytes [138,139] and on fibroblasts, with anti-fibrogenic actions [140-143] that are dependent on RORγ [144]. In addition, vitamin D derivatives increased the expression of the hypothalamic-pituitary-adrenal axis neuropeptides CRF, urocortins, and POMC, and of their receptors CRFR1, CRFR2, MC1R, MC2R, MC3R, and MC4R, in human epidermal keratinocytes [145]. Thus, vitamin D derivatives, VDR, and its coactivators are important for epidermal differentiation and maintenance. Finally, 20(OH)D3 has recently been shown to have therapeutic effects in in vivo models of rheumatoid arthritis (RA) [146,147].
The Active Forms of Vitamin D Act as Powerful Immunomodulators
The active forms of vitamin D, acting through VDR, modulate the maturation, activity, and functions of monocytes, macrophages, T- and B-cells, and dendritic cells (DCs) [148]. In general, vitamin D promotes the innate immune responses by enhancing the phagocytic functions of immune cells, while inhibiting the adaptive immune system. The activation of monocytes and macrophages by biologically active vitamin D derivatives results in the increased production of cathelicidin antimicrobial peptide (CAMP) and its processing to LL-37. In antigen-presenting dendritic cells, vitamin D decreases the maturation, the expression of MHC Class II molecules, co-stimulatory molecules (CD40, CD80, and CD86), and IL-12, and increases IL-10 production (reviewed in [149]). Vitamin D decreases the development of human natural killer (NK) cells and inhibits the cytotoxicity and cytokine production of developed NK cells, while in hematopoietic stem cells it stimulates the expression of monocyte markers (C/EBPα and CD14) [150]. Vitamin D acts as an immunosuppressive molecule, decreasing the proliferation and functions of T lymphocytes, mediated by inhibiting IL-2 and IFNγ production [151,152]. Vitamin D inhibits polarization towards Th1 cells [153,154] and the production of pro-inflammatory Th1-related cytokines by T lymphocytes, such as IL-2, IFNγ, and TNFα [153,155-159]. The development of Th2 cells and a strong polarization toward a Th2 profile are also enhanced by vitamin D through IL-4-dependent pathways; the neutralization of IL-4 abolishes vitamin D3-induced Th2 polarization independently of IFN-γ [160]. In addition, vitamin D3 enhances the expression of the Th2-specific transcription factors GATA-3 and c-maf in developing Th cells [160]. Vitamin D promotes the immunosuppressive response by enhancing the activity of CD4+CD25+ cells, mostly expressing FoxP3, without changing the number of CD4+CD25+FoxP3+ cells [161]. Furthermore, vitamin D decreased pro-inflammatory markers (TNFα, MPO activity, and IL-1β), downregulated the expression of T-bet (Th1 transcription factor), and upregulated the expression of GATA3 and IL-4 [162].

The inverse correlation between vitamin D and the pathogenesis of autoimmune diseases, including psoriasis, has been published [163]. Vitamin D regulates the function of the innate and adaptive immune response, thus representing a potential protectant and therapeutic for psoriasis. The effects of vitamin D on the immune system in psoriasis are complex. For example, vitamin D promotes the differentiation of naïve T-cells into T regulatory cells, thus enhancing the production of anti-inflammatory cytokines (TGF-β, IL-4, and IL-10) and suppressing the production of pro-inflammatory cytokines (TNFα, IFNγ, IL-2, IL-17A, and IL-21) (reviewed in [148]). Calcipotriol decreased the frequency of CD8+IL-17+ T-cells in psoriatic lesions [164]. The activity of DCs, DC-mediated induction of T-cell proliferation, and Th1 cytokine IFNγ production are suppressed by vitamin D [165]. Vitamin D inhibited the IL-17-induced expression of IL-1Ra, IL-36α, IL-36β, and IL-36γ, and the TNF-α-induced expression of IL-1Ra, IL-36Ra, IL-36α, IL-36γ, and beta-defensin 2 (HBD2), in human keratinocytes [166]. Besides, in psoriasis, calcipotriol decreased the Th17 cytokine-mediated pro-inflammatory S100 proteins psoriasin (S100A7) and koebnerisin (S100A15) [167], as well as HBD2 and HBD3, IL-17A, IL-17F, and IL-8 production. The inhibition of IL-17A-induced HBD2 expression was mediated by increasing IκB-α protein and the inhibition of NF-κB signaling, while the VDR and MEK/ERK signaling pathways were activated and involved in the induction of cathelicidin [168]. Interestingly, vitamin D also stimulates the expression of IL-33 and its receptor ST2 [169], and IL-33 was shown to alleviate Th17-mediated psoriatic inflammation [170]. Thus, the anti-inflammatory activity of vitamin D is an important factor in the pathogenesis and management of psoriasis.
An inverse correlation between vitamin D and the pathogenesis of autoimmune diseases, including psoriasis, has been reported [163]. Vitamin D regulates the function of the innate and adaptive immune response, thus representing a potential protectant and therapeutic for psoriasis. The effects of vitamin D on the immune system in psoriasis are complex. For example, vitamin D promotes the differentiation of naïve T-cells into T regulatory cells, thus enhancing the production of anti-inflammatory cytokines (TGF-β, IL-4, and IL-10) and suppressing the production of pro-inflammatory cytokines (TNFα, IFNγ, IL-2, IL-17A, and IL-21) (reviewed in [148]). Calcipotriol decreased the frequency of CD8 + IL-17 + T-cells in psoriatic lesions [164]. The activity of DCs, DC-mediated induction of T-cell proliferation, and Th1 cytokine IFNγ production are suppressed by vitamin D [165]. Vitamin D inhibited the IL-17-induced expression of IL-1Ra, IL-36α, IL-36β, and IL-36γ, and the TNF-α-induced expression of IL-1Ra, IL-36Ra, IL-36α, IL-36γ, and beta-defensin 2 (HBD2) in human keratinocytes [166]. In addition, in psoriasis calcipotriol decreased the Th17 cytokine-mediated production of the pro-inflammatory S100 proteins psoriasin (S100A7) and koebnerisin (S100A15) [167], as well as the production of HBD2, HBD3, IL-17A, IL-17F, and IL-8. The inhibition of IL-17A-induced HBD2 expression was mediated by an increase in IκB-α protein and the inhibition of NF-κB signaling, whereas VDR and MEK/ERK signaling pathways were activated and involved in the induction of cathelicidin [168]. Interestingly, vitamin D also stimulates the expression of IL-33 and its receptor ST2 [169], and IL-33 was shown to alleviate Th17-mediated psoriatic inflammation [170]. Thus, the anti-inflammatory activity of vitamin D is an important factor in the pathogenesis and management of psoriasis.
Vitamin D Serum Level in Psoriatic Patients
The results of several epidemiologic studies have identified a correlation between the vitamin D serum level and the likelihood of development or progression of some diseases in patients with low or deficient levels of vitamin D [112,[171][172][173][174]. Similar studies have been published for psoriasis [175][176][177][178]. Morimoto and co-workers showed that 1,25-dihydroxyvitamin D (1,25(OH) 2 D), but not 25-hydroxyvitamin D (25(OH)D), was inversely correlated with the area and severity index in psoriasis patients. In these patients, the level of vitamin D was within the normal range [179]. Tajjour et al. reported decreased 25(OH)D levels in psoriasis patients and a negative correlation with the severity of the disease [180]. Bergler-Czop and Brzezinska-Wcislo reported a lower level of 25(OH)D in the psoriasis group than in the control group, with a deficient level in psoriasis patients and an insufficient level in controls [181]. Furthermore, the level of 25(OH)D was negatively correlated with PASI score and the duration of psoriasis [181,182]. Lower levels of 25(OH)D were also found in case-control studies [183][184][185]. In addition, Filoni et al. found a reduced level compared with the control group and a correlation between vitamin D levels and psoriasis duration [175]. Similarly, Grassi and co-workers, in a cross-sectional study, observed lower free and total vitamin D serum levels in chronic plaque psoriasis patients than in controls [176]. The relationship between vitamin D and psoriasis was also confirmed in a meta-analysis [177]. However, the nature of this correlation remains unclear: it is not known whether a low vitamin D level is a causative factor for psoriasis or an effect of the disease, and further studies are needed to elucidate its role in the pathogenesis of psoriasis. In contrast, some reports showed no differences in the 25(OH)D serum level between psoriasis and control groups [186]. Mattozzi et al. found a positive correlation between the vitamin D serum level and Tregs, and suggested that a decreased level of vitamin D may promote the activity of Th1, Th17, and Th22 cells [14]. They also found a negative correlation between PASI score and vitamin D level, but only Tregs were significantly related to vitamin D in a multiple regression analysis [14]. The decreased level of vitamin D is also negatively correlated with the inflammatory activation marker C-reactive protein [183]. A meta-analysis of 10 published reports, covering 571 psoriatic patients and 496 controls, confirmed the reduced 25(OH)D level in the disease group and a negative correlation between circulating 25(OH)D levels and PASI score [187]. Vitamin D deficiency has thus been suggested as one of the environmental factors involved in psoriasis as an immune-mediated disorder, and some studies have confirmed that vitamin D deficiency can be found in psoriatic patients and is associated with the severity of the disease.
On the Link between UVB Phototherapy, Serum 25(OH)D Levels and Psoriasis Natural History
The synthesis of vitamin D in the skin starts with the conversion of 7-dehydrocholesterol to previtamin D 3 after the absorption of UVB. The phototherapies of psoriasis are based on UVB (290-320 nm), narrowband UVB (NB-UVB) (311 nm), excimer laser (308 nm), UVA1 (340-400 nm), psoralen plus UVA (PUVA, 320-400 nm), and others (reviewed in [188]). Studies have revealed that NB-UVB therapy affects the systemic serum level of vitamin D in psoriasis patients. In psoriasis patients treated with NB-UVB, the vitamin D serum level increased from insufficient to the normal range, but showed no relationship with PASI score and/or SCORAD improvement [189][190][191]. Similarly, Ryan et al. noted that the serum 25(OH)D level increased from a median of 23 ng/mL to 51 ng/mL at the end of NB-UVB therapy, with no correlation with treatment response [192]. Ala-Houhala et al. published data comparing NB-UVB with oral supplementation of cholecalciferol (20 µg daily); 25(OH)D levels increased similarly in psoriasis patients and healthy controls [193]. NB-UVB exposure did not change the expression of CYP27A1 and CYP27B1 in psoriasis patients, while in healthy controls their expression decreased. In healthy controls, cathelicidin expression decreased and HBD2 increased slightly, whereas in psoriasis patients cathelicidin expression did not change and HBD2 expression decreased [193]. The increase of 25(OH)D after UVB-based psoriasis therapy is also observed after UVA/NB-UVB treatment [194,195]. A similar trend for 25(OH)D was found for broadband UVB (BB-UVB) therapy, with BB-UVB showing the strongest effect [196]. It should be noted that UVA1 therapy decreased the 25(OH)D serum level from 21.9 to 19.0 ng/mL [194]. The increase of the vitamin D serum level after NB-UVB treatment is accompanied by changes in antimicrobial peptide and cytokine expression (increased expression of cathelicidin and decreased levels of human beta-defensin 2) [190], but these changes are related to the season of irradiation [197]. Thus, the balance between vitamin D and the expression of antimicrobial peptides could be involved in the therapeutic effects of NB-UVB. In contrast, Vandicas et al. [198] noted a higher level of 25(OH)D and vitamin D binding protein (VDBP), which acts as a transporter and reservoir for vitamin D and its metabolites [199,200], in psoriasis patients. In addition, as in other studies, 25(OH)D increased after UVB treatment, while VDBP did not change and did not correlate with 25(OH)D levels. Thus, the authors suggested that VDBP could be a marker of systemic inflammation. However, the immunosuppressive effects of UVB can also be secondary to the activation of central [201][202][203][204] or local neuroendocrine networks [26,28,35,205,206].
Local Vitamin D Endocrine System in Psoriasis
Vitamin D acts mainly through VDR [101,119,207], and disturbances of its expression have been observed in several diseases (for example, [208][209][210][211]). The expression of vitamin D receptors has been found in psoriatic cells (Figure 2) [212]. Since vitamin D can regulate the proliferation and growth of keratinocytes, impaired VDR expression in epidermal skin cells could be involved in the pathogenesis of psoriasis [119,[213][214][215]. The first reports did not show changes in the vitamin D system and reported comparable VDR levels and receptor binding to DNA in psoriatic and normal skin [216]. Milde et al. reported no differences in VDR expression between normal and non-lesional skin, but found stronger VDR expression in psoriatic lesions compared to non-lesional skin [217]. A very recent study showed that VDR is expressed in psoriatic skin, mainly with strong expression, especially in the basal layer (Chandra, Roesyanto-Mahadi et al., 2020). On the other hand, Kim et al. [218] found reduced VDR expression in psoriatic and perilesional skin compared with normal skin. They also reported a negative correlation between Toll-like receptor 2 (TLR2) and VDR expression in psoriasis, a negative correlation between TLR2 and VDR expression in the psoriatic skin of vitamin D-deficient groups, but a positive correlation in the psoriatic skin of a vitamin D-sufficient group [218]. The authors concluded that psoriasis patients, according to their vitamin D serum level, could be treated differently with therapies that modulate the TLR-VDR pathways. Similarly, Visconti et al. also observed reduced VDR expression in psoriasis, with preserved expression in deeper layers of the epidermis [219]. The contradictory data on VDR expression could result from methodological issues and the use of different anti-VDR antibodies. The potential pathogenic effects of disturbed VDR expression in keratinocytes could involve keratinocyte adhesion. Visconti et al. found significantly reduced expression of occludin and claudin 1 (proteins forming tight junctions) in psoriatic compared with normal skin. Furthermore, the percentages of claudin-1- and zonulin-1-positive cells correlated with the percentage of VDR-positive cells [219]. The authors suggested that VDR expression in the skin is essential to the preservation of skin integrity and homeostasis through the formation of tight junctions [219].
Not only changes in the expression of VDR but also VDR polymorphisms are probably linked to psoriasis. A meta-analysis of 11 studies revealed that the FokI and ApaI VDR polymorphisms are not linked to psoriasis risk, but the BsmI B variant shows a borderline association [220]. In Caucasians, the TaqI t variant was associated with reduced psoriasis risk [220]. Similarly, a meta-analysis of 16 studies showed that the VDR TaqI TT variant is related to a higher risk of psoriasis in Caucasians but not in Asians, whereas the VDR ApaI, BsmI, and FokI polymorphisms show no association with the disease [221]. On the other hand, the A-1012G VDR promoter polymorphism and FokI were identified as related to susceptibility to non-familial psoriasis [65]. A recent study showed that the TaaI/Cdx-2 GG genotype, related to the regulation of IL-17 and IL-23 expression, is more frequent in psoriasis patients [222].
The novel vitamin D derivatives 20(OH)D 3 and 20,23(OH) 2 D 3 can act as inverse agonists on RORα and RORγ, which are expressed in human skin, keratinocytes, fibroblasts, melanocytes, and other cells [101,135]. Previously, we reported elevated expression of RORγ in lymphocytes in psoriatic skin [212]. RORγ is a key factor involved in the differentiation of lymphocytes into IL-17-producing Th17 cells [223]. In addition, RORα and RORγ can regulate IL-17 expression [103,224]. 1,25(OH) 2 D 3 , 20(OH)D 3 and 20,23(OH) 2 D 3 inhibited the RORα- and RORγ-mediated activation of the IL17 promoter in a dose-dependent manner, as well as the production of IL-17 protein [101]. Since IL-17 and Th17 cells are crucial factors in psoriasis pathogenesis, they represent a molecular target [225,226]. A-9758, a selective RORγt inverse agonist, efficiently inhibited IL-17A release in in vitro and in vivo models [226]. The selective RORγt inhibitor Cpd A inhibited the transcriptional activity of human RORγt, the differentiation of Th17 cells, and the T-cell production of pro-inflammatory cytokines and Th17-associated genes, including IL17F, IL22, IL26, IL23R, and CCR6 [227]. Another molecule, SR1001, a synthetic RORα/γ inverse agonist, showed anti-inflammatory effects in mouse models of atopic dermatitis and acute irritant dermatitis (decreasing the expression of IL-13, IL-5, and IL-17A) and restored keratinocyte differentiation [228]. RORγt is also an important transcription factor regulating IL-22 in Th22 cells, which are related to inflammation and linked to the pathophysiology of psoriasis. 1,25(OH) 2 D 3 can regulate the expression of IL-22, since vitamin D response elements have been identified in the IL22 promoter [229].
Introduction to the Problem
Since vitamin D regulates immune system functions and the proliferation and differentiation of keratinocytes, as well as of other cell types, it has been effectively incorporated as an adjuvant treatment for psoriasis [230][231][232], although the precise mechanism of the therapeutic action of vitamin D is not fully known. The first reports showing the therapeutic potential of vitamin D were published almost 90 years ago [233,234], but the use of high doses of vitamin D was limited due to potential toxic effects. The next important step was made by Morimoto and colleagues, who published several papers confirming the therapeutic effects of orally or topically administered 1α(OH)D 3 or 1,25(OH) 2 D 3 [235][236][237][238][239]. A significant response of the lesional psoriatic skin to the treatment was observed in up to 85% of psoriasis cases [235][236][237][238][239]. The authors also showed inhibitory effects of analogues of 1,25(OH) 2 D 3 , but no effects of 1,25(OH) 2 D 3 itself, on cultured psoriatic fibroblasts [240]. MacLaughlin et al. also observed a partial response of psoriatic fibroblasts to 1,25(OH) 2 D 3 , but only at the higher tested doses (10 and 100 µM) [241]. Further studies confirmed the safety of long-term vitamin D-based psoriasis treatment [242][243][244] and enabled the development of effective anti-psoriatic treatments based on vitamin D derivatives, including 1,25(OH) 2 D 3 , calcipotriol, maxacalcitol, tacalcitol, hexafluoro-1,25(OH) 2 D, calcipotriene, and others (reviewed in [230,245]). It is accepted that the therapeutic effects of vitamin D and its derivatives require VDR expression, since the above-mentioned processes are mediated by this receptor, and that VDR polymorphisms could modulate responsiveness to the treatment. Recently, it has been reported that psoriasis patients with a PASI score lower than 3 and the rs7975232 CC genotype were much more susceptible to calcipotriol treatment [246]. Lesiak et al. did not find a correlation between the rs7975232 variant and treatment response, but they observed a relationship between different variants of TaaI/Cdx-2 and the effects of UVB-based treatment, as assessed by the analysis of cytokine expression (IL-17, IL-23, TNFα) [222]. The Taq1 VDR polymorphism (rs731236) was found to be a predictor of the duration of remission, with C allele homozygotes, associated with decreased VDR activity, showing a shorter remission duration than heterozygotes and T allele homozygotes [247]. The A-1012G promoter polymorphism and the F and T alleles of the Fok1 and Taq1 VDR polymorphisms have also been identified as positively associated with calcipotriol response: AA and TT genotypes and AAFF, AATT, and FFTT genotype combinations [65]. In Korean psoriasis patients, on the other hand, the BsmI and ApaI VDR polymorphisms correlated with responsiveness to calcipotriol [248].
It is well accepted that the topical application of vitamin D and/or its derivatives can improve psoriasis. However, psoriasis is a systemic disease; thus, systemic treatment should also be considered.
Oral Treatment with Vitamin D and Its Derivatives
The first vitamin D derivative used for oral psoriasis treatment was 1α(OH)D (1.0 µg/day for 6 months) [235,237,238]. Its therapeutic action could result from induced changes in keratin expression: Holland and co-workers observed lowered keratin 16 expression and keratin 2 overexpression after 1α(OH)D treatment [249]. Thus, 1α(OH)D is able to inhibit keratinocyte proliferation, promote differentiation, and exert anti-psoriatic effects. The oral administration of 2 µg/day of 1,25(OH) 2 D 3 in the pilot study of Huckins et al. also resulted in a significant improvement of psoriatic arthritis [250]. Supplementation with vitamin D (5000 IU/day for three months) significantly increased the vitamin D serum level and the expression of anti-inflammatory cytokines (IL-10, IL-5), and decreased the PASI score, the homocysteine plasma level, the expression of pro-inflammatory cytokines (IFN-γ, TNF-α, IL-1β, IL-6, IL-8, and IL-17), and high-sensitivity C-reactive protein [251]. Similarly, in a double-blind, randomized, placebo-controlled study, the oral administration of vitamin D 2 (60,000 IU once every 2 weeks for 6 months) resulted in an improved PASI score and an increase of the 25(OH)D serum level, with no signs of adverse effects. In addition, the serum level of 25(OH)D was negatively correlated with the PASI score [231].
The earlier studies used relatively low doses of vitamin D or its derivatives. However, Finamor et al. used 35,000 IU of vitamin D 3 per day for six months and observed a significant increase in 25(OH)D 3 and improvement of the PASI score, with no signs of toxicity: no changes in serum urea, creatinine, or calcium, and urinary calcium excretion remained within the normal range [252]. McCullough et al. studied the effects of long-term administration of high doses of vitamin D 3 (up to 50,000 IU) in patients affected by psoriasis and observed significant improvement with no toxicity or adverse effects related to vitamin D treatment [253]. These results indicate that high doses of vitamin D could be efficient and safe in psoriasis treatment.
Some results are inconclusive or contradictory. Ingram et al. studied the effects of vitamin D 3 at doses of 100,000 IU/month for 12 months (200,000 IU at baseline) (Australian New Zealand Clinical Trials Registry #12611000648921 [254]). They observed no changes in the PASI score, although the level of 25(OH)D increased. Additionally, an inverse correlation between a slight decrease of the PASI score and 25(OH)D (up to 125 nmol/L) was found. Similarly, Jarrett et al. [255] did not recommend the administration of vitamin D 3 (100,000 IU per month) to treat psoriasis, since no significant differences were observed between the supplemented and placebo groups. In a study by Prystowsky et al., orally administered calcitriol (0.5-2.0 µg/day) showed no additive effect on UVB phototherapy of psoriasis [256].
In summary, a meta-analysis of randomized controlled trials of oral vitamin D supplementation in psoriasis patients indicated an improvement in the PASI score; however, after Hartung-Knapp adjustment these effects were not significant [257]. Thus, more randomized controlled trials, especially with the use of vitamin D derivatives, are needed. Furthermore, topical administration appears to be more effective. A study by Gumowski-Sunek and co-workers found changes in calcium metabolism after the oral administration of calcitriol (1.5 µg/day), while the topical administration of calcipotriol at an equivalent dose (150 µg/day, with 1% absorption) did not change calcium metabolism [258].
Topical Treatment with Vitamin D and Its Derivatives
Topical treatment is a first-line therapy in patients with mild or moderate psoriasis [259]. Early reports by Morimoto et al. showed that the topical administration of 0.1 and 0.5 µg/day of 1,25(OH) 2 D 3 resulted in a significant improvement of psoriatic lesions with no symptoms of toxicity [239]. For topical psoriasis treatment, calcipotriol was introduced as an efficient and safe adjuvant as early as the late 1980s [260,261]. Treatment with calcipotriol resulted in reduced expression of IL-6, with no changes in TNFα expression [262]. The efficacy of UVB treatment of psoriasis combined with an ointment containing calcipotriol (50 µg/g, twice a day) was greater than that of UVB alone [263]. Calcipotriol-containing ointment (50 µg/g, twice a day) also improved psoriasis therapy with fumaric acid esters in a multicenter, randomized, double-blind, vehicle-controlled study [264], as well as psoriasis treatment with cyclosporine [265] and with acitretin [266]. Similarly, an ointment combining calcipotriene (0.005%) and betamethasone dipropionate (0.064%) improved the presentation of psoriasis [267]. Data from Pinter et al. showed the high effectiveness of a calcipotriene and betamethasone PAD™ Technology cream, with no adverse drug reactions, in two Phase 3, multicenter, randomized, investigator-blind studies [268]. Various clinical studies report the combination of calcipotriene and betamethasone dipropionate in a foam formulation to be the most effective treatment for mild psoriasis [269].
Similarly, a multicenter study of tacalcitol (4 µg/g) ointment applied once daily for 18 months showed its high effectiveness, safety, and good tolerance during long-term topical treatment, with no changes in calcitriol, calcium, or parathyroid hormone serum levels [270]. A multicenter prospective study with tacalcitol (20 µg/g) ointment applied once daily revealed a decrease in the PASI score, with some local adverse effects and a decrease in parathyroid hormone and 1,25(OH) 2 D 3 , but maintained serum calcium homeostasis [243]. Tacalcitol also increased the effectiveness of NB-UVB treatment [271]. Thus, tacalcitol is effective and safe during long-term treatment.
Maxacalcitol, another vitamin D derivative, is very effective in inducing differentiation and inhibiting keratinocyte proliferation without inducing hypercalcemia [272]. These effects were stronger than those of either calcipotriol or tacalcitol in in vitro models. A placebo-controlled, double-blind study showed that maxacalcitol ointment (6, 12.5, 25, and 50 mg/g) was very effective in improving the presentation of psoriasis, with the 25 mg/g ointment being more efficient than calcipotriol [273], especially in combined treatment that included maxacalcitol and NB-UVB [274]. However, the use of maxacalcitol is related to a higher risk of developing hypercalcemia, and psoriasis treatment with calcipotriol is safer in comparison with maxacalcitol [275].
The topical application of vitamin D and its derivatives constitutes an important element of psoriasis management and can also be used in combination with other modalities. Topical treatment offers a safer therapy option, since the active compounds can directly target the lesional areas without excessive systemic entry.
Conclusions and Future Directions
Vitamin D and its derivatives have produced impressive clinical responses in psoriatic patients. They represent effective and safe adjuvant treatments for psoriasis, even when high doses of vitamin D are administered. The phototherapy of psoriasis, especially UVB-based, changes the serum level of 25(OH)D. However, the correlation between 25(OH)D levels and psoriasis improvement requires future clinical trials, since contradictory data have been published in this area. Vitamin D derivatives can improve the efficacy of UVB phototherapy of psoriasis without inducing adverse side effects. Excellent candidates for anti-psoriatic treatment are the new non- or low-calcemic CYP11A1-derived hydroxy derivatives of vitamin D 3 or lumisterol compounds. These act as agonists on VDR, LXR, and AhR receptors and as inverse agonists on RORs. In conclusion, targeting local vitamin D signaling systems represents a promising future direction in the therapy or prevention of psoriasis.
Conflicts of Interest:
The authors declare no conflict of interest.
Adversity, emotion recognition, and empathic concern in high-risk youth
Little is known about how emotion recognition and empathy jointly operate in youth growing up in contexts defined by persistent adversity. We investigated whether adversity exposure in two groups of youth was associated with reduced empathy and whether deficits in emotion recognition mediated this association. Foster, rural poor, and comparison youth from Swaziland, Africa, identified emotional expressions and rated their empathic concern for characters depicted in images showing positive, ambiguous, and negative scenes. Rural and foster youth perceived greater anger and happiness in the main characters in ambiguous and negative images than did comparison youth. Rural children also perceived less sadness. Youth’s perceptions of sadness in the negative and ambiguous expressions mediated the relation between adversity and empathic concern, but only for the rural youth, who perceived less sadness, which then predicted less empathy. Findings provide new insight into processes that underlie empathic tendencies in adversity-exposed youth and highlight potential directions for interventions to increase empathy.
Introduction
In recent years, scientific research, policy, and even public attention have turned toward attempting to understand how some of the most fundamental social processes that make us human-compassion, empathy, and concern for others-operate in a world filled with vast poverty, desperation, and violence. These processes are core to our ability to connect with one another, form close relationships, and engage with others; and are believed to underlie a range of prosocial and altruistic tendencies [1,2]. Despite recognition of the critical role that empathy and related processes play in human lives, questions remain about precisely how empathy functions in contexts defined by extreme adversity and challenge, particularly in childhood, a time when emotional functioning generally, including possibly empathy, is undergoing rapid change.
In the current investigation, we examined empathic concern in high-risk children and adolescents growing up in a small, impoverished country in the southern part of Africa, Swaziland. Our primary questions were first whether exposure to chronic adversity was associated with reduced empathic concern, and second, whether the association between adversity and empathic concern varied as a function of the youth's ability to recognize others' emotions. Swaziland was an ideal data collection site for several reasons. The country, like others in the region, is highly impoverished, with a vast majority of the population living in conditions of extreme poverty. Swaziland also has one of the highest rates of HIV/AIDS in the world [3], which, when combined with commonly co-occurring diseases (e.g., tuberculosis), contributes to very high rates of illnesses and deaths in the population. For instance, the country's infant and child mortality rates are among the highest in the world, and the average lifespan, under 50, is among the lowest [4]. Thus, large numbers of children are growing up with ill or deceased parents, siblings, and other family members, experiences that are accompanied by uncertainty, inconsistent caregiving, and challenge [5]. Finally, the ethnicity and religion of the population are largely homogenous (97% Black; 97% Catholic or Zionist), and the country has not endured any major sociopolitical, ethnic, or religious conflict for several generations. Thus, Swazi children have been exposed to high levels of chronic adversity as reflected in poverty and family stress, but not unpredictable violence, which may affect emotional processes in ways that are different from chronic but somewhat predictable adversity [6].
Empathy generally refers to one's tendency to share or respond to others' emotions or feelings [7,8]. An implicit assumption in this definition, and an assumption that has yet to be adequately tested, is that empathic individuals easily and consistently recognize the emotions being displayed or felt by others [9][10][11]. On the one hand, perhaps testing this assumption is unnecessary: basic emotion recognition emerges very early in development, and even relatively young children can accurately label and respond to a range of emotional displays in others [12,13]. On the other hand, however, experiential and developmental factors play a role in emotion recognition tendencies [14], particularly in childhood. Insofar as variations exist in how well children recognize emotions being displayed by others, such variations may affect whether or not children seem empathic in turn.
In particular, in a largely separate literature, research has focused on how exposure to compromised home environments, such as those defined by neglect or abuse by parents, severe deprivation, or parental mental illness, affects children's interpretations and responses to others' displays of negative emotions, most especially anger and sadness. With regard to anger, findings have been fairly consistent in revealing heightened sensitivity to anger among adversity-exposed children. For instance, children who have been physically abused often recognize anger more quickly and accurately than children without a history of physical abuse [15,16]. At the same time, this sensitivity seems to extend to situations in which negative emotions are perhaps less clear, with physically abused children tending to "recognize" or see anger in emotionally ambiguous expressions and situations [17][18][19][20]. Research with children raised in institutionalized foster care settings has revealed similarly liberal tendencies toward perceiving anger [21], suggesting that chronic exposure to neglectful or inconsistent caregiving and violence may all contribute to anger bias tendencies in children.
Findings concerning adversity-exposed children's perceptions of other emotions, particularly sadness, are less consistent than findings concerning anger, but, when findings do emerge, they tend to reveal deficits or difficulties in emotion recognition tendencies among such children, as well [18,21,22]. In one investigation, for example, Wismer Fries and Pollak [21] compared children who had been raised in Eastern European institutions and children who had always lived with their biological parents. Although the institutionalized children had been subsequently adopted, they nonetheless were less accurate than the comparison children when attempting to identify happy, sad, and fearful expressions in photographs. In an earlier investigation, Pollak et al. [18] found similar results: compared to non-maltreated children,
neglected children had difficulty discriminating among emotional expressions, and physically abused children were poorer at recognizing sadness. Finally, even youth exposed to civil war, including former child soldiers, appear to show reduced accuracy in identifying sadness in facial expressions, and in one investigation, child soldiers tended to mislabel sadness in others as anger [23].
Theoretically, when caregivers are inconsistent or unavailable, children lack sufficient input to learn to recognize emotions broadly. They instead develop a heightened sensitivity to emotions that are most critical for their daily lives. Anger, for many of the children, represents such an emotion. The children need to be able to recognize anger in others, or even potential signs of anger, as their safety and wellbeing may depend on this ability. On the other hand, quick and efficient recognition of other emotions, including sadness, may be more difficult because they are not exposed to those emotions as often, and their adult caregivers are not adequately teaching them about those other emotions [24]. Whether similar difficulties emerge among children living in other highly compromised contexts is not clear. However, if parents are ill or have died and children are being raised without consistent adult input, they may not receive sufficient cues about emotions that would promote their recognition. This, in turn, may reduce their tendency to respond with empathy (see [25]).
Although such a possibility has yet to be tested directly, hints at its occurrence come from studies of empathy and prosocial behavior in high-risk youth that find young maltreated children are less likely than comparison children to help a peer in distress and more likely to react aggressively [26][27][28]. Also, children formerly exposed to war-related violence report less empathic responding and helping [29] than demographically similar children with no such exposure. A direct test of the links among adversity, empathic concern, and emotion recognition is needed.
We conducted such a test in the present study by assessing both high-risk and low-risk children's and adolescents' recognition of both clear and ambiguous emotional expressions and their feelings of concern for the individuals displaying those expressions. We predicted that youth exposed to chronic adversity would show reduced emotion recognition, with the exception of anger, relative to youth with no such exposure, and would report lower levels of empathic concern. We also expected that low emotion recognition would mediate the relationship between a history of adversity exposure and empathic concern.
Method
Participants
One hundred twenty-three Swazi children and adolescents ("youth"), grades 5-12, ages 11-22, M = 14.04, 61 girls, served as participants. A majority of the sample was not in contact with their parent, due to the youth's removal from home or parental death. The other youth were in school, often not in close proximity to their parents (e.g., some walked long distances to school, some lived with relatives or neighbors, and some lived with siblings). Thus, it was not feasible to obtain parental consent for the youth to participate. Instead, per our Institutional Review Board, formal approval to approach the youth was first granted by a professional responsible for the well-being of the youth in each area. This included the headmasters at the schools where youth were tested, the regional chief who oversaw education and well-being of individuals, including children, in his region, or the head social worker at the two foster care locations. In addition, on the days when data were collected, for ethical reasons, we also sought approval to approach youth from social workers or teachers who knew each youth personally. Once these individuals approved, we invited youth, who then provided written assent to participate. Two additional youth began the study but stopped part way through. Inclusion criteria were that the youth were in primary or secondary school and had no obvious cognitive disability. Social workers and teachers screened out youth with severe mental health problems.
Children were recruited from three types of environments. Two were characterized by high adversity. First, 47 "foster" youth were recruited from two out-of-home placement locations. These youth were included in light of extant literature showing biases in emotion recognition tendencies among children exposed to maltreatment or social deprivation [18,21]. One set of foster youth (n = 33) came from a small rural town that has been converted to a large, live-in orphan village. Several hundred foster children live in small two-room cottages with five same age and gender peers and one unrelated live-in adult female caregiver. Youth had been removed from home or elected to leave as a result of exposure to maltreatment, sexual assault, or lack of adults in the homestead and were invited to live in the village (the process by which children were selected to come to the village is not known). The town is supported by private funds, but the staff work closely with governmental agencies to identify youth in need of placement and screen for appropriateness. Siblings may move to the village together but are rarely placed in the same cottage. The other out-of-home placement location (n = 14) was comprised of two residential facilities (one for boys and one for girls) in the capital of Swaziland. The facilities contain up to 14 same sex youth with at least one live-in female caregiver. A social worker also lives on site, and the locations regularly have staff from international charitable organizations visiting. In both locations, although the youth had previously been exposed to high levels of adversity, such as maltreatment or parental death, the youth were now in residential facilities that had running water inside, and all youth reported having a mattress on which they could sleep.
Second, 34 youth were recruited from one of two impoverished rural villages. These "rural" youth were attending the local primary school (grades 1-7). School was not in session, but 7th graders were attending a class to prepare for their exit examination, and other youth were playing nearby at the request of the headmaster, who told them that we would be providing lunch. The youth in this group were in many ways similar to the youth in the foster group in that they were growing up in the same or similar regions. However, rural youth still lived in their home communities and their exposure to adversity was ongoing. For example, only 26% of these youth reported having running water in their homes, 23% reported not having a mattress or bed to sleep on, and 32% reported not getting enough food to eat on a regular basis. Given that their current state of poverty and adversity was likely much higher than that of the foster youth (even though prior exposure may have been similar), it was important to distinguish this group from the foster youth in the analyses.
Third, a sample of "comparison" youth (n = 42) was recruited from a well-funded private primary school in the capital. The comparison youth came from a variety of locations (some rural, some urban), but all were living with at least one biological parent. Moreover, their families were sufficiently well-off to pay the costs associated with private schooling and provide transportation for their youth to attend school. All but one of these youth had indoor running water, all had a mattress on which to sleep, and most (88%) reported getting enough food to eat. Thus, even though as a group, these youth may have been exposed to higher levels of challenge than youth in Western countries, the group was nonetheless considered middle class and included as an important lower-risk comparison group.
Data were collected over a two and a half-week period, with approximately three days spent per location, during which time we recruited and interviewed as many youth as possible. Far more youth wanted to participate than we were able to interview (e.g., upwards of 20-30 youth would be waiting to see if we had time to talk to them). We alternated selecting male and female youth to be interviewed, attempting to vary the ages while doing so. We provided snacks to all youth who were interested in the project, whether they were interviewed or not. The response rate of the youth who were invited was approximately 99%, and we completed 125 interviews. This sample size was sufficiently large to allow us to detect small to medium within-subject effects with power of .80. We also employed bootstrapping methods, a commonly applied strategy for enhancing power in mediation models, to help guard against potential violations of the assumption of multivariate normality in the analyses and generate a more accurate estimate of standard errors and confidence intervals for indirect effects [30]. Demographic details across the groups are presented in Table 1.
Materials and procedure
Procedures were approved by the University Institutional Review Board, including procedures specific to approaching and interviewing youth in international settings. Testing was done in English, one of two official languages in Swaziland. The other official language is Swazi, and local interpreters (unknown to the youth) were available to elaborate in Swazi on some questions, as needed. Measures were administered via paper (n = 58, 47%) or tablet by one of three researchers. Interviews were audiotaped. Measures relevant to the current research are described here.
Demographic information, home, and community. Demographic questions asked about the youth's age, year and month of birth, and grade in school. Adversity questions were included to confirm that the groups differed in levels of current adversity exposure, particularly when comparing the rural youth to the comparison youth. By virtue of the foster youth's removal from home due to maltreatment or parental death and the fact that these youth had no alternative living arrangements available, they were assumed to have a history of adversity. Questions asked about the number and ages of individuals in the home and their relationship to the youth, the length of time in the current home, number of rooms, whether running water was currently available inside of the home, how many times a month the youth ate meat, whether the youth had a blanket or bed, and how the youth got to school (items adapted from The World Bank Child Needs Assessment Toolkit; [31]). Finally, yes/no questions asked about the community: whether robberies, assaults, domestic violence, alcohol and drug use, teen pregnancy, and violence against women had occurred (items adapted from the World Bank Social Capital Assessment Tool-Community Questionnaire; [32]).
Emotion recognition and empathic concern. A measure of emotion recognition and empathic concern was developed for the present study based on procedures in former studies concerning emotion understanding and empathy (e.g., [33][34][35]). Youth were shown images of scenes containing between one and five individuals (race matched that of our participants). The first and last images showed positive scenes (e.g., a family smiling). The other images showed negative scenes (e.g., a sick child with an intravenous drip, an adult crying) or ambiguous scenes (e.g., an adult pushing a cart of personal items looking in the distance). We classified images as positive, negative, or ambiguous based on whether a discrete emotional expression [36] was clearly depicted. If so, the images were classified as positive (happy) or negative (sad, fear, anger). The ambiguous images did not show the main character displaying a single or discrete emotional expression or showed a character displaying an expression inconsistent with the context. Confirmation of the images' and questions' appropriateness and classifications came from several sources. An initial set of 26 images, all depicting individuals of African descent, was shown to community leaders, teachers, and social workers in Swaziland for their feedback. Images deemed potentially confusing in terms of the content were eliminated. Question language was reviewed with these individuals as well, and phrasing was modified according to their suggestions. In prior work on empathic concern with children, approximately 20 images have been shown [37]. However, in this work, only one or two questions were asked about each image. Because we asked six questions per image, one of which required a narrative response, we elected to include a smaller number of total images, n = 11. Within these, as well, we retained a higher number of ambiguous (n = 6) images relative to positive (n = 2) and negative (n = 3), given our particular interest in variability in perceptions of ambiguity.
We also evaluated the comparison group's ratings of the emotions depicted in the images as a second check on the images' classifications. Given that this group had experienced the lowest levels of adversity, their responses could be considered a type of baseline or normative perceptions. The comparison group routinely rated characters in the negative images as high on one or more of the basic negative emotions and low on the positive emotion, and likewise, the positive images as only high on positive emotion and low on all negative emotions. Finally, this group's mean ratings on the ambiguous images varied, but none was especially high or low. A third form of validation of the images came from ratings provided by an ethnically-diverse group of college students in the United States (N = 10, ages 20-26 years, 50% female). The students' mean ratings of the characters in the images converged with those of the comparison youth: characters in the positive and negative images were rated as almost exclusively positive or negative, whereas characters in the ambiguous images were rated in the middle across emotions. In combination, these three checks on the images, in addition to evidence suggesting universality in emotion recognition abilities [38], suggested that they were appropriately tapping the desired emotions and were understandable to Swazi youth in the study.
Each image was presented individually and was followed by six questions. The first, designed to ensure that youth attended to each image, asked youth to describe what was happening in the image. After youth answered, the face and neck of the main character in the image were framed so that they were clearly distinguished from other information in the image. Youth were asked to look at the identified character and answer four questions, each on a 3-point scale (not at all, a little, a lot), indicating how angry, happy, scared, and sad they thought the character in the image was (e.g., "How sad does this person feel?") [33]. For the final question per image, youth were shown a 20-point pictorial scale (taken from [35]) with a cartoon face showing a large smile (score of 0) on one side and a large frown (score of 20) on the other (a neutral expression at 10 was also shown). Youth were told, "Sometimes when we see others, we feel good for them, sometimes we feel bad for them, and sometimes we don't feel anything or we feel good and bad. Using this scale, point to the place that shows how you feel for the person in this box" (adapted from [35]). After the youth chose, the next image was presented. All questions, along with the pictorial rating scale, are provided in supplemental information.
Youth then completed other measures. At the end, they were thanked and children in the rural and foster groups were given snacks.
Coding
Several composite scores were calculated. First, in order to confirm whether groups differed reliably in adversity exposure, the number of adverse experiences to which youth had been exposed in the home (e.g., not having running water in the home, one or both parents deceased) and community (e.g., drug use, violence against women) was summed and divided by the number possible to create an adversity index (M = .30, SD = .17).
Second, participants' ratings on the 3-point scale of how happy, sad, angry, and afraid the character felt in each image were averaged within the three types of images: positive (n = 2), ambiguous (n = 6), and negative (n = 3). Finally, participants' ratings of their empathic concern, that is, how good or bad they felt for the main character in each image, were averaged within the same three types of images.
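To make the composite scoring concrete, here is a minimal sketch of how the three scores could be computed from tabular data. This is purely illustrative and not the authors' code; the file names and column names (pid, image_id, image_valence, emotion, rating, empathy, item, endorsed) are our own assumptions.

```python
import pandas as pd

# Hypothetical long-format ratings table: one row per participant x image x emotion.
# Columns: pid, image_id, image_valence ('positive'/'ambiguous'/'negative'),
# emotion ('happy'/'sad'/'angry'/'afraid'), rating (0-2), empathy (0-20, per image).
ratings = pd.read_csv("ratings_long.csv")

# Hypothetical adversity checklist: one row per participant x item, endorsed in {0, 1}.
adversity = pd.read_csv("adversity_items.csv")

# Adversity index: number of adverse home/community experiences endorsed,
# divided by the number of items possible.
n_possible = adversity["item"].nunique()
adversity_index = adversity.groupby("pid")["endorsed"].sum() / n_possible

# Emotion composites: mean rating of each emotion within each image valence,
# e.g., mean perceived sadness across the six ambiguous images.
emotion_composites = (
    ratings.groupby(["pid", "image_valence", "emotion"])["rating"]
    .mean()
    .unstack(["image_valence", "emotion"])
)

# Empathic concern: one 20-point rating per image, so keep one row per image
# before averaging within each image valence.
empathy_composites = (
    ratings.drop_duplicates(subset=["pid", "image_id"])
    .groupby(["pid", "image_valence"])["empathy"]
    .mean()
    .unstack("image_valence")
)
```

Averaging within valence rather than across all images preserves the positive/ambiguous/negative distinction that the subsequent analyses rely on.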
Analysis plan
Preliminary analyses included t-tests to assess whether mode of survey administration (tablet v. hard copy) influenced any of the participants' responses, and analyses of variance to determine whether any group (comparison, foster, rural) differences emerged in demographic characteristics or life experiences. Next, mixed model analyses of covariance were conducted to examine whether the groups of youth differed in their perceptions of the emotions depicted by the main characters in the images. Group was entered as a between-subjects factor and the four emotion ratings were entered as the within-subjects dependent factor. Age was covaried. Separate models were conducted for the positive, ambiguous, and negative images. Significant effects with Huynh-Feldt corrections are reported, along with follow-up simple effects and pairwise comparisons using the Bonferroni adjustment, when appropriate. One-way analyses of covariance (age covaried) were conducted to assess whether the groups differed in their empathic concern for the characters in the positive, ambiguous, and negative images. Finally, to test our main hypotheses, we conducted multiple mediation analyses using ordinary least squares path analysis with Hayes' PROCESS macro for SPSS [39]. Groups were dummy coded, with the rural and foster groups being separately compared to the comparison youth. The ambiguous and negative images were examined in separate models. Bias-corrected bootstrap confidence intervals for the indirect effects were obtained. In each model, 10,000 bootstrap resamples were collected to estimate confidence intervals. All significant effects are reported. The ns vary slightly across analyses because a few youth skipped some questions.
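For readers without access to the PROCESS macro, the sketch below illustrates the core of a bias-corrected bootstrap test of an indirect effect for a single mediator (group → perceived sadness → empathic concern), with age as a covariate. It is a simplified stand-in under our own assumptions, not the authors' script: the published models used PROCESS with multiple mediators and dummy-coded group contrasts, whereas this example simulates data and tests one mediator.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def indirect_effect(X, M, Y, cov):
    """a*b from two OLS fits: M ~ X + cov and Y ~ X + M + cov."""
    ones = np.ones_like(X)
    a = np.linalg.lstsq(np.column_stack([ones, X, cov]), M, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, X, M, cov]), Y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(X, M, Y, cov, n_boot=10_000, alpha=0.05):
    """Bias-corrected bootstrap confidence interval for the indirect effect."""
    n = len(X)
    ab_hat = indirect_effect(X, M, Y, cov)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        boots[i] = indirect_effect(X[idx], M[idx], Y[idx], cov[idx])
    z0 = norm.ppf(np.mean(boots < ab_hat))   # bias-correction constant
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return ab_hat, np.quantile(boots, [lo, hi])

# Simulated example: X = group dummy (rural vs. comparison),
# M = perceived sadness, Y = empathic concern, cov = age.
n = 120
X = rng.integers(0, 2, n).astype(float)
cov = rng.normal(14, 2, n)
M = 1.5 - 0.4 * X + rng.normal(0, 0.5, n)    # rural group perceives less sadness
Y = 10 + 2.0 * M + rng.normal(0, 2, n)       # less perceived sadness, less empathy
ab, ci = bc_bootstrap_ci(X, M, Y, cov)
print(f"indirect effect = {ab:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The bias correction shifts the percentile cut-points by z0, compensating for asymmetry in the bootstrap distribution of a*b; an indirect effect is deemed significant when the interval excludes zero.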
Preliminary analyses
We first compared youth's mean ratings of characters' emotions and empathic concern based on whether youth completed measures via hardcopy or tablet. For the ambiguous images, participants rated the characters as slightly more happy, but also more angry, when the images were presented on the tablet; and for the negative images, participants rated the characters as more angry, ts(113) ≥ 2.05, ps ≤ .045, ds ≥ .38. No differences emerged for participants' ratings of empathic concern. Although the reason for these differences is not clear, we nonetheless confirmed that all subsequent significant effects remained when measure format was taken into account. We return briefly to the issue of format in the Discussion.
When group comparisons were conducted on demographic and experiential features, no differences in gender emerged. The comparison youth were younger on average than the other groups but were also in a higher grade academically than the foster youth, Fs(2, 120) ≥ 3.34, ps ≤ .04, η²p ≥ .05 (see Table 1). Also, the comparison group reported fewer total negative life experiences, at home and in their community, than the foster and rural groups did, F(2, 119) = 29.46, p < .001, η²p = .33, as would be expected. The rural and foster youth did not differ in the number of reported adverse experiences.
Emotion recognition
Youth's ratings of the emotional displays depicted by the main characters in the images, separately for the foster, rural, and comparison samples, are presented in Figs 1-3. Mixed model ANCOVAs, conducted separately for the three types of images, revealed several significant effects. For positive images, the main effect of emotion was significant, F(2.18, 241.68) = 10.94, p < .001, η²p = .09. As seen in Fig 1, youth rated the main characters as substantially more happy than sad, angry, or fearful, with the latter three mean scores all falling near floor. No other significant effects emerged.
The ambiguous images were of particular interest, given the potential for high variability in youth's interpretations [40]. When their ratings of the characters' displays of the four emotions were entered into the ANCOVA, two interactions, emotion × age, F(2.60, 288.48) = 3.56, p = .02, η²p = .03, and emotion × group, F(5.20, 288.48) = 7.10, p < .001, η²p = .11, were significant. To analyze the emotion × age interaction, correlations were computed between participants' emotion ratings and age. As age increased, participants' ratings of the main character's level of happiness decreased, r(115) = -.20, p = .04, and level of anger increased, r(115) = .32, p = .001. To examine the emotion × group interaction, simple effects analyses were conducted by comparing the groups separately for each emotion. Groups differed in their ratings of the characters' happiness, sadness, and anger, Fs(2, 111) ≥ 3.18, ps ≤ .046, η²p ≥ .05 (Fig 2). In partial support of our hypotheses, both foster and rural youth perceived greater anger in the characters than did comparison youth, ps < .001, and the rural youth perceived less sadness than the comparison youth, p < .05. In addition, although all youth's ratings of the characters' happiness were low, the foster youth's ratings of the characters in the ambiguous images were somewhat higher than those of the comparison youth, p < .05.
When negative images were considered, the main effect of emotion, F(2.86, 317.50) = 4.50, p = .005, η²p = .04, and the emotion × group interaction, F(5.72, 317.50) = 5.76, p < .001, η²p = .09, were significant. Follow-up analyses revealed that all groups reported very low levels of happiness in the main characters, but the adversity groups' ratings were not quite as low as the comparison group's ratings, F(2, 111) = 3.74, p = .03, η²p = .06. Foster and rural groups also rated the main characters as more angry than did the comparison group, F(2, 111) = 7.13, p = .001, η²p = .11. Thus, to some extent, the adversity-exposed groups tended toward an anger attribution bias.
Second, perceptions of empathic concern, that is how good or bad youth felt for the characters in the positive, ambiguous, and negative images, were examined. Youth's mean ratings of empathic concern, separated by image valence, were entered into separate 3 (group) ANCOVAs, age covaried. Group differences emerged for the ambiguous images, F (2, 111) = 2.95, p = .047. Follow-up comparisons revealed that rural youth, M = 12.75, reported feeling less bad for the main characters than comparison youth, M = 14.46, p < .05. The foster youth's mean fell in between, M = 13.92, and did not significantly differ from the other group means.
Adversity, emotion recognition, and empathy
Next, we tested empirically whether differences in youth's emotion understanding, particularly differences across groups, contributed to group variations in empathic tendencies. The aforementioned analyses suggested that the groups differed primarily in perceptions of sadness and anger, and primarily with the ambiguous images, though to some extent also with the negative images. Because of these trends, only ratings of sadness and anger were included in subsequent analyses. The analyses consisted of multiple mediation analyses using ordinary least squares path analysis with Hayes' PROCESS macro for SPSS [39]. Groups were dummy coded, and the two high-adversity groups (rural and foster) were separately compared to the comparison youth. The ambiguous and negative images were examined in separate models. Bias-corrected bootstrap confidence intervals for the indirect effects were obtained; in each model, 10,000 bootstrap resamples were collected to estimate the confidence intervals.
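For readers who wish to prototype this style of analysis outside SPSS, the sketch below illustrates a plain percentile-bootstrap estimate of a single indirect effect (group → perceived sadness → empathic concern). It is a generic Python illustration, not Hayes' PROCESS macro; it uses simulated placeholder data rather than the study's variables, and bias correction is omitted for brevity.

# Generic sketch of a bootstrap test of one indirect effect
# (group -> perceived sadness -> empathic concern). This is a plain
# percentile bootstrap in Python, not Hayes' PROCESS macro; variable
# names and data are placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 120
group = rng.integers(0, 2, n)                          # 0 = comparison, 1 = rural
sadness = 1.2 - 0.5 * group + rng.normal(0, 0.4, n)    # path a (simulated)
empathy = 8.0 + 2.0 * sadness + rng.normal(0, 1.0, n)  # path b (simulated)

def indirect_effect(g, m, y):
    """a*b from two OLS fits: m ~ g (path a) and y ~ g + m (path b)."""
    a = np.polyfit(g, m, 1)[0]
    X = np.column_stack([np.ones_like(g, dtype=float), g, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                        # resample with replacement
    boot[i] = indirect_effect(group[idx], sadness[idx], empathy[idx])

lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo_ci:.2f}, {hi_ci:.2f}]")

An indirect effect is judged significant in this framework when the bootstrap confidence interval excludes zero.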
First, analyses concerned youth's perceptions of characters' feelings of sadness and anger in the ambiguous images and youth's ratings of empathic concern for those characters. Results confirmed hypotheses when comparing rural and comparison youth (Fig 4, Table 2). Relative to comparison youth, rural youth perceived less sadness in the ambiguous image characters, and in turn, youth who perceived less sadness reported feeling less bad for the characters. Stated another way, once the indirect path was taken into account, the direct path revealed that group differences in empathic concern were no longer significant. Rural youth also perceived more anger, but anger was not significantly related to how youth felt toward the character. Thus, perceived sadness was a significant mediator of the relation between adversity exposure (i.e., the rural group) and empathic concern with other potential predictors controlled. For foster compared to comparison youth, no evidence of mediation of perceived sadness or anger emerged.
Second, the analyses were repeated for the negative images. The results were highly similar to those for the ambiguous images, although only for the rural youth (Table 2). Rural youth again perceived less sadness in the negative image characters than did comparison youth (Fig 5), and youth who perceived less sadness in the characters reported feeling less bad for them. This pathway, again, accounted for the initial group differences in empathy. Rural youth also perceived greater anger than the comparison youth, but the association between perceived anger and empathic concern was non-significant. Models comparing foster and comparison youth were not significant.
In a final analysis, we re-evaluated our comparison group to determine whether their experiences of adversity mattered. That is, although youth in the comparison sample had experienced fewer negative life experiences and challenges than did the two other groups (e.g., see Table 1), a sizeable minority of the comparison youth had lost a parent. We tested whether differences in emotion recognition, and the link between recognition and empathy, were due to the specific experience of parental loss. We divided the comparison and rural samples into two groups: youth who had versus had not lost one or both parents. We excluded the foster youth because they had the added experience of removal from home that may have differentially affected their reaction to the death of a parent. No evidence of mediation emerged for the negative and ambiguous images based on whether youth had versus had not experienced parental loss.
Discussion
The overarching goal of the present study was to assess whether exposure to chronic adversity was associated with reduced empathic concern in youth, and test whether this association was mediated by variations in the youth's capacity to recognize emotions in others. We pursued this goal by evaluating emotion recognition and empathic concern in a unique sample of Swazi youth, many of whom had experienced significant adversity in the past, and for some, at present. The results provide novel insight into how core emotional processes, namely emotion recognition and empathic concern, operate and potentially influence one another in high-risk youth growing up in environments characterized by uncertainty, loss, and possible deprivation.
Overall, our work suggests that youth who have experienced chronic hardship, especially when that hardship is ongoing, show different patterns of emotion recognition than youth who have not, and that these patterns may alter the level of empathic concern children express toward others. Specifically, adversity-exposed youth perceived more anger in images showing negative expressions, but also in ambiguous images in which the main character's expression was not entirely clear. These data align with prior work showing heightened sensitivity to the perception of anger in youth growing up in harsh environments [18,21]. This anger bias may well be adaptive in the youth's compromised contexts where vigilance toward threat is imperative [41]. Over time, this vigilance may develop into a hostile attribution bias even when threat is no longer present [19,42], the latter of which is believed to place children at risk for increased aggression, delinquency, poor peer relationships, and anxiety and depression [43,44]. Biases in youth's ability to recognize anger in others may therefore have critical implications for a host of behavioral and emotional problems, one of which includes a reduction in empathic concern.
Youth exposed to adversity also tended to perceive less sadness in the images depicting negative and ambiguous expressions. This reduced ability is consistent with findings of research concerning emotion processing tendencies in youth exposed to other types of adversity. Former child soldiers, young war and terrorism survivors, and maltreated children all exhibit similar deficits in understanding sadness in others [23,45]. Moreover, in the present study, when empathic concern was considered, these differences in youth's perceptions of characters' sadness were key. Less recognition of sadness was associated with less reported empathic concern, particularly in youth who continue to live in chronically deprived settings. This suggests that chronically deprived youth may not be less empathic than more advantaged youth per se, but rather, may have difficulties recognizing emotions that provoke feelings of concern, which in turn reduces the need or opportunity to feel empathic. Such a possibility aligns with past theory and work that contends that perceiving distress in others, particularly sadness, is critical in motivating empathic responding, affiliation, and prosocial behavior [2,46,47].
Several other interesting trends in these data are also noteworthy. For one, there were slight differences among groups' perceptions of happiness in the images displaying negative and ambiguous expressions. Foster and rural youth, that is, both groups with a history of chronic adversity, perceived greater happiness in some of these images than comparison youth, although as already noted, all youths' ratings were fairly low. On the one hand, this finding is inconsistent with past research, which suggests that emotion biases in youth growing up in adverse contexts are specific to cues of anger or negative emotions, rather than happiness [48,49]. On the other hand, these trends hint that adversity-exposed youth may be more confused about emotions generally and hence see even conflicting types of emotions (happiness and anger) in others. Finally, it is possible that the high-adversity youth were more likely than comparison youth to perceive positivity in negative or ambiguous expressions due to contextual or experiential differences. Nonetheless, future work should examine this pattern in greater detail to better understand the extent to which life experiences influence youth's perception of happiness in others, as well as the implications of these perceptions for youth development.
Our data also suggested that youth's perceptions of anger in others might influence empathic concern for both adversity-exposed groups, but not the comparison group. Foster and rural youth perceived greater anger in ambiguous expressions relative to comparison youth, and this tendency was associated with reporting greater empathic concern for the characters in these images. It is unclear why perceiving greater anger, an often outwardly hostile emotion, would affect youth's empathic concern in a positive direction. Perhaps, for adversity-exposed youth, anger served as a motivator, just as anger often acts as an approach emotion, leading to action. In this specific paradigm, that action was expressed as concern for the main character. It is also possible that the images may have evoked feelings of outrage in some adversity-exposed youth. Such feelings have been associated with prosocial and moral behaviors, at least under certain conditions [50]. Overall, these findings highlight that a similar outcome, in this case empathic concern, might be motivated by different processes depending on children's prior exposure to adversity. Alternatively, perhaps perceiving high levels of any negative emotion leads to increases in empathic feelings, though which negative emotions are perceived seems to vary depending on adversity exposure. Future research should consider these possibilities more systematically, given that the relations between anger perception and empathic concern only emerged at trend levels in the present data.
It is worth commenting on two other trends that emerged. First, intriguing differences emerged between the two adversity groups in their perceptions of emotions and in the mediational role of emotion recognition. In addition, differences in perceptions emerged between children who completed the measure on a tablet or by looking at hard copy images. Regarding the adversity groups, the rural and foster groups differed in some specific emotion recognition patterns and in the associations with adversity. The rural youth were presently living in impoverished conditions with limited access to running water and, in some cases, adequate meals and shelter; moreover, many had lost parents and family members due to HIV/AIDS or other illness, and thus faced unstable or inconsistent caregiving. The foster youth, though originally from areas similar to the rural youth, had been selected for placement in the foster villages. Once there, the youth were cared for by live-in caregivers (who stayed with the youth for several years at a time) and social workers. The foster youth had access to schools, clothing, supplies, and reliable food and water. Although they had left their primary home and hence changed caregivers completely, the youth were now surrounded by a set of consistently available adults who provided emotional support and guidance. This level of all-encompassing intervention may well alter, perhaps in positive ways, youth's feelings of empathy even if the intervention does not fully change emotion recognition tendencies. Indeed, there is evidence that interventions focused on relationship-building have benefits for youth's emotional functioning and subsequent social relationships [51,52]. How these and other changes in context affect youth's empathy and concern for others is a crucial area for further inquiry.
Regarding the study methods, youth who saw images on the tablet rated the characters as slightly more angry and happy in the ambiguous images and slightly more angry in the negative images than youth who saw hard copy pictures. Children were able to change the size of the images with the tablet but not the hard copy, which could have affected their responses. However, it is not clear why changing the image size differentially affected only perceptions of anger and happiness and only of some images. Nonetheless, as technology becomes increasingly used in data collection around the globe, including with children and including in relation to their understanding of emotional displays in others, it will be important to consider how technology may influence responding.
Although the study's findings are exciting and novel, the conclusions are tentative without further exploration of several key issues that could not be addressed with the current methodology. For one, it will be important to assess the extent to which cognitive ability and other developmental processes shape children's emotion recognition tendencies and feelings of empathic concern. The comparison youth were, on average, younger than both adversity-exposed groups but at the same time in a higher grade academically than the foster youth. Even though studies examining emotion recognition in other groups of adversity-exposed youth (e.g., maltreated; [18,22]) have controlled for cognitive ability and still found differences when comparing those youth to community samples, cognitive ability could still indirectly influence youth performance, for instance, by affecting their willingness to answer some questions in a comprehensive or detailed manner. In addition, it was not possible to standardize our measures. However, we made efforts to confirm that the images captured the emotions intended by showing the images to other populations and by examining the responses of youth in the comparison group. Moreover, there is some level of universality in humans' ability to recognize emotional expressions in others, and these abilities are often strongest when viewing expressions presented by individuals from similar ethnic and racial backgrounds [38]. All of our images conformed to the latter and depicted African individuals displaying emotions. Nonetheless, future research should include a wider array of emotion recognition tasks, including those that have been standardized for specific races and ethnic groups. Future research should also assess whether similar findings emerge in different contexts that include youth exposed to other forms of adversity (e.g., chronic health problems, war exposure) to determine the conditions under which variations in emotion recognition, particularly in ambiguous situations, relate to or influence empathic concern. Such research will reveal the extent to which the links between emotion recognition and empathic concern are generalizable versus context-specific.
In closing, the current study offers novel insight into potential processes underlying empathy in high-risk youth and has implications for interventions aimed at increasing empathic responding in children and adolescents. What emotions youth perceive in others, which is directly related to their level of adversity, predicts the extent to which youth feel empathic concern for those others. More broadly, emotion recognition may serve as a key component of appraisal processes that, in turn, motivate empathic behaviors [53]. Insofar as it is possible to alter children's interpretations of ambiguous expressions [54], and insofar as these alterations may affect behaviors (e.g., aggression), it may be possible to begin to shape empathic concern as well, and perhaps even helping or prosocial behaviors. Overall, this line of work has tremendous potential to enhance understanding of the processes by which people support and engage with others versus disconnect, especially in situations of challenge, in which connection and cooperation may be vital to resilience and even survival.
EVALUATION OF THE BENEFITS OF REFLECTORIZED SIGN POSTS TO DRIVERS
Abstract
In the United States, the Federal Highway Administration (FHWA) gives departments of transportation (DOTs) the option of using retroreflective material on sign posts when a DOT determines that there is a need to draw attention to the sign, especially at night. The Ohio Department of Transportation (ODOT) required all Stop, Yield, Do Not Enter, and Wrong Way sign posts to be reflectorized with RED reflective sheeting material, and all Chevron, Stop Ahead, and One/Two Large Directional Arrow sign posts to be reflectorized with YELLOW (sign background color) reflective sheeting material, as part of the ODOT Comprehensive Highway Safety Plan and FHWA recommendations. In this study, a photometric analysis and a human factors analysis were conducted to estimate the benefits of reflectorized sign posts for driver visual perception, driver guidance, and driver comprehension. The study showed that reflectorized sign posts improve the detection, recognition, and comprehension of traffic signs for drivers, especially in nighttime driving conditions.
Introduction
In the United States, the Federal Highway Administration provides departments of transportation (DOTs) the option of using retroreflective material on sign posts when a DOT determines that there is a need to draw attention to the sign, especially at night. The Ohio Department of Transportation (ODOT) required all Stop, Yield, and Do Not Enter sign posts to be reflectorized with RED reflective sheeting material, and all Chevron, Stop Ahead, and One/Two Large Directional Arrow sign posts to be reflectorized with YELLOW (sign background color) reflective sheeting material, as part of the ODOT Comprehensive Highway Safety Plan [1]. The pictures in Figure 1 show the installed sign post reflectors in Athens County, OH; the pictures were taken on US 56 and US 50. It appears that this program, representing a part of ODOT's Comprehensive Highway Safety Plan for 2006, was implemented without the benefit of any prior research or existing positive practice results. One would expect a reduction in crashes at intersections controlled by Stop/Yield signs with reflectorized sign posts, in wrong-way driving crashes on 4-lane divided highways, and in run-off-the-road crashes in curves where chevron and large arrow signs with reflectorized sign posts are used.
Methods
In this study, the potential benefits of reflectorized sign posts for drivers were investigated from the standpoints of visibility, photometric performance, perception, driver comprehension, and human factors.
Literature Review
The reflectorized sign posts contribute to driver guidance at intersections and curves and to driver comprehension of wrong-way driving at night. No published studies were found in the literature specifically on reflectorized sign posts, their user acceptance, their photometric analysis, or their human factors analysis. No literature was found on the crash reduction potential of reflectorized sign posts either; however, the literature shows that a larger target size results in improved detection, and the crash reduction potential of similar reflectorization measures is documented. Improved detection of the signs should result in improved sign comprehension, which in turn should lead to fewer crashes during daytime and nighttime. The crash reduction effect of reflectorized sign posts is expected to be very small for daytime and, because of the increased visual signal at night, slightly larger but still small for nighttime. Stronger luminance values are expected at sign posts because of the headlamp beam pattern. Relevant studies addressing certain aspects of the benefits of reflectorized sign posts for drivers are reviewed below. Zwahlen and Schnell [2,3] evaluated new crossbuck designs with additional reflective sheeting on sign posts in Ohio. They found that the use of reflective sheeting on sign posts reduced the number of crashes observed at the test sites by 22.3% and resulted in higher compliance with the signs. The higher conspicuity and warning power provided by the reflective sheeting on crossbuck sign posts may have caused these improvements. In the Ohio program, reflective sheeting materials were added to the posts of the Stop, Stop Ahead, Yield, Chevron, One Large Arrow, Wrong Way, and Do Not Enter signs. The aim of reflectorizing these sign posts is to increase their contribution to driver guidance and comprehension. The additional reflective area would benefit drivers especially at stop-controlled intersections, at curves, and in wrong-way driving situations. The addition of reflective sheeting on chevron sign posts may improve curve delineation and guide drivers through curves by providing visual cues, especially in nighttime driving conditions [4,5]. Zwahlen et al. [6] used a computer model to evaluate the reflective performance of flexible post delineators as a function of height, reflector dimensions, photometric performance, lateral offset, and spacing, and performed a small-scale field evaluation of retroreflective patches with an expert panel. The results revealed that a 45.72 × 2.54 cm retroreflective vertical strip patch on flexible post delineators performed better than the 15.24 × 7.62 cm and 30.48 × 3.81 cm patch designs, providing excellent shape, guidance, and distance estimation cues to drivers. Another study [7] investigated the effect of target aspect ratio on human threshold contrast and found that small target aspect ratios (shorter dimension to longer dimension) should be used to maximize rectangular target visibility. Reflective material applied to traffic sign posts forms rectangular targets with small aspect ratios, so similar benefits and higher visibility are expected from its use. Stalder and Lauer [8] investigated the effects of the amount and distribution of reflectorized materials on the visibility of railroad boxcars and found that the reflective materials increase the speed and accuracy of perception of changing distances between vehicles. Another study [9] compared different methods of placing reflective materials on boxcars and found that a mass application of reflective material performed better than a distributed application. Based on these two studies, it can be suggested that reflectorized sign posts will provide better perception of traffic signs and, in particular, increase the speed and accuracy of curve perception at night. Reflectorized sign posts can be regarded as a mass application of reflective material and are expected to provide better guidance.
Photometric Analysis
Target Visibility Predictor (TARVIP) software, developed by the Operator Performance Laboratory of the University of Iowa [10], is used in the photometric analysis of the reflectorized sign posts. TARVIP is a deterministic model for nighttime reflective object visibility evaluation, based on the dynamics of light, retroreflection, and atmospheric conditions under nighttime driving conditions. The inverse square law (the illuminance from a source is inversely proportional to the square of its distance) is adopted in the TARVIP calculations. Road geometry, sign data, driver data, vehicle data, and headlamp data are the inputs of the TARVIP software, and the input menu features a set of graphical user interface windows to define a scenario. The first input item is the road option. Straight road geometry, defined as a two-lane rural roadway, is selected for all scenarios in the photometric analysis. The vehicle is assigned to the right (outer) lane of the road; the lane widths are 3.66 m and the vehicle is centered in the right lane. The road analysis for each scenario is performed in 25 m segments up to 300 m from the target. The road configuration used in TARVIP is given in Figure 2; the grade of the road segment is assumed to be zero. The sign information is entered in the next step of the program. The dimensions of the signs were measured in the field on US 56 and US 50, and sign dimension information from the ODOT Sign Design Manual [1] was also used. The sheeting material for all signs is ASTM Type III (3M High Intensity) sheeting. Another input in TARVIP is the headlamp and vehicle data. The vehicle and driver dimensions are adapted from a study by Zwahlen and Schnell [11]; average passenger car dimensions and three different driver dimensions are used in the scenarios. The driver dimensions are adapted from 1988 US Army personnel data: the 5th percentile female (small female), the 95th percentile male (large male), and the 50th percentile adult (average human), in order to represent different driver types. In addition to the geometric inputs, different headlamp data are entered into the program to estimate their effects. Various types of headlamps are available in TARVIP for analysis; in this study, three different headlamps are evaluated.
The headlamp patterns from [12] are evaluated. Schoettle et al. analyzed the headlamp patterns of the 20 best-selling passenger cars in 2000 and identified the 25th, 50th, and 75th percentile US low beam headlamp patterns, which are used in TARVIP. The scenarios are run for each sign, and each sign is analyzed separately for the sign sheeting material and the sign post sheeting material. The sign sheeting material is analyzed with respect to the legend sheeting material and the background sheeting material; for analysis, it is assumed that the materials are condensed at the center of the sign. The sign post sheeting material for each sign is analyzed in three sections: the post sheeting is divided into three portions vertically and each portion is analyzed separately. TARVIP can generate target angles and photometry measurements, from which the observed luminance values are obtained.
The observed luminance values for the driver side headlamp, the passenger side headlamp, and the total luminance are generated and compared for the sign sheeting and the sign post sheeting. A total of 252 scenarios (3 driver types × 3 headlamp types × 7 road/sign configurations × 4 target points per sign) are run using TARVIP. The results of the scenarios for the 50th percentile adult population and the 50th percentile headlamp configuration, with respect to the sign/road configurations, are given in this study. The total observed luminance values for the Stop sign on the right side of a straight road are given in Figure 3. Overall, the photometric analysis of reflectorized sign posts with TARVIP showed that the sheeting material on the sign posts provides higher luminance than the sign itself; the sheeting on the post receives more illumination from the headlamps because it is closer to the headlamp axis. The total reflective area of a traffic sign increases with the additional sheeting on its post. In Table 1, the increases in the reflective area of the signs are given in percentages: the increase in sign reflective area is 30% for the Wrong Way sign, 24% for the Stop sign, 65% for the Chevron sign, and 46% for the One Large Arrow sign. It can be concluded that the new application of reflectorized sign posts would benefit drivers by providing higher luminance levels through the increased reflective area. The increase in luminous intensity due to the increased area is calculated using the TARVIP luminance output: the sign areas are converted into square meters and then multiplied by the corresponding luminance values, and the resulting values for the sign area and the sign post area are compared. Table 2 shows the percent increase in the luminous intensity observed on the signs with the addition of reflective sign posts: at a distance of 225 m, the luminous intensity increases by 23% for the Wrong Way sign, 26% for the Stop sign, 286% for the Chevron sign, and 84% for the One Large Arrow sign.
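To make the area-weighted comparison described above concrete, the following sketch reproduces the style of calculation under stated assumptions: the luminance values stand in for TARVIP outputs, areas are in square meters, and luminous intensity is approximated as area times luminance. All numbers are illustrative placeholders, not actual TARVIP results.

# Illustrative sketch of the area-weighted luminous intensity comparison
# described above. The luminance and area values below are placeholders,
# not actual TARVIP outputs.

def luminous_intensity(area_m2: float, luminance_cd_m2: float) -> float:
    """Approximate luminous intensity (cd) as area (m^2) x luminance (cd/m^2)."""
    return area_m2 * luminance_cd_m2

# Hypothetical example for one sign at one viewing distance.
sign_area = 0.58        # m^2, e.g., a Stop sign face (assumed)
post_area = 0.14        # m^2, reflective strip on the post (assumed)
sign_luminance = 12.0   # cd/m^2, placeholder TARVIP-style output
post_luminance = 30.0   # cd/m^2, posts sit closer to the headlamp axis

i_sign = luminous_intensity(sign_area, sign_luminance)
i_total = i_sign + luminous_intensity(post_area, post_luminance)

increase_pct = 100.0 * (i_total - i_sign) / i_sign
print(f"Luminous intensity increase with reflectorized post: {increase_pct:.0f}%")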
Human Factors Analysis
The driving task depends primarily on information gathered through visual stimuli; therefore, the relevant human limitations are mainly visual in nature. These visual limitations increase with age, especially at night: the detection capability for a target such as a sign decreases with age [13]. For night driving, luminance is one of the most important factors affecting visual detection. The performance of the human eye is greatly reduced at low illumination levels, and the contrast sensitivity and color discrimination of the eye are also lower at low illumination levels [14]. Human vision declines with age; the amount of light needed by drivers doubles roughly every 13 years after age 20 [15]. Based on this examination of human limitations and the physical characteristics involved, reflectorized sign posts should slightly increase the detection and recognition of signs, slightly decrease the reaction time to initiate a correct action, slightly increase the recognition and comprehension of signs, and ultimately slightly decrease crashes. For curve delineation, reflectorized chevron sign posts also create a perceptual fence effect that helps in the correct perception of the curve radius and the selection of the curve speed. The fence effect provides improved curve delineation, which improves a driver's ability to estimate the radius of the curve.
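As a rough illustration of the age-related light requirement cited above, the sketch below evaluates the doubling rule; the rule itself comes from the text [15], while the continuous exponential form used here is an assumption made for illustration.

# Rough illustration of the rule of thumb cited above: the amount of
# light a driver needs roughly doubles every 13 years after age 20.
# The continuous exponential form used here is an assumption.

def relative_light_requirement(age: float) -> float:
    """Light needed at a given age, relative to a 20-year-old driver."""
    if age <= 20:
        return 1.0
    return 2.0 ** ((age - 20) / 13.0)

for age in (20, 33, 46, 59, 72):
    print(f"Age {age}: {relative_light_requirement(age):.1f}x the light of a 20-year-old")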
For all signs, the reflectorized sign posts also create a "perceptual grounding effect": the grounding effect anchors the sign more effectively into the driving environment and improves a driver's ability to estimate the distance from the driver to the sign.
Conclusions
A number of positive qualitative benefits of reflectorized sign posts for drivers have been identified, especially for nighttime driving: increased size, area, and visual signal; more reflective area; more luminance and luminous intensity; more color; a perceptual fence effect; a perceptual grounding effect; and decreased reaction times due to the increased visual signal and luminous intensity. However, none of these benefits is of a reliable quantitative nature that would allow a sound cost/benefit analysis. Relevant crash data at intersections, curves, and highways have to be analyzed in detail for run-off-the-road crashes, failure-to-yield crashes, and wrong-way driving crashes. Comparing crash rates for the three years before the implementation of reflectorized sign posts with crash rates for the three years after implementation may provide more reliable data for a cost/benefit analysis of reflectorized sign posts.
Figure 1: YIELD, STOP AHEAD, and STOP Signs with Sign Post Reflectors.
Figure 2: Straight Road Configuration for all Signs on the Right for TARVIP (not to scale).
Figure 3: Total Observed Luminance for the 50th Percentile Adult Population and 50th Percentile Headlamp Configuration for the Stop Sign on the Right Side of the Straight Road.
Table 1: Comparison of Traffic Sign Reflective Area of the Sign and the Sign Post.
Table 2: Comparison of the Luminous Intensity of the Signs and the Sign Posts.
Perspective: The dusty plasma experiments as a learning tool for physics graduate students
Plasma is an ionized gas that responds collectively to any external (or internal) perturbation. Introducing micron-sized solid dust grains into plasma makes it more interesting. The solid grains acquire large negative charges on their surfaces and exhibit collective behavior similar to that of the ambient plasma medium. Some remarkable features of the charged dust grain medium (dusty plasma) allow us to use it as a model system to understand some complex phenomena at a microscopic level. In this perspective paper, the author highlights the role of dusty plasma experiments as a learning tool in undergraduate and postgraduate physics programs. Students can gain great opportunities to understand some basic physical phenomena, as well as to learn many advanced data analysis tools and techniques, by performing dusty plasma experiments. How a single dusty plasma experimental device in a physics laboratory can help undergraduate and postgraduate students in the learning process is discussed.
I. INTRODUCTION
When a gas is subjected to a strong electric field, the gas atoms get ionized and the gas transforms into an ionized gas phase. This ionized gas consists of equal numbers of positive (ion) and negative (electron) charges if the gas is completely ionized. Above a threshold density of charged species (electrons and ions), the charged particles interact via the long-range Coulomb interaction and are capable of exhibiting a collective response to an external field, similar to other phases of matter. Therefore, the ionized gas medium, named plasma, is considered the fourth state of matter [1,2]. In laboratory experiments, the gas is often only partially ionized, so a large number of neutral atoms are present along with the electrons and ions. What happens if sub-micron- to micron-sized solid particles are introduced into the plasma? As these solid particles come into contact with the plasma, they acquire negative charges on their surfaces because they collect a higher electron current than ion current. The role of dust grains in plasma depends on the concentration or density of the charged dust. In the case of very low dust density, the well-separated charged dust particles only modify the characteristics of the ambient plasma, and the medium is called plasma with impurities (or dirty plasma). In the second case, where the dust density is high, the charged dust particles experience the long-range Coulomb interaction and exhibit a collective response to the force field. In this case, the plasma is named a dusty plasma [3].
In laboratory plasma, massive dust particles (M_d ∼ 10⁻¹⁵ to 10⁻¹¹ kg) acquire large negative charges of the order of 10³-10⁵ electron charges [4,5]. The dust grain medium therefore has some remarkable features that distinguish it from a conventional two-component (electron-ion) plasma. Firstly, the large amount of charge on the dust grain surface increases the average potential energy of the dust grains relative to their average kinetic energy; the Coulomb interaction among neighboring charged dust particles determines the phase (solid, liquid, or gas) of the dust grain medium [3,6-9]. Secondly, the extremely small charge-to-mass ratio (Q_d/M_d) of the dust grains leads to new plasma eigenmodes at very low frequency (1-100 Hz) [10-12]. The dust dynamics at such low frequencies can be visualized even with the naked eye, which allows us to study the dynamics of the dusty plasma medium at the microscopic level [12,13]. Owing to these novel features, dusty plasma can be considered a model system for understanding various phenomena occurring in the physical universe [3].
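The ratio of potential to kinetic energy mentioned above is commonly quantified by the Coulomb coupling parameter Γ = Q²/(4πε₀ a k_B T_d), where a is the interparticle spacing and T_d the dust kinetic temperature. A minimal sketch with illustrative grain parameters (not taken from any particular device, and neglecting Debye screening) is given below.

# Minimal sketch: Coulomb coupling parameter of a dust grain medium.
# Gamma >> 1 implies a strongly coupled (liquid- or solid-like) phase.
# All grain parameters below are illustrative, not from a specific device.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C
K_B = 1.381e-23       # Boltzmann constant, J/K

def coupling_parameter(z_d: float, a: float, t_d: float) -> float:
    """Gamma = Q^2 / (4*pi*eps0 * a * k_B * T_d), screening neglected."""
    q = z_d * E_CHARGE
    return q**2 / (4 * math.pi * EPS0 * a * K_B * t_d)

# Typical laboratory values (assumed): 10^4 electron charges,
# 300 micrometer interparticle spacing, room-temperature grains.
gamma = coupling_parameter(z_d=1e4, a=300e-6, t_d=300)
print(f"Gamma = {gamma:.0f}")  # >> 1, i.e., strongly coupled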
It is well known that academic institutions maintain a physics laboratory for their undergraduate and postgraduate physics programs. The purpose of establishing a physics lab alongside theoretical courses is to demonstrate, through experiments, the role of physics in understanding physical phenomena. Students learn to design, develop, and perform experiments to understand the laws of physics and the naturally occurring phenomena around us. The practical work engages students in developing skills, understanding the process of scientific investigation, and deepening their understanding of the concepts of physics. In summary, practicals in the physical sciences at the graduate level have great significance in creating scientific temper and in the learning process of students.
The physics laboratories for undergraduate and postgraduate courses are equipped with different experimental devices, most of which are designed to demonstrate a particular physical phenomenon or law of physics. In recent years, plasma experiments have been included in postgraduate physics laboratories at higher educational institutes around the globe, with the aim of creating a basic understanding of the fourth state of matter (plasma) [14-17]. As discussed above, dusty plasma is created against the background of a plasma medium; therefore, establishing a dusty plasma experimental setup in the physics lab can provide more experimental opportunities for undergraduate and postgraduate students. The low-cost dusty plasma device can also be used as an ordinary DC or RF plasma device without the dust particles.
A single dusty plasma device can be used to demonstrate various basic experiments, for example, the study of waves and oscillations [12], diffraction of waves [18], crystal formation and phase transitions [7], and vortex formation [19], with modifications in the electrode configuration, discharge conditions, and discharge configuration (DC or RF discharge). In addition, external electric and magnetic fields can play a significant role in experiments on vortex flow and rigid-body rotational motion [20-22]. The same device can also be used to teach image analysis with MATLAB, Python, and ImageJ software [23], for characterizing complex flow patterns and for the spectral analysis of waves. In the absence of dust particles, the device can be used to perform basic plasma experiments [14-17]. The plasma experiments help students learn about gas discharges and the use of simple diagnostics to characterize plasma. Since plasma is a highly nonlinear system, students can acquire various kinds of nonlinear signals with which to learn data analysis tools [24].
The paper is organized as follows: Section II gives a detailed description of the dusty plasma experimental setup. Experiments on dust acoustic waves and the associated opportunities for students are discussed in Sec. III. In Sec. IV, the diffraction of dust acoustic waves by a cylindrical object and its application to understanding the diffraction of sound waves are discussed. Experiments on the dusty plasma crystal and the phases of the dust grain medium are explored in Sec. V. Vortex formation and the rotational motion of the dust grain medium in the absence and presence of an external magnetic field are discussed in Sec. VI. The opportunity to use dusty plasma images for learning various image processing and image analysis tools is explored in Sec. VII. In the absence of dust particles, plasma is a highly nonlinear system; a discussion of time-series data and data analysis techniques is given in Sec. VIII. A brief summary of the proposed dusty plasma experiments for graduate-level physics programs, along with concluding remarks, is provided in Section IX.
II. DUSTY PLASMA EXPERIMENTAL SETUP
A borosilicate glass or stainless steel (SS-304) tube of appropriate inner diameter (5 cm to 15 cm), thickness (8 to 14 mm), and length (10 cm to 50 cm), with a sufficient number of radial ports, can be used to build a dusty plasma experimental device (or plasma device) [25,26]. The axial and radial ports of the tube are used for pumping, gas feeding, holding electrodes, pressure measurement gauges, and dusty plasma (or plasma) diagnostics. A geometrical (3D) view of a typical dusty plasma setup made of a glass tube is shown in Fig. 1(a). For plasma production between two well-separated planar electrodes, either a radio-frequency (RF) power source (P ∼ 100 W) or a direct current power supply (V_dc > 600 V, I_dc > 0.5 A) is mainly used. A rotary pump attached to the glass or SS tube creates a base vacuum of < 10⁻³ mbar. The relative pressure inside the vacuum chamber is measured using a Pirani gauge. A needle valve or mass flow controller (MFC) attached to the vacuum chamber is used to feed the required gas into the chamber for the experiments [25,27,28]. Apart from the glass or SS-304 vacuum (experimental) chamber, an aluminium chamber can also be used to make a dusty plasma experimental setup [29-32]. A view of the aluminium dusty plasma device is shown in Fig. 1(b); this setup is currently used to study magnetized dusty plasma [30] at Justus-Liebig University Giessen, Germany. Such tabletop dusty plasma devices are particularly suitable for studying magnetized dusty plasma. It should be noted that there are other types of dusty plasma devices [32-37] that can also be used to explore physics at the undergraduate or postgraduate level.
Once the plasma is produced in the vacuum chamber, dust grains are injected into the plasma volume using a dust dispenser. The dust particles can be submicron- to micron-sized mono-disperse glass (or plastic) particles, or poly-disperse particles, with a mass density of ∼ 1-2 g/cm³. The dust grains in contact with the plasma acquire negative charges of the order of 10³ to 10⁵ e⁻ and are confined near the sheath region by the balance of upward forces (electric and thermophoretic forces) and downward forces (gravitational and ion drag forces) [38-40]. Here e⁻ is the charge of an electron. The charged dust particles are illuminated by a combination of a low-power red or green laser (30 to 100 mW) and a plano-convex lens. The light scattered from the charged dust particles is captured using a high frame rate (> 20 fps), high resolution (> 2 MP) CCD or CMOS camera. A typical schematic diagram of a dusty plasma experiment in the DC discharge configuration [41] is shown in Fig. 2(a), and a schematic diagram of dusty plasma experiments in the radio-frequency [31] (or DC) discharge configuration is depicted in Fig. 2(b). The captured video frames from the CCD or CMOS camera are stored on a PC for further analysis. Computer-based software such as VideoMach and ImageJ, along with MATLAB and Python image processing tools, is used to analyze the stored image data (frames) and further understand the dynamics of the dusty plasma medium [23,42]. Different diagnostics, such as the Langmuir single probe [43], double probe [44], and emissive probe [45], can sometimes be used to characterize the background plasma of the dust grain medium.
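As a back-of-the-envelope check of the force balance described above, the sketch below estimates the sheath electric field needed to levitate a grain against gravity alone, neglecting the thermophoretic and ion drag forces; the grain radius, density, and charge are illustrative assumptions.

# Back-of-the-envelope sketch: sheath electric field needed to levitate
# a charged dust grain against gravity (Q_d * E = M_d * g), neglecting
# thermophoretic and ion drag forces. Parameters are illustrative.
import math

E_CHARGE = 1.602e-19  # C
G = 9.81              # m/s^2

radius = 2.5e-6       # grain radius, m (assumed)
density = 1500.0      # grain mass density, kg/m^3 (~1.5 g/cm^3)
z_d = 1e4             # grain charge in electron charges (assumed)

mass = density * (4.0 / 3.0) * math.pi * radius**3
e_field = mass * G / (z_d * E_CHARGE)  # V/m

print(f"Grain mass: {mass:.2e} kg")
print(f"Levitating sheath field: {e_field:.0f} V/m")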
III. DUST ACOUSTIC WAVES
Waves in gases, liquids, and solids are an important topic for physics students. Students are generally familiar with sound waves in different media, which result from the elastic displacement of the particles of the medium about their equilibrium positions. Since the motion of the particles is back and forth along the direction of propagation, sound waves are longitudinal waves. A typical sound wave in a gas medium is displayed in Fig. 3(a). Similar to these well-known media, the dusty plasma medium also supports very low frequency (< 100 Hz) acoustic modes, the dust acoustic waves (DAWs) [10-12,41,46,47]. This novel feature of dusty plasma allows undergraduate and postgraduate students to understand wave motion in any medium (gas, liquid, or solid) by performing experiments on dust acoustic waves in a dusty plasma device. The excitation of dust acoustic waves is possible in the direct current (DC) discharge as well as in the radio-frequency (RF) discharge configuration [10,41,46-50]. A schematic diagram of the experimental dusty plasma setup (DC and RF discharge) used to study the acoustic waves is shown in Fig. 2; the DC configuration of Fig. 2(a) uses a double anode arrangement, with (1) and (3) anodes, (2) cathode, (4) DC power supply, (5) CCD camera, (6) dust particles, and (7) cylindrical lens. The dust acoustic waves excited in the DC discharge configuration (Fig. 2(b)) are displayed in Fig. 3(b). The possible causes of the excitation of DAWs, including the ion-streaming instability and the dust-acoustic instability, are discussed in detail in references [46,51]. In Fig. 3(b), the bright vertical bands (or red bands) are the compressed wavefronts of the DAW, and the dark regions (low dust density regions) correspond to the rarefaction regions. Since the intensity of a bright band (wavefront) is proportional to the dust density, intensity plots along the propagation direction at different times help to obtain the wave parameters such as the wavelength (λ), phase velocity (v_d), and frequency (f_d). The intensity profile corresponding to Fig. 3(b) is shown in Fig. 3(c), and the intensity profiles of DAWs at different times are plotted in Fig. 3(d), representing the propagation of dust acoustic waves from the anode to the cathode. To obtain the intensity profile plots, one can use the freely available ImageJ software, MATLAB- or Python-based image processing and analysis tools, or other image analysis tools. It is also possible to obtain space-time plots (see Fig. 3(e)) from the recorded frames of a propagating DAW with the help of MATLAB or Python; such space-time plots are used to characterize the dust acoustic waves [52].
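A minimal sketch of how the wave parameters could be extracted from such data is given below: it builds a synthetic space-time intensity array as a stand-in for camera frames and recovers the frequency, wavelength, and phase velocity with a 2D FFT. The wave numbers used are illustrative, not measured values.

# Minimal sketch: recover DAW parameters from a space-time intensity
# array I(t, x), here synthesized instead of extracted from camera frames.
import numpy as np

# Synthetic wave: 25 Hz, 6 mm wavelength, as a stand-in for real data.
f_true, lam_true = 25.0, 6e-3
x = np.linspace(0, 0.05, 256)          # 5 cm field of view, m
t = np.linspace(0, 1.0, 512)           # 1 s of frames
T, X = np.meshgrid(t, x, indexing="ij")
intensity = 1.0 + 0.3 * np.sin(2 * np.pi * (X / lam_true - f_true * T))

# 2D FFT: the dominant peak gives the temporal and spatial frequencies.
spec = np.abs(np.fft.rfft2(intensity))
spec[0, 0] = 0.0                       # drop the DC component
it, ix = np.unravel_index(np.argmax(spec), spec.shape)
freqs_t = np.fft.fftfreq(len(t), d=t[1] - t[0])
freqs_x = np.fft.rfftfreq(len(x), d=x[1] - x[0])

f_meas, k_meas = abs(freqs_t[it]), freqs_x[ix]
print(f"frequency  ~ {f_meas:.1f} Hz")
print(f"wavelength ~ {1/k_meas*1e3:.1f} mm")
print(f"phase velocity ~ {f_meas/k_meas*1e2:.1f} cm/s")

For real data, the synthetic array would simply be replaced by the stacked intensity profiles extracted from the camera frames.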
Apart from linear DAWs, various kinds of nonlinear waves can also be excited in dusty plasma using this device in the DC discharge configuration [27,34]. A single video frame of the dust grain medium during the propagation of a nonlinear dust acoustic wave is presented in Fig. 4(a), and the intensity profile of the propagating nonlinear wave is plotted in Fig. 4(b). Both images are taken from the original paper of Merlino et al. [34]. The nonlinear character of the excited dust acoustic wave can be verified by fitting a harmonic function (sine or cosine) of the fundamental frequency (f_d) and its harmonics to the intensity profile of the propagating DAW, as shown in Fig. 4(b). Space-time plots can also be used to obtain the frequency spectrum of the acoustic waves propagating through the dust grain medium [53], and the nonlinear character of the dust acoustic waves can be identified from this spectrum. It is also possible to modulate the self-excited dust acoustic waves and to excite linear and nonlinear waves in the dust grain medium by external forcing. In summary, it would be interesting for undergraduate and postgraduate students to explore linear and nonlinear waves in the dust grain medium, which relate to wave motion in other media.
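One possible form of the harmonic fit mentioned above is sketched below: a fundamental plus its second harmonic is fitted to a synthetic intensity profile with SciPy's curve_fit. The amplitudes, wavenumber, and noise level are illustrative choices, not experimental values.

# Sketch of the harmonic fit described above: fit a fundamental plus its
# second harmonic to a (here synthetic) intensity profile of a DAW.
import numpy as np
from scipy.optimize import curve_fit

def two_harmonics(x, a1, p1, a2, p2, k, c):
    """Fundamental (wavenumber k) plus its second harmonic (2k)."""
    return c + a1 * np.sin(k * x + p1) + a2 * np.sin(2 * k * x + p2)

# Synthetic nonlinear profile: strong fundamental, weaker harmonic.
x = np.linspace(0, 0.04, 400)                     # position, m
rng = np.random.default_rng(0)
y = (1.0 + 0.40 * np.sin(300 * x + 0.3)
         + 0.12 * np.sin(600 * x + 1.1)
         + 0.02 * rng.standard_normal(x.size))

popt, _ = curve_fit(two_harmonics, x, y,
                    p0=[0.3, 0.0, 0.1, 0.0, 280.0, 1.0])
a1, _, a2, _, k, _ = popt
print(f"harmonic/fundamental amplitude ratio: {abs(a2/a1):.2f}")
print(f"fitted wavelength: {2*np.pi/k*1e3:.1f} mm")

A clearly nonzero harmonic amplitude relative to the fundamental is one signature of the nonlinear steepening seen in the experimental profiles.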
IV. DIFFRACTION OF DUST ACOUSTIC WAVES
Diffraction is an intrinsic property of waves (mechanical and electromagnetic) in any medium or in vacuum. It describes the change in the direction of a wave as it travels around an obstacle (barrier) or through a gap in a barrier. In daily life, we hear the sound of people speaking in adjacent rooms through door openings, which is a result of the diffraction of sound waves. The diffraction, or bending, of sound waves around an obstacle can be demonstrated by studying water waves in a ripple tank (see references in [18]). The amount of diffraction (spreading or bending) of a wave depends on the size of the object and the wavelength of the wave.
Instead of a water medium, a dusty plasma medium can be used as a model experimental system to demonstrate the diffraction of sound waves around an obstacle. The diffraction of dust acoustic waves from a cylindrical or spherical object can help in understanding the diffraction of sound waves. It is possible to change the wavelength of the DAW by altering the discharge parameters [41], which helps in understanding the amount of diffraction around cylindrical objects of different sizes. A dusty plasma experimental setup with a larger cathode (> 10 cm) and a smaller anode (< 2 cm) in the DC discharge configuration is suitable for exploring the diffraction of DAWs [18]. Kim et al. [18] reported experimental results on the diffraction of dust acoustic waves by a cylindrical object in a weakly magnetized DC discharge plasma. In their experiments, the cathode was the grounded chamber and the anode was a 2.5 cm diameter metal disk. In Fig. 5(a), the DAWs appear as bright vertical fringes that propagate from the anode to the cathode. The diffraction of the DAW around a cylindrical rod (obstacle) was studied using video images at different times [18]; the bending of the DAW around the cylinder is shown in Fig. 5(b). A ripple tank (shallow water waves in a tank) is considered a model for studying sound waves in two dimensions (2D), so a more realistic simulation of the diffraction of sound waves in 2D can be obtained using a ripple tank. An image from a ripple tank simulation of waves (originating from a point source) interacting with a circular object is depicted in Fig. 5(c). Since dust acoustic waves and sound waves obey a similar set of equations, the resulting diffraction patterns can be compared in the two cases [18]. Thus, a dusty plasma medium can be considered an excellent model system for learning about the diffraction of sound waves in the graduate-level physics laboratory.
V. CRYSTAL FORMATION AND PHASE TRANSITION
In solids, atoms or molecules are closely packed. Solids can be either crystalline or amorphous. In crystalline solids, atoms or molecules are arranged in an ordered (long-range order) pattern, whereas in amorphous solids the atoms or molecules have only short-range order. Crystalline solids can further be categorized into single-crystal and polycrystalline solids. Polycrystalline solids consist of multiple single-crystal regions (grains), and the boundary separating these regions is called the grain boundary. A unit cell (the smallest repeating unit) is considered the building block of a crystalline solid. The unit cell is described by the lattice vectors (the lengths of each side of the unit cell) and the angles between them; the lengths of the lattice vectors and the angles between them distinguish the types of unit cells of a crystal structure. X-ray spectroscopy is a diagnostic tool for exploring the crystalline properties of solid materials. With this spectroscopic technique, one can obtain the diffraction patterns of X-rays scattered from different crystal planes, but it is difficult to visualize the different crystal bases of three-dimensional (3D) crystals.
For understanding crystalline structure and phase transitions in solids in undergraduate and postgraduate physics programs, a realistic model system is required. In dusty plasma experiments, it is possible to create a Coulomb crystal (2D or 3D) of micron-sized charged dust particles; we term such a structure a dusty plasma crystal [7,8,31,32,54]. One can see the dusty plasma crystal even with the naked eye and can thereby visualize the periodic arrangement of atoms in crystalline solids, as shown in Fig. 6(a). The white dots in both figures represent the dust grains. The arrangement of dust particles in a 2D plane along the vertical direction is depicted in Fig. 6(b), which represents two types of dust crystal structure (bcc and hcp). The crystalline nature of the dust grain medium is confirmed through characteristic parameters such as the Voronoi diagram [33] and the radial pair correlation function g(r) [55,56].
The dusty plasma experimental device, in either the DC or the RF discharge configuration, can be used to obtain a dusty plasma crystal under appropriate discharge conditions [9,33,56]. Using freely available image analysis software or tools, students can analyze the properties of the dusty plasma crystal and correlate the results to understand single-crystalline, polycrystalline, and amorphous solids. By changing discharge parameters such as the gas pressure and input power, the melting of the crystal, i.e., the phase transition, can be understood by obtaining the radial pair correlation function. The profile of g(r) against the radial distance r indicates the phase of the dust grain medium [55,56]. The plots of g(r) in Fig. 6(c) for different discharge conditions represent the solid, liquid, and gaseous phases of the dust grain medium.
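A minimal sketch of how g(r) could be computed from 2D particle coordinates (for example, grain positions tracked from a camera frame) is shown below; it histograms pair distances and normalizes by the ideal-gas expectation, neglecting edge corrections, so it is only a rough estimate. For a random configuration g(r) ≈ 1, while a crystal would show pronounced peaks at the lattice spacings.

# Minimal sketch: radial pair correlation function g(r) from 2D particle
# positions (e.g., dust grain coordinates tracked from a camera frame).
# Edge corrections are neglected, so this is only a rough estimate.
import numpy as np

def pair_correlation_2d(xy, box, dr):
    """Histogram pair distances and normalize by the ideal-gas value."""
    n = len(xy)
    density = n / (box * box)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]               # unique pairs only
    edges = np.arange(dr, box / 2, dr)
    counts, edges = np.histogram(d, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_area = 2 * np.pi * r * dr               # 2D annulus area
    ideal = density * shell_area * n / 2          # expected pair counts
    return r, counts / ideal

# Demo on a random (gas-like) configuration: g(r) should hover near 1.
rng = np.random.default_rng(1)
positions = rng.uniform(0, 10.0, size=(400, 2))   # mm, illustrative
r, g = pair_correlation_2d(positions, box=10.0, dr=0.2)
print(g[:5])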
VI. VORTEX AND RIGID ROTATIONAL MOTION
In daily life, we see naturally occurring or induced vortices in fluids, such as whirlpools in rivers and tornadoes. Vortices that are induced by an external force on a fluid element are termed forced vortices; in fluids, such vortices can be induced by rotating a vessel containing the fluid or by paddling in the fluid. Studying naturally occurring vortices requires a full understanding of vortex behavior at the microscopic level. To demonstrate vortex motion to graduate students in the physics laboratory, dusty plasma can be considered a model system. In a dust grain medium, vortex motion can be induced around a charged probe (metal wire) in an unmagnetized RF discharge [57] (see Fig. 7(a)). An external magnetic field can also be used to excite vortex motion in the dust grain medium [58]. Five consecutive still images were used to reconstruct the image in Fig. 7(b), where one can observe the vortex motion in the dusty plasma at a given B-field. The freely available particle image velocimetry (PIV) code [42] and the ImageJ software [23] are very useful for obtaining the velocity profile of the vortex flow and the angular velocity distribution of the particles in a given vortex [58,59]. A PIV image corresponding to the vortex flow in the presence of a magnetic field [58] is depicted in Fig. 7(c). The dusty plasma device, in either the unmagnetized or the magnetized RF discharge configuration, can be used to demonstrate vortex motion in fluids; students can visualize the vortex motion at the particle level in the dusty plasma medium and correlate these results to understand vortices in fluids. Apart from vortex motion, the dust grain medium can also be used to demonstrate the rigid rotational motion of a medium. In the presence of a weak magnetic field (B < 0.05 T), dust particles in either the DC or the RF discharge configuration exhibit rigid rotational motion [20,60]. This provides a platform for learning about the rotational motion of many-body systems by estimating the angular frequency of the rotating particles, and students may come to appreciate the differences between translational and rotational motion. A single video frame of an annular dusty plasma is shown in Fig. 8(a). The rigid-body rotational motion of the dust grains in the annular region in the presence of the B-field is estimated from a PIV image (see Fig. 8(b)); a constant angular frequency along the radial direction indicates rigid rotational motion of the medium [60]. Thus, dusty plasma experiments at the graduate level may introduce students to the concepts of the rotational motion of many-body systems and vortex flow in fluids.
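As a sketch of how rigid rotation could be tested from PIV output, the code below computes the local angular velocity ω = (x v_y − y v_x)/r² for each velocity vector about the rotation center; a radially flat ω profile indicates rigid-body rotation. The velocity vectors are synthesized here rather than read from actual PIV software.

# Sketch: test for rigid-body rotation from PIV-style velocity vectors.
# omega = (x*vy - y*vx)/r^2 about the rotation center; a radially flat
# omega(r) indicates rigid rotation. PIV data here are synthesized.
import numpy as np

rng = np.random.default_rng(2)
n = 500
r = rng.uniform(1.0, 10.0, n)                 # mm from rotation center
theta = rng.uniform(0, 2 * np.pi, n)
x, y = r * np.cos(theta), r * np.sin(theta)

omega_true = 0.8                              # rad/s, rigid rotation
vx = -omega_true * y + 0.05 * rng.standard_normal(n)
vy = omega_true * x + 0.05 * rng.standard_normal(n)

omega = (x * vy - y * vx) / (x**2 + y**2)     # local angular velocity

# Bin omega by radius: a near-constant value means rigid-body rotation.
bins = np.linspace(1.0, 10.0, 7)
idx = np.digitize(r, bins)
for b in range(1, len(bins)):
    sel = idx == b
    print(f"r in [{bins[b-1]:.1f}, {bins[b]:.1f}) mm: "
          f"omega = {omega[sel].mean():.2f} rad/s")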
VII. IMAGE PROCESSING AND ANALYSIS TECHNIQUES
In dusty plasma experiments, the scattered light from the solid charged particles is captured using a fast-frame CCD or CMOS camera, and the image data (frames) are transferred to a PC. These stored images are later analyzed to extract the dynamics of the dust grain medium for given discharge conditions. This provides a platform for undergraduate and postgraduate physics students to learn the basics of image processing and image analysis using software and computational tools such as ImageJ, MATLAB, and Python. Using a dusty plasma device, students can collect many kinds of dusty plasma data in the form of images, such as dust grain oscillations, dust-acoustic waves, rotational motion, linear flow of dust particles, and dusty plasma crystals. They can use these images to learn various tools and techniques for analyzing images and to calculate dusty plasma parameters that help in understanding the dynamics of the medium. A typical raw image of the dust grain medium and a superimposition of six images (composite image) are shown in Fig.9. The superimposition of the six images is done with the help of ImageJ software. In other examples, as shown in Fig.3(d) and Fig.7(c), MATLAB image processing tools are used to obtain the intensity profile of DAWs at different times and the velocity profile of dust particles, respectively. Such hands-on experience will help students gain a better understanding of image processing and image analysis techniques using ImageJ, MATLAB, Python, and other software.
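For instance, a composite like the one in Fig.9 can be approximated with a few lines of Python using NumPy and Pillow. This is a hedged sketch, not the exact ImageJ procedure used for the figure, and the file names are hypothetical.

```python
import numpy as np
from PIL import Image

def composite_frames(frame_paths):
    """Maximum-intensity projection of several video frames.

    Bright dust grains on a dark background leave a dotted trail,
    which is how a composite image visualises particle motion.
    """
    stack = np.stack([np.asarray(Image.open(p).convert('L'))
                      for p in frame_paths])
    return Image.fromarray(stack.max(axis=0))

# Hypothetical usage with six consecutive frames:
# composite_frames([f"frame_{i:03d}.png" for i in range(6)]).save("composite.png")
```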
VIII. TIME-SERIES DATA ANALYSIS
Students at the graduate level are familiar with nonlinear (complex) systems in nature. Nonlinear systems exhibit sensitive dependence on the initial conditions of the system; a double pendulum is a classic example. A DC glow discharge plasma is a highly complex nonlinear system, and it can be used as a model for understanding other natural or artificial highly nonlinear systems. Temporally irregular data (time-series data), in the form of the discharge current or the floating potential of the plasma (with or without particles), are recorded at a given discharge condition 24,62,64 . The pattern of the time-series data depends on discharge parameters such as gas pressure, discharge voltage, and external magnetic field. In Fig.10(a), time-series data (floating potential) of a DC glow discharge plasma are displayed. The FFT of the same time series is shown in Fig.10(b), from which one can obtain the frequency of the fundamental mode as well as higher-order harmonics 24 . The harmonics appear at integer multiples of the fundamental oscillation frequency, which suggests the nonlinear behavior of the plasma medium. The phase-space diagrams of the different temporal fluctuations are depicted in Fig.10(c). As the discharge parameters are changed, a transition from a stable state to a chaotic state occurs; a transition from chaotic to periodic is also possible. Apart from this nonlinear dynamical behavior, the plasma medium can also show periodic oscillations and limit-cycle oscillations, which can be checked with different data analysis tools. Thus, a DC glow discharge plasma provides a good platform for undergraduate and postgraduate students to understand the dynamical behaviour of highly nonlinear systems, predict irregular fluctuations, and learn time-series data analysis techniques.
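As a starting point for students, the following minimal Python sketch computes the FFT spectrum and a two-dimensional delay-embedded phase portrait of a floating-potential time series. The delay value is an illustrative choice; in practice it is often chosen from the autocorrelation or mutual information of the signal.

```python
import numpy as np

def analyse_fluctuations(signal, fs, delay=10):
    """FFT spectrum and 2D delay embedding of a floating-potential signal.

    signal : 1D array sampled at fs (Hz)
    delay  : embedding delay in samples (illustrative choice)
    In the phase portrait (x(t), x(t + delay)), a closed loop suggests a
    limit cycle, while a filled, non-repeating region suggests chaos.
    """
    signal = np.asarray(signal, float)
    signal = signal - signal.mean()  # remove the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return freqs, power, signal[:-delay], signal[delay:]
```

Peaks at integer multiples of a fundamental frequency in the returned spectrum reproduce the harmonic structure seen in Fig.10(b).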
IX. SUMMARY
In this perspective paper, the role of dusty plasma experiments in the education of undergraduate and postgraduate physics students at universities and higher institutions is discussed. I have proposed some basic dusty plasma experiments (waves and oscillations, diffraction of waves, crystallization, phase transition, vortex motion, rigid rotational motion, and data analysis techniques) to demonstrate basic physics experiments, create a scientific temper among graduate students, and provide a platform for learning experimental tools and techniques. I have also shown how a single dusty plasma device, in either direct current or radio-frequency discharge configuration, can be used to perform various basic physics experiments and to learn various image and data analysis tools and techniques. A detailed discussion of each dusty plasma experiment and the associated data analysis tools is presented in this paper. However, this paper only highlights opportunities for physics graduate students to perform some basic experiments in the physics lab using a dusty plasma device operated in either DC or RF discharge configuration. The main focus of this article is to highlight the advantages of dusty plasma experiments for physics students by citing previous experimental studies. A detailed procedure (or tutorial) for each individual experiment could be a future scope.
X. ACKNOWLEDGEMENT
The author is grateful to Prof. Merlino, Prof. Lin I, Prof. Hyde, and Dr. Shaw for allowing him to reuse the published figures with permission of the publishers. The author is also thankful to Dr. R. Rajawat, Dr. V. Kella, and Dr. A. Gupta for careful reading of this paper.
IN SILICO ANALYSIS OF SECONDARY METABOLITES OF Clerodendrum inerme AS POTENTIAL ANTIDIABETIC COMPOUNDS
Clerodendrum inerme can potentially alleviate diabetes, but little is known about its molecular mechanisms. This study aimed to investigate the chemical compounds of C. inerme and their molecular mechanisms in treating diabetes. The KNApSAcK database was used to find secondary metabolites of C. inerme. Compounds were screened by estimating their Absorption, Distribution, Metabolism, and Excretion (ADME) properties with SwissADME. The SwissTargetPrediction tool linked the compounds that passed screening to probable target proteins, and StringDB was used to show the network between target proteins and associated diseases. After identifying the target proteins, the chemical compounds were docked to them using PyRx with AutoDock 4.2.6. The StringDB analysis found four chemical compounds ((Z)-3-hexenyl beta-D-glucopyranoside, rhodioloside, sammangaoside B, and clerodermic acid) that could be connected to four target proteins (DPP4, IL1B, PPARA, and PPARG). According to the docking results, clerodermic acid binds well to DPP4, IL1B, PPARA, and PPARG; sammangaoside B to PPARG; and rhodioloside to DPP4. C. inerme contains clerodermic acid, sammangaoside B, and rhodioloside, compounds that can potentially treat diabetes mellitus.
Background
Diabetes mellitus is a collective term for heterogeneous metabolic disorders whose main finding is chronic hyperglycemia. The cause is a disorder of insulin secretion, a disorder of insulin action, or, commonly, both (Petersmann et al., 2019). Diabetes mellitus is classified into three types. Type 1 diabetes involves destruction of insulin-producing cells, resulting in the body's inability to produce insulin. Insulin resistance, a condition in which cells fail to respond to insulin properly, is the starting point of type 2 diabetes (Tanase et al., 2020). Gestational diabetes, experienced by pregnant women, is due to a decrease in the body's ability to produce enough insulin to control blood sugar levels during pregnancy (Setiabudy, Nafriadi and Instiaty, 2016). The goal of diabetes mellitus treatment is to achieve normal levels of insulin in the plasma (Ferguson and Finck, 2021). According to IDF data, in 2021, 537 million adults aged 20-79 years were diagnosed with diabetes. The number is expected to rise to 643 million by 2030 and 783 million by 2045. In Southeast Asia, 90 million adults have diabetes, which causes 747,000 deaths (Webber, 2021). New therapies must be developed to meet the needs of health care, including promotion, prevention, treatment, and rehabilitation, so that the prevalence of diabetes mellitus can be decreased. Dozens or even hundreds of new drugs are released to the market every year after going through a time-consuming and expensive development process (Hairunnisa, 2019). Clerodendrum inerme, commonly known as gambir laut in Indonesia, belongs to the Verbenaceae family. It is commonly found in Australia, Asia, Malaysia, and the Pacific Islands. C. inerme is traditionally used to treat malaria. It is also used as a thermal suppressant, uterine stimulant, pest control agent, and antiseptic (Kar et al., 2019). Although it is not yet clear whether this plant has antidiabetic effects, it is a good candidate for studying secondary metabolite compounds with antidiabetic potential. This research aims to find new drug candidates with antidiabetic effects through network pharmacology and molecular docking. The method serves as an early stage of research before further in vivo studies.
Tools
This study was conducted using several online databases and software.
Secondary Metabolite of C. inerme Identification and Network Pharmacology Analysis
The secondary metabolites of C. inerme were obtained from KNApSAcK, and PubChem was used to search for each compound's canonical SMILES (Kim, 2021). Compounds were screened with the SwissADME website, which predicts bioavailability using the BOILED-Egg method. Furthermore, SwissTargetPrediction was used to predict proteins that can interact with the secondary metabolite compounds (Lena et al., 2023). Protein targets associated with diabetes were retrieved from GeneCards (Stelzer et al., 2016), followed by finding the intersection between the predicted protein targets of the plant compounds and the disease-related proteins using the Venny tool (Oliver, 2015). The intersection results were then entered into StringDB for network pharmacology analysis (Szklarczyk et al., 2021).
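As an illustration of the SMILES retrieval step, the following hedged Python sketch queries PubChem's PUG REST service by compound name. The endpoint format is assumed from PubChem's public documentation and is not part of the original workflow, which used the PubChem website directly.

```python
import requests

def canonical_smiles(name):
    """Fetch a compound's canonical SMILES from PubChem's PUG REST service.

    Returns None if the name is not found (as happened for six of the
    24 metabolites in this study).
    """
    url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
           f"{name}/property/CanonicalSMILES/TXT")
    resp = requests.get(url, timeout=30)
    return resp.text.strip() if resp.ok else None

# e.g. canonical_smiles("rhodioloside")
```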
Molecular Docking Analysis
Molecular docking used 3D structure files obtained from PubChem, prepared in Avogadro with the MMFF94s force field. Separation of the 3D files of the diabetes mellitus target proteins from their ligands was done using BIOVIA Discovery Studio. Molecular docking was performed using PyRx 0.8 with AutoDock 4. The docking results were then visualized using the Proteins.Plus web server.
Identification, Bioavailability Prediction, and Network Pharmacology Analysis of Secondary Metabolites of C. inerme
The secondary metabolites of C. inerme were obtained using the KNApSAcK database, which listed 24 metabolite compounds in C. inerme. Canonical SMILES codes were searched using PubChem, but six of these compounds had no SMILES code available and were therefore excluded from the study (Table 1). The next stage was selecting compounds based on ADME. The criterion used is Lipinski's Rule of Five (RoF), which states that compounds with a molecular weight lower than 500 Da, fewer than 5 hydrogen bond donors, fewer than 10 hydrogen bond acceptors, and a log P lower than 5 have high bioavailability (Nogara et al., 2015). From Lipinski's RoF, four compounds were obtained: Mol 6, Mol 10, Mol 12, and Mol 21. The compounds were also examined using the Brain Or IntestinaL EstimateD permeation (BOILED-Egg) method, which is proposed as an accurate prediction model based on the lipophilicity and polarity of small molecules (Daina and Zoete, 2016).
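The RoF screen described above can be reproduced with RDKit. The following is a minimal sketch using standard RDKit descriptor functions, with thresholds taken directly from the text; it is an illustration, not the SwissADME implementation used in the study.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_lipinski(smiles):
    """Lipinski Rule of Five screen with the thresholds stated in the text:
    MW < 500 Da, H-bond donors < 5, H-bond acceptors < 10, logP < 5."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # unparsable SMILES string
        return False
    return (Descriptors.MolWt(mol) < 500
            and Lipinski.NumHDonors(mol) < 5
            and Lipinski.NumHAcceptors(mol) < 10
            and Crippen.MolLogP(mol) < 5)
```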
From these results, one compound (Mol 21) can penetrate the blood-brain barrier, as indicated by the yellow region. The other three compounds (Mol 6, Mol 10, and Mol 12) fall in the white region, meaning they cannot penetrate the blood-brain barrier but can be absorbed in the digestive tract.
Figure 1. Bioavailability prediction with BOILED-Egg Method
The four compounds that passed the ADME screen were checked for protein binding using SwissTargetPrediction, which reports a probability of binding for each target protein. After identifying the target proteins of the plant compounds, these were compared with the diabetes mellitus-related proteins obtained from GeneCards. A total of 178 proteins were predicted to interact with the secondary metabolite compounds, and 15,390 diabetes mellitus-related proteins were found. The intersection computed with Venny showed that only 159 of the proteins from SwissTargetPrediction were related to diabetes mellitus.
Molecular Docking Analysis
Molecular docking was carried out between the four secondary metabolites that passed Lipinski's RoF and BOILED-Egg screening and the proteins DPP4 (PDB ID: 6EOR), IL1B (PDB ID: 6Y8I), PPARA (PDB ID: 3KDT), and PPARG (PDB ID: 8HUP). More complete docking results can be seen in Table 4. Only a few of the docking results show high potential as antidiabetics. A compound can be considered good if its binding energy and inhibition constant are low: the lower these values, the better the compound binds to the protein (Muchlisin et al., 2022). The docking results with good potential are Mol 10 to DPP4 (-8.16 kcal/mol, 1.04 µM), Mol 12 to PPARG (-6.98 kcal/mol, 7.6 µM), and Mol 21 to DPP4 (-7.79 kcal/mol, 1.94 µM), IL1B (-7.63 kcal/mol, 2.56 µM), PPARA (-9.07 kcal/mol, 225.97 nM), and PPARG (-8.09 kcal/mol, 1.17 µM), which have high potential as antidiabetic drug candidates. The bonds formed between the compounds and the target proteins are hydrogen bonds and hydrophobic interactions, which can be seen in Table 5. In the docking of sammangaoside B to PPARA, no hydrogen bonds or hydrophobic interactions occurred.
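The relationship between the reported binding energies and inhibition constants follows Ki = exp(ΔG/RT), the conversion used by AutoDock. The short sketch below reproduces this conversion and recovers the reported value for Mol 10 against DPP4, assuming T = 298.15 K.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.15    # assumed room temperature, K

def inhibition_constant(delta_g):
    """Inhibition constant Ki (in M) from a binding free energy in
    kcal/mol, via Ki = exp(dG / RT) as used by AutoDock."""
    return math.exp(delta_g / (R * T))

# Sanity check against the reported Mol 10 / DPP4 result:
# inhibition_constant(-8.16) -> ~1.05e-6 M, i.e. ~1.04 uM
```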
Conclusion
The compounds rhodioloside, sammangaoside B, and clerodermic acid have potential as new antidiabetic drugs by binding to the proteins DPP4, IL1B, PPARA, and PPARG.
Figure 3. Network pharmacology prediction results using StringDB. The red color shows the target proteins associated with diabetes mellitus.
Figure 4. Network pharmacology of the predicted target proteins for diabetes mellitus.
Table 1. List of metabolite compounds contained in C. inerme, taken from the KNApSAcK website.
No evidence of rhythmic visuospatial attention at cued locations in a spatial cuing paradigm, regardless of their behavioural relevance
Abstract Recent evidence suggests that visuospatial attentional performance is not stable over time but fluctuates in a rhythmic fashion. These attentional rhythms allow for sampling of different visuospatial locations in each cycle of this rhythm. However, it is still unclear in which paradigmatic circumstances rhythmic attention becomes evident. First, it is unclear at what spatial locations rhythmic attention occurs. Second, it is unclear how the behavioural relevance of each spatial location determines the rhythmic sampling patterns. Here, we aim to elucidate these two issues. Firstly, we aim to find evidence of rhythmic attention at the predicted (i.e. cued) location under moderately informative predictor value, replicating earlier studies. Secondly, we hypothesise that rhythmic attentional sampling behaviour will be affected by the behavioural relevance of the sampled location, ranging from non‐informative to fully informative. To these aims, we used a modified Egly‐Driver task with three conditions: a fully informative cue, a moderately informative cue (replication condition), and a non‐informative cue. We did not find evidence of rhythmic sampling at cued locations, failing to replicate earlier studies. Nor did we find differences in rhythmic sampling under different predictive values of the cue. The current data does not allow for robust conclusions regarding the non‐cued locations due to the absence of a priori hypotheses. Post‐hoc explorative data analyses, however, clearly indicate that attention samples non‐cued locations in a theta‐rhythmic manner, specifically when the cued location bears higher behavioural relevance than the non‐cued locations.
| INTRODUCTION
In our everyday life, we continually encounter more visual stimuli than our brain is able to process. Visuospatial attention allows us to spatially select behaviourally relevant stimuli from cluttered visual environments. Classical interpretations of visuospatial attention, such as the spotlight analogy (Cave & Bichot, 1999;Eriksen & St James, 1986;Posner et al., 1980), have largely ignored the temporal dynamics of spatial attention. Visuospatial attention is often studied using spatial cueing paradigms (Egly et al., 1994;Fan et al., 2002;Posner, 1980). These paradigms assume that visuospatial attention uninterruptedly monitors a certain behaviourally relevant location.
The conception that sustained attention is a continuous process, however, has been challenged in recent years (Fiebelkorn et al., 2013; Landau & Fries, 2012; VanRullen et al., 2007). Visual attentional performance has been shown to fluctuate, whereby the likelihood of detecting a given stimulus increases and decreases over time (Fiebelkorn et al., 2013). This waxing-and-waning of attentional performance follows a rhythmic pattern, predominantly at the theta (7-8 Hz) frequency (VanRullen, 2018). These rhythmic attention cycles seem to operationalise a rhythmic sampling of the visual environment. Hence, a sequential sampling of visual stimuli or spatial locations is possible during periods of heightened perceptual sensitivity. This phenomenon has been found in different types of attention (e.g. covert and overt attention; Fiebelkorn et al., 2013; Helfrich et al., 2018; Landau & Fries, 2012; Re et al., 2019; Song et al., 2014) using a variety of study designs. Furthermore, evidence from neurophysiological studies indicates a tight link between rhythmic sampling during attentional fluctuations and neuronal oscillations, both in humans and in non-human primates (Spyropoulos et al., 2018). Electroencephalography (EEG) studies show that detection performance for attended stimuli is predicted by both alpha and theta oscillations in the visual cortex (Busch & VanRullen, 2010; Mathewson et al., 2009). These findings have been further corroborated by studies employing MEG (Landau et al., 2015) and TMS (Dugué et al., 2016), providing further evidence of the role of theta-rhythmic modulations of visual attention. Studies on non-human primates have found that rhythmic neural activity in higher-order cortical areas such as the FEF and in visual areas (Spyropoulos et al., 2018) relates to rhythmic attentional behaviour. Converging neurophysiological evidence of rhythmic attentional sampling has recently been reviewed in full elsewhere.
The variable landscape of study designs does not easily allow comparing results across studies. Indeed, studies show strong discrepancies in their results. Some studies have found rhythmic sampling effects at locations that were probed by a spatial cue (Helfrich et al., 2018; Landau & Fries, 2012), some studies found effects only at non-cued locations (Senoussi et al., 2019), and some found effects at both cued and non-cued locations (Fiebelkorn et al., 2013). Also, the reported frequency of rhythmic sampling is not consistent across studies, with frequencies ranging from 2 to 12 Hz (Landau & Fries, 2012; Song et al., 2014). Thus, although evidence of rhythmic sampling can be found across multiple behavioural paradigms, there is no broad consensus. There is therefore a strong need for consistent study designs and results, and the replication of key findings within this field is necessary, especially considering that many results obtained in this field of research might go unnoticed due to the so-called 'file drawer' effect, where negative results remain unpublished (VanRullen, 2013).
Two seminal studies have demonstrated the existence of rhythmic fluctuations in attentional performance using a modified version of the Egly-Driver task (Fiebelkorn et al., 2013; Helfrich et al., 2018). This task involves detecting a target that can appear at one end of one bar (i.e. the cued location), at the other end of that same bar (i.e. the space-based, non-cued location), or at the equidistant end of another bar (the object-based, non-cued location) (Egly et al., 1994). Fiebelkorn et al. (2013) found evidence of rhythmic sampling at the cued and non-cued locations. Although rhythmic sampling at the cued location occurred at 8 Hz (with another, non-significant peak found at 4 Hz), the attentional sampling frequency at the non-cued locations depended on the location of the target. Namely, sampling of the non-cued target within the same object, at the other end of the bar, occurred at a frequency of 8 Hz, whereas sampling between objects, at the non-cued target on the other bar, happened at a lower rate of 4 Hz (Fiebelkorn et al., 2013). From this study, it appears that attention samples the cued location at approximately 8 Hz, but also periodically samples other locations where a target could appear. Helfrich et al. (2018) followed up on this study. Using a similar task design, they showed that behavioural oscillations in detection accuracy on the Egly-Driver task tightly map onto neural oscillations in the frontoparietal dorsal attention network (Helfrich et al., 2018). With respect to the specific nature of the observed behavioural oscillations, they did not find an 8-Hz peak at the validly cued location. Instead, they found that rhythmic sampling occurred within a broad frequency range around 4 Hz (the peak was only visible after alignment of all subjects' individual spectral peaks). Rhythmic sampling at the non-cued locations was not examined in their study. In summary, there is a discrepancy between studies regarding the exact frequency range of the effect at the cued location. Moreover, both studies show low effect sizes for this effect at the cued location. Another study, investigating rhythmic sampling in working memory, did not find any effect at the cued location. Given the importance of these studies and their relevance for the field, it is crucial to understand whether, and at what frequency, the effects occur at the cued location.
Within the behavioural rhythmic sampling literature, it has repeatedly been found that the frequency of behavioural oscillations depends on the number of behaviourally relevant locations in the visual field (Holcombe & Chen, 2013; VanRullen, 2016). Namely, attention seems to sample one object after the other at a ~8-Hz frequency, resulting in a split of this ~8-Hz sampling frequency over the total number of behaviourally relevant locations (Fiebelkorn et al., 2013; Jans et al., 2010; VanRullen, 2016). For instance, Landau and Fries (2012) found that two locations were each visually tracked at a ~4-Hz rhythm, where the fluctuating pattern of detection accuracy at one location was in anti-phase with that at the other location. This suggests that attention samples the two locations one after the other, with each location attended at every second sampling moment. Furthermore, when tracking two moving objects, each object is sampled at ~3-4 Hz, but when tracking three objects, the sampling frequency declines to ~2.3 Hz (Holcombe & Chen, 2013). However, none of these studies contain a probabilistic cue, which renders one location more relevant than another. This cue is key in the Egly-Driver task to modulate endogenous attention, but it could potentially also modulate the exact frequency of the rhythmic pattern at which attentional sampling occurs.
In endogenous attentional paradigms, spatial cues often function as a means to render spatial locations behaviourally relevant. The informativeness of the spatial cue, the cue validity, is the likelihood that the cue correctly predicts an upcoming stimulus. A spatial cue dictates the behavioural relevance of a certain location, that is, the extent of attentional resources devoted to that location. A very informative cue promotes sustained attention at one location, whereas a non-informative cue promotes the direction of attention towards multiple locations (Chou & Yeh, 2018). Cue informativeness can be regulated by changing the number of trials where a cue validly predicts an upcoming stimulus. A fully informative cue should prompt all attentional deployment towards the cued location and leave little to no attentional resources for sampling other locations. If 8 Hz is the fundamental rhythmic sampling frequency, then this should lead to sampling only at the cued location, at approximately 8 Hz. A less informative cue, on the other hand, would lower the behavioural sampling frequency. We hypothesise that a low cue informativeness (validity) would motivate observers to adopt a strategy where attentional resources are equally divided over locations, leading to more frequent switches from the cued object to the non-cued object. We therefore expect a predominant 4-Hz component in the power spectrum, indicating a regular and frequent sampling back and forth between the cued and non-cued locations (each of which is then effectively sampled at 4 Hz). Note that we expect a 4-Hz and not a ~2-Hz sampling frequency even though, in the Egly-Driver task, three locations have to be sampled, because the results in Fiebelkorn et al. (2013) clearly indicate that the frequency split in the Egly-Driver task was based only on objects, and not on locations. The notion that attentional switches away from the cued object to the non-cued object occur more frequently after non-informative cues than after informative cues has been discussed previously.
In this study, we investigated rhythmic sampling of attention using a modified Egly-Driver task (after Fiebelkorn et al., 2013; Helfrich et al., 2018). Firstly, we aimed to investigate whether there is evidence of rhythmic sampling at the cued location, and at what frequency. We expected to demonstrate evidence of rhythmic sampling around either 4 Hz or 8 Hz (or both), as reported in Fiebelkorn et al. (2013) and Helfrich et al. (2018). Secondly, we investigated whether the frequency of rhythmic sampling at the cued and non-cued locations depends on the informativeness of the cue. If behavioural relevance of spatial locations, manipulated by cue informativeness, indeed influences rhythmic attentional sampling, we should see predominant ~4-Hz cued-location sampling at low cue informativeness and predominant ~8-Hz sampling at high cue informativeness.
| Participants
In total, 32 participants (mean age: 23.0 years, range: 19-28, 19 females) took part in the study. All participants were right-handed and had normal or corrected-to-normal vision. Five participants were excluded due to a high number of blinks and/or saccades (>20%), and one participant was excluded due to outlying behavioural performance (z > 3). The study was approved by the Ethics Review Committee Psychology and Neuroscience (ERCPN) at Maastricht University, The Netherlands (ethical approval number: OZL-177_03_03_2017_S12), in concordance with the World Medical Association Declaration of Helsinki. All participants gave their written informed consent before participating in the study. Participants were compensated for their time with a monetary reward or participation credits.
| Procedure and experimental design
Participants performed a variant of the Egly-Driver task, modified to investigate detection performance across different cue-to-target interval bins (Egly et al., 1994; Fiebelkorn et al., 2013; Helfrich et al., 2018). Participants were seated in front of a PC monitor in a lightly dimmed room. Viewing distance was kept stable at 57 cm from the monitor by means of a chin rest. We performed video-based monocular eye tracking at 1000 Hz with the EyeLink 1000 system (SR Research, Mississauga, Ontario, Canada). A standard 9-point calibration and validation procedure was used to calibrate the eye tracker. After calibration of the eye tracker, participants were familiarised with the task using a practice block (60 trials, at 80% cue informativeness). Participants were asked to maintain fixation on a central fixation dot throughout each trial and to blink only after their response.
Trials started with a 300-ms fixation period, after which two peripheral white bars (size 4.4° × 22°, at 8.8° eccentricity) appeared on the screen, oriented either horizontally or vertically (Figure 1a). After a variable delay of 400-800 ms, a spatial cue (a black line around one end of one bar, thickness 0.5°, area of coverage 4.4° × 4.4°) appeared for a duration of 100 ms. The spatial cue predicted with varying probabilities (depending on the cue informativeness condition; see below) where the target would appear. The target was a small change in luminance at one end of one bar (size 4.4° × 4.4°), which appeared for a duration of 100 ms. Cue-to-target intervals were binned between 500 and 1683 ms in steps of ~16.7 ms (equivalent to the display refresh rate). The trial distributions were pseudorandomly constructed so that each interval bin contained four validly cued trials. Cue-to-target intervals were randomly distributed across the experiment. Participants were asked to press a button on a response box (with the right hand) if they detected the target and to refrain from responding if no target was detected. Responses were recorded within a window of 1500 ms. Target detection performance was titrated at 80% (similar to Helfrich et al., 2018) by adjusting the target luminance every 15 trials in steps of 1 RGB value (max. 255 [white]); thus, when performance was below 80%, the RGB value was increased by 1, and vice versa. The starting RGB value was 250. Task stimuli were presented on a gamma-corrected Iiyama ProLite monitor with a resolution of 1920 × 1080 and a refresh rate of 60 Hz. Stimuli were programmed using the Psychophysics Toolbox (PsychToolbox; Brainard, 1997) in MATLAB (MathWorks, Version 2018b).
Our task consisted of three different cue informativeness conditions (see Figure 1b). The moderately informative cue condition involved identical cue parameters as in Helfrich et al. (2018). Here, the cue indicated the correct location of the target in 80% of the cued trials. During invalid trials (20%), targets could either appear at the other end of the cued bar (10%) or on the other bar, at a location equidistant from the cued location (10%). In addition, we introduced two other conditions. First, we added a non-informative condition in which the cue correctly predicted the location of the target in 33% of the cued trials; the probability that the target would appear at the cued location or at either of the other two non-cued locations was thus equal. Second, we added a fully informative condition in which the cue always predicted the location of the target in cued trials; the distribution was thus 100% validly cued trials versus no invalidly cued trials. In each condition, we added a small number of catch trials (10%) on top of these valid-invalid trial distributions. Catch trials, in which a cue was shown but no target appeared, were meant to keep the participants engaged. We divided the session into six blocks, two for each cue informativeness type. The order of the cue informativeness conditions was counterbalanced across participants. Each condition consisted of 288 valid trials, so the number of invalid (non-cued) trials differed across conditions (non-informative cue: 864 trials; moderately informative cue: 72 trials), as did the total number of trials (non-informative cue: 950 trials; moderately informative cue: 396 trials; fully informative cue: 317 trials). Valid (cued) trials were equally divided over the 72 cue-to-target interval bins, which resulted in four valid trials per bin. Invalid (non-cued) trials were randomly divided over the cue-to-target interval bins (thus, on average, there were eight trials per bin in the non-informative cue condition and one trial per bin in the moderately informative condition). Before each block and in every short break (after 50 trials), participants were informed about the current cue informativeness percentage to encourage them to adopt an appropriate attentional strategy.
To allow us to directly replicate the behavioural findings of Helfrich and colleagues, our moderately informative cue condition involved nearly identical task parameters and data preprocessing and analysis (see next section), apart from the following: (1) Trials started automatically, and not at button press, in order to increase the flow of the experiment, (2) we used an eye tracker to filter out and discard saccades and eye blinks, (3) participants were informed about the cue informativeness percentage, and (4) we vastly increased the number of trials and the number of participants (originally seven participants [main experiment] or 14 participants [control experiment], mean of 190 trials) (Helfrich et al., 2018). Note, however, that the number of trials is substantially lower than reported in an earlier study with a similar paradigm by Fiebelkorn et al. (2013).
| Data preprocessing
Data were preprocessed and analysed using custom Python scripts. A total of five participants, for whom the number of rejected trials due to blinks and saccades exceeded 20%, were excluded from the analysis. Two participants were excluded due to a z-score above 3 on any behavioural performance measure (detection accuracy or reaction time), as this suggests non-compliance with the instructions. We removed all trials that were contaminated by saccades (exceeding 2° of visual angle) or eye blinks using an automatic detection algorithm, which detected the presence of blinks or saccades in the epoched eye-tracking data. The critical time window for trial exclusion ranged from cue onset until target onset (thus, trial lengths varied). This ensured that behavioural effects were not confounded by breaks of central fixation during the cue-target interval and that volunteers indeed performed covert and not overt shifts of spatial attention.
| Data analysis
To recreate the time course of hit rates (Egly-Driver task) over the entire cue-to-target interval, we followed the preprocessing steps of Helfrich et al. (2018). First, we calculated the average hit rate over a window of 50 ms. We then slid this window forward over the entire cue-to-target interval, in steps of 1 ms. We smoothed the raw time course using a boxcar rolling average with a window size of 25 ms. A representative time course can be found in Figure 1c. Next, we applied a Hanning window and zero-padded the time course to a length of 10 s. In order to analyse the power across the frequency spectrum, we applied a fast Fourier transform (FFT) to these preprocessed time courses.
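A minimal Python sketch of this pipeline is given below. It assumes per-trial cue-to-target intervals and hit/miss outcomes as inputs; some details (e.g. handling of empty bins and demeaning before tapering) are illustrative choices rather than the exact original implementation.

```python
import numpy as np

def detection_spectrum(times_ms, hits, t_start=500, t_end=1683, fs=1000):
    """Power spectrum of a time-resolved detection-accuracy course.

    times_ms : per-trial cue-to-target interval (ms)
    hits     : per-trial outcome, 1 = hit, 0 = miss
    Pipeline: 50-ms sliding mean in 1-ms steps, 25-ms boxcar smoothing,
    Hanning taper, zero-padding to 10 s, FFT.
    """
    times_ms, hits = np.asarray(times_ms), np.asarray(hits, float)
    t_grid = np.arange(t_start, t_end + 1)
    rate = np.full(len(t_grid), np.nan)
    for k, t in enumerate(t_grid):
        sel = (times_ms >= t - 25) & (times_ms < t + 25)
        if sel.any():
            rate[k] = hits[sel].mean()
    rate = np.nan_to_num(rate, nan=np.nanmean(rate))  # fill empty bins
    rate = np.convolve(rate, np.ones(25) / 25, mode='same')  # boxcar
    rate = (rate - rate.mean()) * np.hanning(len(rate))  # demean + taper
    padded = np.zeros(10 * fs)  # zero-pad to 10 s (0.1-Hz resolution)
    padded[:len(rate)] = rate
    freqs = np.fft.rfftfreq(len(padded), d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(padded)) ** 2
```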
For our first aim, we examined whether there is rhythmic sampling at the cued location and at which frequencies. Power spectra were first tested for significant peaks using non-parametric permutation testing at subject and group levels. We constructed a surrogate distribution of power spectra. To that aim, we first randomised the hits and misses in each time bin across the cue-to-target interval for each participant. Then, for 1000 iterations, we conducted the same steps as described above. At subject level (for each participant individually), we determined the p-value per frequency, represented by the proportion of values of the surrogate distribution that exceeds the power at that particular frequency. The frequency with the highest p-value above the confidence level (P < 0.05) between 2 and 10 Hz served as the peak frequency for that participant. At group level, we took two approaches to compare the observed data against the surrogate distribution. First, we constructed a group distribution of surrogate data by averaging the individual surrogate data for each permutation. We then determined the proportion of values of the group-averaged surrogate distribution that exceeds the group-averaged power spectrum (i.e. the p-value). Second, we ran a paired-samples t-test between the observed data and the medians of the power spectra of the individual surrogate distributions. All p-values were corrected for multiple comparisons using the false discovery rate (FDR) procedure (Benjamini & Hochberg, 1995).

Figure 1. Overview of methods. (a) Schematic overview of a single trial. Trials started with the appearance of a central fixation dot that participants were asked to fixate on throughout the trial. Horizontally or vertically oriented bars appeared after 300 ms and were shown for a variable duration of 400-800 ms, after which a cue appeared for 100 ms. After a variable cue-to-target interval (500-1700 ms), the target (a slight change in luminance) was shown for 100 ms. Participants were asked to press a button promptly upon target detection. (b) Schematic overview of cue informativeness conditions and likelihood of target appearance (in %) at each possible location. The moderately informative condition is a replication of Helfrich et al. (2018). On top of these valid-invalid trial distributions, 10% catch trials were added. (c) Illustrative behavioural time course of detection accuracy across the cue-to-target interval, at the cued location, for one typical participant.
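Building on the previous sketch, the permutation test can be outlined as follows. For brevity, this version shuffles hits and misses across all trials rather than strictly within time bins, which is a simplifying assumption.

```python
import numpy as np

def permutation_peak_test(times_ms, hits, n_perm=1000, fmin=2.0, fmax=10.0,
                          seed=0):
    """Score observed spectral power against a shuffled surrogate
    distribution, per frequency, reusing detection_spectrum() above."""
    rng = np.random.default_rng(seed)
    freqs, obs = detection_spectrum(times_ms, hits)
    band = (freqs >= fmin) & (freqs <= fmax)
    surrogate = np.empty((n_perm, band.sum()))
    for i in range(n_perm):
        _, p = detection_spectrum(times_ms, rng.permutation(hits))
        surrogate[i] = p[band]
    # p-value: fraction of surrogates whose power exceeds the observed power
    p_vals = (surrogate >= obs[band]).mean(axis=0)
    return freqs[band], obs[band], p_vals
```

The returned p-values would then still need a multiple-comparisons correction, such as the FDR procedure described above.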
Furthermore, we used two additional spectral analysis methods as per Helfrich et al. (2018). Both of these analyses include a group alignment of individual spectral peaks, to account for the fact that the exact spectral peak frequency might not be consistent across participants. First, we z-scored the power spectrum relative to the median and the SD of the surrogate distribution and selected the frequency with the highest z-value, in the range of 2-10 Hz, as the individual peak frequency (IPF). Second, we used Irregular Resampling Auto-Spectral Analysis (IRASA; Wen & Liu, 2016) to separate the oscillatory component from the fractal component (1/f activity) of the signal. We used a time window of 75% of the total signal and a step size of 50 ms. We applied IRASA to both the observed and permuted time series. We then selected the frequency where the oscillatory component maximally exceeded the fractal component. For both of these analysis methods, we aligned the individual power spectra according to the identified peak frequencies. Subsequently, we compared the aligned observed data against the median aligned surrogate data using a paired-samples t-test.
For our second aim, we examined whether cue informativeness altered rhythmic sampling behaviour. Firstly, to determine whether cue informativeness significantly affected overall perceptual accuracy, a 2 × 3 repeated measures analysis of variance (RM ANOVA) was conducted with factors LOCATION (cued, same-object non-cued and different-object non-cued) and CONDITION (moderately informative and non-informative). The fully informative condition could not be included in this analysis, because it lacks non-cued trials. To analyse the other two cue informativeness conditions (fully informative and non-informative cues), we repeated the above-mentioned spectral peak identification analyses (i.e. subject-level permutation testing, group-level permutation testing, paired-samples tests of observed data vs. the surrogate distribution, and group alignments based on z-scoring and IRASA).
As an extra analytical step, in order to provide more evidence for specific null or alternative hypotheses, we used the Bayesian framework for t-tests, as proposed by Rouder et al. (2009). Using JASP (JASP Team, 2020), we conducted Bayesian paired-samples t-tests between the observed power spectra and the medians of the surrogate power spectra for each cue validity condition and at each target location separately. We first averaged the power at predetermined peaks, namely at 4 Hz (between 3.5 and 4.5 Hz) and at 8 Hz (between 7.5 and 8.5 Hz), based on previous studies (Fiebelkorn et al., 2013; Helfrich et al., 2018). The null hypothesis (H0) poses that, around those predetermined peaks, there is no difference between the observed power and the power of the surrogate distribution; the alternative hypothesis (HA) states that the observed power is higher than the power of the surrogate distribution. The Bayesian analysis compares the likelihood of the data under HA versus under H0, resulting in the Bayes factor (BF). A BF10 of 3, for example, indicates that the data are three times more likely under HA than under H0 (Wagenmakers et al., 2018). A BF10 of 1 indicates no evidence, 1-3 anecdotal evidence, 3-10 moderate evidence, and 10-30 strong evidence for HA (for a full overview, see Wagenmakers et al., 2018). We always assigned a Cauchy prior distribution with r = 1/√2 to our analyses.
We also determined the effects of rhythmic sampling at the non-cued locations (i.e. same-object and different-object locations) and compared these effects across the moderately informative and non-informative conditions. This comparison was not possible in the fully informative condition due to a lack of non-cued trials. We constructed time series for the non-cued locations in the same manner as for the cued location (see above), except that we used a longer sliding window of 100 ms to accommodate the scarcity of trials. In the moderately informative cue condition, there were more trials (i.e. 288) at the cued than at the non-cued locations (i.e. 36 at each location), impeding direct comparison across locations. To overcome this, we took 10 samples of 36 hits and misses among the validly cued trials, created a time series for each sample (see above) and averaged these into one time series. Within each condition, for each location separately, we compared the observed data against the surrogate data using the two non-parametric testing analyses described above ([1] scoring the group-averaged power spectrum against a group-averaged surrogate distribution, and [2] performing a paired-samples t-test of individual power spectra vs. the medians of individual surrogate distributions). We also investigated whether the significant effects at the pooled non-cued locations in the moderately informative cue condition could be explained by an autocorrelation in the behavioural time course (Brookshire, 2021). We used the Monte Carlo singular spectrum analysis (SSA) method, originally proposed by Allen and Smith (1996), to differentiate the signal from aperiodic background activity. In this analysis, Monte Carlo simulations are used to estimate the expected spectral signal based on coloured AR(1) (autoregressive model with one positive coefficient) noise. To reduce our spectral resolution (unnecessary for this analysis), we down-sampled our time course to 50 Hz. SSA was performed with a sliding window of 20 samples using the Broomhead and King estimation (Broomhead & King, 1986). We used the source code for the Python implementation of SSA available at https://github.com/VSainteuf/mcssa.
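The logic of the AR(1) control can be illustrated with a simpler surrogate approach than the full Monte Carlo SSA: generate AR(1) noise matched to the lag-1 autocorrelation of the behavioural time course and test spectral peaks against it. The sketch below captures this idea but is not the mcssa implementation used in the analysis.

```python
import numpy as np

def ar1_surrogates(x, n_surr=1000, seed=0):
    """AR(1) noise surrogates matched to a behavioural time course.

    Fits the lag-1 autocorrelation of x and generates noise with the
    same variance, so spectral peaks can be tested against
    autocorrelated rather than white noise.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float) - np.mean(x)
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]       # lag-1 autocorrelation
    sigma = np.std(x) * np.sqrt(1.0 - phi ** 2)  # innovation SD
    surr = np.empty((n_surr, len(x)))
    for i in range(n_surr):
        eps = rng.normal(0.0, sigma, len(x))
        s = np.empty(len(x))
        s[0] = eps[0]
        for t in range(1, len(x)):
            s[t] = phi * s[t - 1] + eps[t]  # AR(1) recursion
        surr[i] = s
    return surr
```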
Finally, as a robustness check, we ran simulations to see what effect we would be able to statistically reveal using the current trial amount of four trials per bin. To this aim, we simulated data of 26 participants and a range of 1-10 trials per bin, in steps of one trial per bin. We simulated trials with hits or misses across the cue-target interval bins (72 in total, ranging stepwise between 500 and 1700 ms). Per trial, we randomly sampled between hit (1) and miss (0), where the probability of sampling a hit (P_hit) was defined by a 4-Hz sinusoid:

P_hit(i) = 0.8 + β · b · sin(2π · 4 · t_i),

where b is the effect size (amplitude), ranging from 0.0 to 0.2 in steps of 0.02, t_i denotes the cue-target interval of bin i, and β captures interindividual variability, based on the results that we found at the cued location in the moderately informative cue condition. The coefficient of variation at 4 Hz in the power spectrum was ~0.5 (0.54 precisely). For each participant, we randomly drew β from a distribution with mean = 1 and SD = 0.5. The probability of a miss (P_miss) is 1 − P_hit. Individual variability in the underlying frequency (4 Hz) was not taken into account in these simulations. We created 500 distributions of hits and misses and one surrogate distribution of 500 permutations per participant. Then, we ran these distributions through our analysis pipeline: construction of behavioural time series, zero-padding, applying a Hanning window, and performing an FFT. For each effect size, trials-per-bin value, and each of our 500 simulations, we scored the group-averaged simulated data against a surrogate distribution. The surrogate distribution was created by analysing, in the same pipeline, 500 distributions of 80% hits (1) and 20% misses (0).
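The core of this simulation can be sketched as follows. The sinusoidal formula is reconstructed from the description above (the placement of the subject-specific factor β is our reading of the text), and the clipping of probabilities to [0, 1] is an added safeguard not mentioned in the original.

```python
import numpy as np

def simulate_condition(n_subj=26, trials_per_bin=4, b=0.04, seed=0):
    """Simulated per-bin hit rates with a 4-Hz modulation of P_hit.

    P_hit(t) = 0.8 + beta * b * sin(2*pi*4*t), with beta ~ N(1, 0.5)
    drawn once per simulated participant (reconstructed formula).
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0.5, 1.7, 72)  # 72 cue-to-target bins, in seconds
    rates = []
    for _ in range(n_subj):
        beta = rng.normal(1.0, 0.5)
        p_hit = np.clip(0.8 + beta * b * np.sin(2 * np.pi * 4.0 * t), 0, 1)
        hits = rng.binomial(trials_per_bin, p_hit)  # hits per bin
        rates.append(hits / trials_per_bin)
    return t, np.array(rates)
```

The resulting per-bin hit rates can then be fed through the same spectral pipeline as the observed data to estimate statistical power per effect size.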
| No evidence of rhythmic attention at the cued location
For our first aim, we investigated whether there is evidence of rhythmic sampling at the cued location, and at which frequency, in an Egly-Driver task. Overall reaction time for this condition was 467 ± 45 ms (mean ± SD). In our experiment, detection accuracy was titrated at ~80% using an adaptive staircase procedure, as per Helfrich et al. (2018). The actual observed detection accuracy at moderate cue informativeness was 78.52% (±2.89%), which is only slightly but significantly lower than 80% (t25 = −2.56, P = 0.014). On a subject level, we assessed whether the individual spectral power exceeded the 95th percentile of the surrogate distribution at any frequency in the range between 2 and 10 Hz (see Figure 2a for a representative power spectrum). This was the case for only four participants, whose mean peak frequency lay outside the theta band (at 9.1 Hz). On average, the highest spectral peak was found at the ~85th percentile (84.96%) of the surrogate distribution, at a frequency of 4.64 ± 2.79 Hz (see Figure S2).
At group level, we scored the mean spectral power against the group-averaged surrogate distribution (see Figure 2b). The spectral power did not exceed the 95th percentile at any frequency (Pmin,uncorr. = 0.44 at 2.0 Hz). As a second means of comparing against the surrogate distribution, we ran a paired-samples t-test against the median of the individual surrogate distributions. Here, we found no significant spectral peaks either (Pmin,uncorr. = 0.52 at 2.0 Hz). In order to investigate the extent of evidence for the null hypothesis, we ran a paired-samples t-test under the Bayesian framework for spectral peaks at 4 and 8 Hz. Based on Fiebelkorn et al. (2013), who found a significant peak around 8 Hz, and Helfrich et al. (2018), who found a significant peak around 4 Hz, our HA at both of these peaks stated that the observed data are greater than the median surrogate distribution. We found a BF01 of 7.20 at 4 Hz (i.e. the data are 7.20 times more likely to fall under H0 than HA) and a BF01 of 17.35 at 8 Hz, indicating moderate and strong evidence for H0, respectively. Furthermore, we used two peak alignment methods as per Helfrich et al. (2018). First, we z-scored the power against the median and SD of the permutations and selected the highest z-score for each participant between 2 and 10 Hz (see Figure S1 for individual plots). Using this method, we found a mean peak frequency of ~5 Hz (4.67 ± 2.81 Hz). This peak was not significant, with a z-score of 1.28 ± 0.56 (Figure 2c). Second, we used irregular resampling (IRASA) to filter out the fractal (1/f) component of the signal. Between 2 and 10 Hz, the power spectrum exceeded the fractal component maximally at an average frequency of 4.64 ± 2.07 Hz. Once again, we constructed an aligned power spectrum around the IPF (see Figure 2d). As any peak will naturally arise due to the alignment to the maximum peak frequency over subjects, we needed to correct for this in our statistical test. Therefore, we ran each permuted time series (see Section 2) through the IRASA procedure and identified peaks for each permutation. Subsequently, we compared the aligned observed data against the median aligned surrogate data (peak at 5.24 ± 0.08 Hz) using a paired-samples t-test (see Figure 2d). At the spectral peak, the aligned observed power spectrum was significantly lower than the aligned surrogate power spectrum (t25 = −2.81, P = 0.009).
| No influence of cue informativeness on rhythmic attention at the cued location
As our second aim, we investigated whether the existence and extent of rhythmic attentional sampling depend on the informativeness of the cue. First, we assessed whether overall detection accuracy at each (cued and non-cued) location was dependent on the informativeness of the cue using a 2 × 3 RM ANOVA (Condition × Location). Data were normally distributed, as assessed by a Shapiro-Wilk test (P > 0.05 for all combinations). A Greenhouse-Geisser correction was applied due to violation of the sphericity assumption. Detection accuracy was significantly different across both LOCATION (F1.91,0.05 = 12.43, P < 0.001) and CONDITION (F1,0.06 = 24.33, P < 0.001), as well as the LOCATION × CONDITION interaction term (F1.62,0.05 = 11.56, P < 0.001); see Figure 3a. Post hoc analyses revealed that detection accuracy did not differ significantly across target locations within the non-informative cue condition. In contrast, within the moderately informative (80%) cue condition, detection accuracy at both the same object (0.73, 95% CI [0.04, 0.11], P = 0.001) and the different object (0.11, 95% CI [0.08, 0.14], P < 0.001) was significantly lower than at the cued location. There was no significant difference in detection accuracy at the same object versus at the different object (0.37, 95% CI [−0.02, 0.09], P = 0.17). These results indicate that we successfully altered behavioural performance by altering the informativeness of the cue. Note that there was no significant difference at the cued location between the cue informativeness conditions, as expected, because detection accuracy was always titrated at ~80%.

Figure 2. Finding evidence of rhythmic attentional sampling behaviour at moderate cue predictability (replication of Helfrich et al., 2018). (a) Representative single-subject power spectrum (in black) after applying a fast Fourier transform (FFT) to the behavioural time courses (e.g. as in Figure 1c). One method of determining the presence of distinct peaks is comparing each subject's power spectrum to that subject's surrogate distribution (95th percentile in red, dotted), created by permuting the hits and misses 1000 times across the cue-to-target interval. (b) Group-averaged power spectrum (in black, mean ± SEM) and 95th percentile of the group-averaged surrogate distribution. (c) Aligned power spectrum (mean ± SEM) after z-scoring each individual power spectrum against the median and SD of the permutations and taking the highest z-score as the individual peak frequency (IPF). The red dotted line denotes statistical significance (i.e. a z-score of 1.645). (d) Aligned power spectrum (mean ± SEM, in black) after applying irregular resampling (IRASA) to extract the 1/f (fractal) component. The IPF is the frequency at which the power spectrum maximally exceeds this 1/f component. As an additional control, we constructed aligned power spectra of the randomly permuted data and performed a paired-samples t-test against the median aligned power spectra (mean ± SEM, in red) per frequency. The observed aligned spectral peak is significantly lower than the surrogate aligned peak (** denotes P < 0.01).
We repeated the above-mentioned spectral peak identification methods to assess attentional rhythmicity at the cued location across our different cue informativeness conditions (for the fully informative and non-informative cues, see Figure 3b,c). Scoring subject by subject against the surrogate distribution yielded no significant spectral peaks (fully informative cue: 87.36% at 5.09 ± 2.55 Hz; non-informative cue: 82.78% at 6.43 ± 2.58 Hz). There were also no significant spectral peaks at group level, neither according to a paired-samples t-test of the observed power spectrum against the medians of the permutations (fully informative cue: Pmin,uncorr. = 0.14 at 3.5 Hz; non-informative cue: Pmin,uncorr. = 0.77 at 6.6 Hz) nor when scoring the observed data against a group-averaged surrogate distribution (fully informative cue: Pmin,uncorr. = 0.14 at 3.6 Hz; non-informative cue: Pmin,uncorr. = 0.62 at 6.6 Hz). In order to assess the evidence for the null hypothesis, we also ran a Bayesian paired-samples t-test to compare the spectral power of the observed data to the median of the surrogate distribution. As mentioned in the introduction, we have a directional HA for the fully informative cue condition at 8 Hz, stating that the observed data are greater than the median surrogate distribution, and a non-directional HA at 4 Hz, stating that the observed data are different from the median surrogate distribution. We found a BF01 of 2.94 at 4 Hz and a BF01 of 4.61 at 8 Hz, indicating anecdotal and moderate evidence for H0, respectively. For the non-informative cue condition, our hypotheses are reversed: HA at 8 Hz states that the observed data are different from the median surrogate distribution, whereas HA at 4 Hz states that the observed data are greater than the median surrogate distribution. Here, we found a BF01 of 13.16 at 4 Hz (i.e. the data are 13.16 times more likely to fall under H0 [no difference] than HA) and a BF01 of 2.72 at 8 Hz, indicating strong and anecdotal evidence for H0, respectively.
We also did not find evidence of attentional rhythmicity using the two spectral peak alignment methods after Helfrich et al. (2018). We z-scored against the mean and SD of the permutations (fully informative cue: a z-score of 1.56 ± 0.15 [mean ± SEM] at a mean peak frequency of 5.09 ± 2.55 Hz [mean ± SD]; non-informative cue: a z-score of 1.31 ± 0.16 [mean ± SEM] at a mean peak frequency of 6.43 ± 2.61 Hz [mean ± SD]). See Figures S3 and S4 for power spectra aligned around the identified spectral peaks for both additional conditions. Using IRASA, we found spectral peaks around ~5 Hz for both conditions (fully informative: 5.25 ± 0.48 Hz; non-informative: 5.69 ± 0.49 Hz), but the power at the subsequently aligned peaks was either not significantly higher than the median of the surrogate aligned peaks (fully informative cue: t25 = 0.34, P = 0.74) or was even significantly lower (non-informative cue: t25 = −2.47, P = 0.02; see Figures S3 and S4).
Our simulation showed that, from four trials per cue-target interval bin onwards (the number of trials per bin used in our study) and with a P-value cut-off of 0.001, the power would be 80% for effect sizes as low as 0.04, corresponding to a sinusoidal modulation of detection accuracy varying between 0.78 and 0.82. With 10 trials per bin, we would have reached a power of >80% (i.e. 100%) with an effect size as low as 0.04. A heatmap of the proportion of significant tests for each combination of effect size and trials per bin, for different cut-off values (P < 0.05, P < 0.01, and P < 0.001), can be found in Figure S5.
| Characteristics of rhythmic attention at non-cued locations
We were also interested in whether the effects of cue informativeness might become visible at the non-cued locations instead of the cued locations. We therefore analysed the time course and spectral power at the non-cued locations using location-specific permutation testing, after Fiebelkorn et al. (2013). Note that these data can only be considered exploratory. Due to the low number of trials per bin at the non-cued locations in the moderately informative cue condition (our focus was on the cued conditions), we cannot construct a time course that is as reliable as in the original study by Fiebelkorn and colleagues. Descriptively, there is no phase opposition visible, as was the case in Fiebelkorn et al. (2013) (see the blue and yellow time courses in Figure 4b). To investigate whether the power spectra of these time courses contained any spectral peaks, we scored the group-level spectral power between 2 and 10 Hz against the group-averaged surrogate distribution within each cue informativeness condition and for each location separately. We did not observe any significant spectral peaks (i.e. spectral power exceeding the 95th percentile of the surrogate distribution) at any location within the non-informative cue condition (cued: Pmin,uncorr. = 0.527 at 9.3 Hz; same object: Pmin,uncorr. = 0.10 at 5.4 Hz; different object: Pmin,uncorr. = 0.60 at 6.00 Hz). In contrast, the indication of attentional rhythmicity at the non-cued locations was stronger when the cue was moderately informative. Namely, in the moderately informative condition, we observed a spectral peak around ~3 Hz at the different-object location, which was not significant after corrections for multiple comparisons (2.7-3.5 Hz, Pmin,uncorr. = 0.02 at 3.2 Hz, Pmin,FDR = 0.42). Moreover, although a spectral peak at ~7-8 Hz is visible at the same-object location, it is not significant either (Pmin,uncorr. = 0.10 at 7.6 Hz). There was no distinctive spectral peak at the cued location (Pmin,uncorr. = 0.40 at 4 Hz). We repeated our analysis after first averaging the time series at each location and constructing one power spectrum of this averaged time course. We did not find any significant peaks after scoring this power spectrum against a surrogate distribution (all Pmin,uncorr. > 0.8). Thus, although the insufficient number of non-cued trials and the resulting low statistical power hamper adequate interpretation, these results seem, descriptively, consistent with the most prominent result of Fiebelkorn and colleagues (2013).

Figure 3. Rhythmic attention at other cue conditions. (a) Perceptual accuracy at cued and non-cued locations for each cue informativeness condition. Within the moderately informative cue condition, but not within the non-informative cue condition, detection accuracy at both the same object and the different object was significantly lower than at the cued location. Triple asterisks (***) denote statistical significance with P < 0.001. (b-c) Power spectra (mean ± SEM, in black) for time-resolved behavioural estimates of detection performance at the cued location for the non-informative cue condition (b) and the moderately informative cue condition (c). Dotted lines denote the 95th percentile of the surrogate distributions.
The behavioural time courses showed a phase-consistent pattern for the two non-cued locations in the moderately informative cue condition. Thus, we pooled the time courses across these two locations (see Figure 4). Once again, we scored the data against the group-averaged surrogate distribution of hits and misses at the pooled, non-cued locations. We found a distinct, significant spectral peak at 7-8 Hz (P min,FDR < 0.001), specifically between 7 and 8.3 Hz. Our effects were still significant after correcting for autocorrelated (AR(1)) noise (P < 0.001 for 7.2-7.5 Hz) (see Figure S6). When we repeated the same analysis for the non-informative cue, we did not find any significant effects (all P > 0.05). Finally, to verify that we did not have evidence of opposing phase effects for the two non-cued locations (same-object vs. different-object locations), as previously found by Fiebelkorn et al. (2013), we again pooled across the non-cued locations but flipped the polarity of the different-object location. Any phase opposition should enlarge the sinusoidal modulation after this sign flip. No significant effects were found (all P > 0.05).
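For reference, the scoring-against-surrogates step and the FDR correction behind the P min,FDR values reported above can be sketched as follows. `power_obs` and `power_surr` are placeholders for the observed group-level spectrum and the stack of surrogate spectra; the variable names and the Benjamini-Hochberg implementation are ours, not the authors'.

```python
import numpy as np

def surrogate_pvals(power_obs, power_surr):
    """Uncorrected p per frequency: fraction of surrogates >= observed power."""
    exceed = (power_surr >= power_obs[None, :]).sum(axis=0)
    return (exceed + 1) / (power_surr.shape[0] + 1)

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of frequencies surviving FDR."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)   # p_(k) * m / k
    passed = np.nonzero(scaled <= q)[0]
    mask = np.zeros(m, dtype=bool)
    if passed.size:                                # reject all ranks up to the
        mask[order[: passed.max() + 1]] = True     # largest passing rank
    return mask
```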
F I G U R E 4 Time-resolved behavioural estimates of detection accuracy (mean ± SEM) at the cued (in grey) and non-cued locations (pooled data, in brown), and the same-object non-cued (in yellow) and different-object non-cued (in blue) locations, for the non-informative cue condition (a) and the moderately informative cue (replication) condition (b). Note the pronounced waxing-and-waning pattern of perceptual performance at the non-cued locations in the moderately informative cue condition (in b). (c-d) Location-specific power spectra for the non-informative cue condition (c) and the moderately informative cue condition (d) (grey: cued location; brown: non-cued locations (pooled data); yellow: same-object location; blue: different-object location). Dotted lines in corresponding colours denote the 95th percentile of the surrogate distributions. Triple asterisks (***) denote statistical significance with p < 0.001
| DISCUSSION
Recent studies have suggested that visuospatial attentional performance is not continuous but fluctuates rhythmically, reflected in a waxing and waning of perceptual sensitivity. It is thought that these heightened periods of perceptual sensitivity allow for attentional shifts from one spatial location to another. This rhythmic attentional sampling mechanism expresses itself as a rhythmic pattern of behavioural performance (e.g. detection accuracy) at each separate spatial location. Here, we investigated rhythmic attention at the validly cued location in a modified Egly-Driver task. As previously reported in studies employing this task (Fiebelkorn et al., 2013; Fiebelkorn et al., 2018; Helfrich et al., 2018), and a similar task version in monkeys, we expected to demonstrate evidence of rhythmic attention around either 4 or 8 Hz, or both. In addition, we also tested for a possible effect of the behavioural relevance of the sampled location, comparing three conditions: a fully informative cue, a moderately informative cue and a non-informative cue. Using several different peak identification methods, we found no significant rhythmic attentional sampling at the cued location. Our null results are further corroborated by hypothesis testing in the Bayesian framework, which uniformly points towards evidence for the absence of an effect. Our manipulation of cue type did show a main effect of cue informativeness on overall behavioural accuracy (see Figure 3). However, we found no differences in spectral power at the cued location across cue informativeness conditions as a result of this manipulation. The spectral patterns of detection accuracy at the non-cued (i.e. same-object and different-object) locations in the moderately informative cue condition bear similarities with Fiebelkorn et al. (2013), although the low number of trials in this condition and the absence of a specific a priori hypothesis impeded us from drawing robust conclusions. Interestingly, however, when inspecting behavioural performance pooled across both non-cued locations, a highly significant and distinct 7- to 8-Hz pattern of rhythmic attentional sampling was revealed for the moderately informative cue condition. These patterns were not present for the non-informative cue condition.
In an earlier study using a modified Egly-Driver task, Fiebelkorn et al. (2013) found effects at the cued location (at ~8 Hz), but the most prominent effects were found at the non-cued locations. In a similar paradigm, Helfrich et al. (2018) found significant rhythmicity in attention (at ~4 Hz) at the cued location. They focused only on the cued location, because a sufficient number of trials to analyse non-cued locations was not feasible in their (ECoG) study. In contrast to these two studies, our study failed to find pronounced effects at the cued location but did find significant effects at the non-cued locations. Another recent study assessing behavioural rhythmicity in an object-based versus space-based paradigm also found no significant performance fluctuations at the cued location. In that study, it was suggested that attention clearly prioritises the cued location when the cue renders a location behaviourally relevant. As a result, attention never fully switches away from the cued location so that, consequently, no rhythmicity in behaviour can be found there (Lou et al., 2020; Peters et al., 2020). In certain moments, the authors argue, attention swiftly sweeps across the object towards the non-cued location on the same object; in other moments, it is distributed across the cued location and the non-cued location on the different object. It could be that rhythmicity in attention becomes visible only when attention needs to swiftly reorient to less relevant locations and back to the most relevant location, as has been previously found (Senoussi et al., 2019). This notion seems consistent with our results at non-cued locations, suggesting that following an informative cue, attention rhythmically monitors the two non-cued locations at 7-8 Hz. It is also consistent with our results in the non-informative cue condition, in which there seem to be no rhythmic sampling effects at any (cued or non-cued) location. We speculate that, in that case, no reorienting of attention is necessary, as each location bears a similar behavioural relevance or attentional weight as the others. Note that this is in the absence of a very salient cue, such as a flash stimulus (as used in Landau & Fries, 2012), which could serve as a sampling starting point even though it is non-informative.
Our results of attentional sampling at non-cued locations in the behaviourally relevant cue condition fit well with findings reporting that the 7- to 8-Hz frequency is most often found when attention is undivided and directed to one location only, as found in electrophysiological studies (Busch et al., 2009; Busch & VanRullen, 2010). Curiously, whereas earlier studies found that non-cued locations on the same object and the different object were sampled in an anti-phasic pattern (Fiebelkorn et al., 2013; Peters et al., 2020), we instead found a phase-consistent pattern in this study.
We expected to see a shift of the rhythmic sampling frequency at the cued location when the behavioural relevance of the cue disappears. Namely, once the informative value of the cue disappears, non-cued locations should be given more attentional weight. This would result in a loss of the detection accuracy benefit at the cued location, which normally occurs when the cue is behaviourally relevant (Chou & Yeh, 2018; He et al., 2004). Indeed, our overall behavioural results clearly show that the benefit of the cue dissipates when the cue becomes non-informative. There is a clear detection accuracy gain at the cued location compared with the non-cued locations in circumstances where the cue carries information. This performance benefit disappears completely after a non-informative cue, where detection accuracy at each location is equal. It has been proposed that the less relevant a cue is, the more attentional saccades are to be expected towards other locations. A completely non-informative cue renders all locations equally relevant for attention, resulting in a systematic one-by-one rhythmic sampling of all possible locations (Jia et al., 2017). Indeed, in a task that requires monitoring two equally relevant locations, each location seems to be sampled in alternation, each at a ~4-Hz rhythm (Landau & Fries, 2012; Re et al., 2019; VanRullen, 2016). We successfully manipulated the attentional weight allocated to the cued versus non-cued locations, as shown by the significant difference between the average accuracy at the non-cued locations and the cued location in the moderately informative condition and the absence of such a difference in the non-informative cue condition. However, attentional rhythmicity at the cued location seems to be unaffected by the informativeness of the cue. The differential effect of a location's attentional weight on rhythmic attention towards that location may play a role in the vast dispersion of reported frequencies in rhythmic attention paradigms.
Several methodological factors should be discussed in light of our null results at the cued location. First, by losing its informative character, a cue might fail to structurally reset the brain's rhythmic sampling pattern. A non-informative cue can still be salient enough to reset spatial sampling, for example, when it is a bright flash (Landau & Fries, 2012); when it is not, it might fail to reset spatial sampling because it bears no attentional weight. This could explain the absence of evidence for attentional rhythmicity at any (cued or non-cued) location during the non-informative condition. The ability of the cue to reset the brain's overt attentional sampling is a necessary prerequisite for reliably evaluating fluctuations in detection accuracy in a behavioural paradigm (VanRullen, 2016). Indeed, it is established that salient stimuli, such as a loud noise, reset ongoing neuronal oscillations (Lakatos et al., 2008). A 'flash' event that is salient enough clearly initiates a reliable object-to-object attentional sampling pattern that always starts at the flashed location (Landau & Fries, 2012). In line with this notion, in the present study, we could have used a more salient, briefly flashing exogenous cue to increase our confidence that we reliably reset the attentional sampling phase in each trial. Another option would have been to present an auditory stimulus concurrently with the visual cue (Fiebelkorn et al., 2011; Lakatos et al., 2007).
Second, detection accuracy was titrated at ~80% in the present study, after Helfrich et al. (2018). We wonder whether this percentage was sensitive enough in our sample, as other studies tend to use lower thresholds (Busch & VanRullen, 2010; Fiebelkorn et al., 2013). Even though Helfrich et al. (2018) adjusted the detection threshold to ~80% (to keep subjects engaged), they found similar effects when running an identical task in a control sample. The main issue with a high detection threshold is a ceiling effect of detection performance, which masks the full potential amplitude of attentional fluctuations. This would result in behavioural oscillations that are lower in amplitude or capped, hampering adequate inspection of power-frequency components in the behavioural time courses.
Third, rhythmic attentional sampling paradigms need a vast number of trials in order to construct time-resolved behavioural estimates of detection performance across cue-target intervals. We were able to reliably estimate effects at the cued location, as we had 288 validly cued trials for this analysis. This number is considerably higher than in the study by Helfrich et al. (2018), which had ~137 validly cued trials on average. To gain additional insight into this matter, we ran simulations across combinations of different trials per bin and effect sizes. With an effect size (the amplitude of the underlying sinusoidal function) down to 0.04, we found a statistical power of 95% at P < 0.05. Even though these simulations assume that fluctuating patterns of performance are equally present and equally measurable in all participants, they indicate that the current number of cued trials could detect an effect size that we deem relevant (a sinusoidal accuracy modulation between 0.78 and 0.82). Nevertheless, a higher number of trials would have contributed to more robust results for two main reasons. Firstly, the lower the trial number, the more noise enters the data, arising from momentary lapses in attention and responses unrelated to the rhythmic fluctuations in attention under study. Secondly, a comprehensive and reliable analysis of all factors and conditions would have required adjusting the study to contain a sufficient number of both cued and non-cued trials. For example, Fiebelkorn et al. (2013), which included an analysis of effects at the non-cued locations, used ~388 non-cued trials, compared with 72 in this study. Thus, though the number of trials in this study was sufficient to draw conclusions on rhythmic attention at cued locations, it was likely not sufficient to confidently draw conclusions about rhythmic attention at non-cued locations in the moderately informative condition. Adding more trials would furthermore have allowed us to determine hemifield-specific effects, which have been found in other paradigms (Landau & Fries, 2012).
Fourth, a related issue that could have brought more noise into the data is that trials were not self-initiated. Self-initiated trials would have allowed for more frequent breaks by the participant, for example, when feeling fatigued or overwhelmed by the task. The automatically initiated trials in this study might have introduced more noise through decreased attentional performance caused by fatigue. However, we did introduce very frequent breaks, after every 50 trials (around 2.5 min), allowing the participant to take as long a break as desired.
Fifth, the addition of an electrophysiological technique would have provided more insight into, and evidence for, the phenomenon of rhythmic attention in this study. Using electrophysiology, it is possible to directly link behavioural performance (e.g. detection accuracy) with neuronal processes, as has been shown before (Helfrich et al., 2018). This not only sheds more light on the neural correlates of rhythmic attentional sampling but also allows for a trial-by-trial analysis of the data (as opposed to relying on across-trial, aggregated performance).
There is no clear consensus on methods to analyse the spectral dynamics of behavioural oscillations (Helfrich et al., 2018; Zoefel et al., 2019; Zoefel & Sokoliuk, 2014). In this study, we used several approaches to find meaningful spectral peaks. We used permutation-based approaches, in which the observed power at each frequency is compared to a surrogate distribution (Fiebelkorn et al., 2013; Helfrich et al., 2018). We also used an approach in which we separated the oscillatory activity from the 1/f background activity using IRASA, with subsequent alignment of the highest oscillatory power above the 1/f background (Helfrich et al., 2018; Wen & Liu, 2016). The 1/f signal, also called pink noise, is a widely occurring phenomenon in natural and biological systems, in which power tends to fall off with increasing frequency. As such, it is a known component of neurophysiological signals (He, 2014). The behavioural time series in this study are an aggregation of averaged performance at discrete time points and therefore do not resemble a naturally occurring, continuous measurement. However, 1/f noise has previously been found in similarly constructed time series of cognitive performance (Gilden et al., 1995; Kello et al., 2010; Wagenmakers et al., 2004). Thus, there is still good reason to believe that there is a distinct 1/f component present in our behavioural signal. Nonetheless, it could be fruitful to explore other means of quantitatively analysing spectral peaks relative to the 1/f component. One such method, FOOOF (Fitting Oscillations & One-Over-F), allows more elaborate modelling of the 1/f signal through more extensive parameter setting (Donoghue et al., 2020). It reportedly overcomes a problem of IRASA, in which large-amplitude oscillations are difficult to separate from the 1/f background (Donoghue et al., 2020). Thus, in studies of behavioural oscillations, it is necessary to carefully consider the suitability of the spectral peak detection method.
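The core idea of IRASA can be illustrated with a didactic sketch, loosely after Wen & Liu (2016): resampling the series by pairs of factors (h, 1/h) while keeping the nominal sampling rate fixed shifts oscillatory peaks but leaves the self-similar 1/f component in place, so the median of the geometric-mean spectra estimates the fractal background. This is a simplified sketch for a short, evenly sampled series, not the pipeline used in the study, and the resampling factors are illustrative.

```python
import numpy as np
from scipy.signal import periodogram, resample

def irasa_sketch(x, fs, hset=(1.1, 1.3, 1.5, 1.7, 1.9)):
    """Estimate the 1/f (fractal) spectrum via resampling pairs (h, 1/h)."""
    f, pxx = periodogram(x, fs=fs)
    n = len(x)
    pair_spectra = []
    for h in hset:
        x_up = resample(x, int(round(n * h)))    # stretch: peaks shift to f/h
        x_dn = resample(x, int(round(n / h)))    # compress: peaks shift to f*h
        f_up, p_up = periodogram(x_up, fs=fs)    # fs kept fixed on purpose
        f_dn, p_dn = periodogram(x_dn, fs=fs)
        p_up_i = np.interp(f, f_up, p_up)        # back onto the original grid
        p_dn_i = np.interp(f, f_dn, p_dn)
        pair_spectra.append(np.sqrt(p_up_i * p_dn_i))  # geometric mean per pair
    fractal = np.median(pair_spectra, axis=0)    # 1/f estimate
    return f, fractal, pxx - fractal             # residual = oscillatory part
```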
Behavioural or electrophysiological spectral group data can be attractively displayed using spectral peak alignment (as in Helfrich et al., 2018; see also Holt et al., 2019; Richter et al., 2017). In our study, even though we did not find effects at the individual level, aligned peaks still provided a strong visual impression that spectral peaks were present (Figure 2c,d). However, statistical analyses suggest the opposite. We showed that peak alignment, both after z-scoring the observed data against the surrogate data and after the IRASA procedure, did not result in a statistically significant peak. Performing the IRASA procedure on randomised data yielded an aligned peak that was higher than the aligned peak of the observed data. This shows that whereas peak alignment is a visually attractive means of presenting group data, it might give a false impression that spectral peaks are present in the data. Therefore, it is necessary to conduct and report statistical tests of the relevance of the group-aligned spectral peak in question.
Above all, the current study illustrates that a robust method for evaluating rhythmic attentional sampling in behavioural paradigms has yet to be established. There is a strong need to reduce the vast variability in paradigms and move towards more standardised ways of collecting data in rhythmic behavioural studies. For example, a systematic way to reset the ongoing attentional sampling phase (e.g. a salient flash) could be implemented in most behavioural paradigms. The effect of this phase-resetting event could be further validated by electrophysiological findings. Another question is whether dichotomous data (e.g. detection accuracy) or continuous data (e.g. reaction times) better capture rhythmic variability in behaviour. Both types have been used in rhythmic attention paradigms (Fiebelkorn et al., 2013; Landau & Fries, 2012; Peters et al., 2020). Furthermore, there is still no consensus on the best way to identify peaks in power spectra derived from behavioural time courses.
In conclusion, we found no effect of rhythmic attention, nor effects of the behavioural relevance of a cue on rhythmic attention, at the cued location. We did find indications of periodic attentional sampling towards non-cued locations, with effects occurring specifically when the cue renders one location behaviourally relevant (i.e. it is informative). These attentional switches seem to occur systematically, in a theta-rhythmic fashion. However, these results are not suitable for drawing robust conclusions due to the absence of a priori expectations and the low number of trials in this condition.
The attentional weight of an object or location that is required in the wide landscape of rhythmic attention paradigms might play a role in the vast differences observed in rhythmic sampling frequency. However, more research needs to be conducted into the exact role of a spatial location's attentional weight in rhythmic sampling. Other open questions concern the causes of interindividual variability in sampling frequency and the role rhythmic attention serves amongst other attentional processes.
Optimisation on the thermal insulation layer thickness in buildings with environmental analysis: an updated comprehensive study for Turkey’s all provinces
This study determines the optimum insulation layer thickness to be applied to external building walls considering the heating degree-day (HDD) method; energy saving costs, payback periods, and carbon dioxide (CO2) emissions are then calculated accordingly. The optimisation analysis is performed for four different thermal insulation materials (glass wool, rock wool, extruded polystyrene, and expanded polystyrene). Natural gas is chosen as the fuel for heating purposes, and horizontal perforated brick is used in the wall. One of the original features of this study is the environmental analysis to determine the CO2 emission of the insulated wall in Turkey's provinces. Another feature is that it uses the most up-to-date data on HDD values and fuel and insulation material costs. The worst and best insulation materials are found to be rock wool and glass wool, respectively. The optimum insulation layer thickness for the best case varies between 0.07 m and 0.23 m, depending on the HDD values of the provinces. The annual total energy saving cost is in the range of 4.4-53.5 $/(m2 year), and the payback period is 0.11-0.38 years. Besides, the reduction in annual CO2 emission varies between 53.2% and 94% for the best case, compared to the uninsulated wall.
INTRODUCTION
Energy demand and consumption escalate rapidly with the increasing population all over the world. Countries that supply most of their energy through imports resort to further import policies to meet energy demand. The increase in energy imports may cause energy bottlenecks in the future. From this point of view, technological developments that increase energy saving appear to be a cheaper route than energy imports. The energy consumption of the building sector in Turkey, at 27%, is ahead of other sectors (transportation, industry, agriculture and forestry, commercial and public services), according to the statistical data in [1]. Most of the energy consumed in buildings is used in space-heating systems in Turkey [2]. Improving the building envelope by adding thermal insulation material has become an effective method to reduce heating and cooling demands [3]. A law enacted in Turkey aims to use energy effectively, reduce energy costs, and protect energy resources and the environment [4]. An energy identity certificate regarding thermal insulation material has been made compulsory by the energy performance regulation for buildings [5].
Diminishing the energy demands of buildings can be achieved by minimising heat losses [6]. The vast majority of heat is lost through the external building wall due to inadequate insulation thicknesses, thereby leading to energy waste [7]. Therefore, increasing the insulation layer thickness reduces heat losses through the wall significantly, thereby cutting back on the expenses required for thermal comfort in buildings [8]. However, the insulation thickness must be neither too high nor too low if energy is to be used efficiently [9]. A building wall with low insulation thickness allows heat to pass from inside to outside or outside to inside, with adverse impacts on thermal comfort, energy savings, and air pollution [10]. A building wall with thick insulation material reduces the heat loss and the subsequent heating load and fuel cost, but each increment in insulation thickness causes a gradual increase in the investment cost of insulation [7,10]. The optimum insulation layer thickness varies depending mainly on the degree-day values, the fuel types, and the insulation materials [11,12].
Contemporary studies involving the optimisation analysis of thermal insulation layer thickness have primarily focused on the efficient use of energy in buildings [13,14]. Besides determining the optimum thicknesses, some studies include cost [15,16] and environmental [17][18][19] analyses. It is vital to choose a thermal insulation material with an appropriate layer thickness according to the climatic conditions to ensure maximum energy saving [20]. Although there are many similar studies for different countries, the literature review here is limited to findings for Turkey. Turkey's climate zones are divided into four main climate types by an older Turkish Standard [21]. A significant number of studies consider just one city regardless of the climatic region in Turkey, for example, Malatya [10], Denizli [15,18,22], Erzurum [17], İstanbul [20], Bilecik [23], Bursa [24], İzmir and Ankara [25], and Diyarbakır [26]. Moreover, some studies are carried out by selecting only one [27][28][29][30][31][32][33][34][35][36] or more [11][12][13][37][38][39] cities from each climatic zone, or merely the cold cities [9,14] in Turkey. In most of them, different parameters (types of bricks, insulation materials, and fuels) are investigated by economic and environmental analysis to find the ideal configuration. Some of the notable works are detailed below.
Çomaklı and Yüksel [9] conducted a life-cycle cost analysis to determine the optimum insulation layer thickness for Erzurum, Kars, and Erzincan, the coldest provinces of Turkey. The authors used stropor as insulation material and coal as fuel in their study. They calculated the optimum insulation layer thicknesses as 0.105 m, 0.107 m, and 0.085 m for Erzurum, Kars, and Erzincan, respectively. They also noted that energy-saving costs reach up to 12.7 $/(m2 year) and that the maximum payback period is 1.58 years. Moreover, the same authors [17] investigated the environmental effect of fuel oil on the external building wall containing stropor in Erzurum, Turkey. The authors reported that the reduction in carbon dioxide (CO2) emission is about 27%.
Bolattürk [13] carried out a life-cycle cost analysis for sixteen different cities of Turkey (İskenderun, Adana, Antalya, Aydın, Manisa, Trabzon, İstanbul, Mardin, Uşak, Isparta, Eskişehir, Nevşehir, Erzincan, Hakkâri, Ağrı, and Ardahan), five different fuel types (coal, natural gas, fuel oil, liquefied petroleum gas (LPG), and electricity), and one insulation material (polystyrene) to determine the optimum insulation layer thicknesses, energy saving costs, and payback periods. The author indicated that the optimum insulation layer thicknesses range from 0.024 m to 0.172 m, the improvements in energy saving costs change between 22% and 79%, and the payback periods vary from 1.3 to 4.5 years, depending on the parameters. The researcher suggested that the most suitable fuel type is natural gas for all climatic conditions when examined for atmospheric contamination. In another study by Bolattürk [40], the optimisation analysis for polystyrene layer thickness is performed by considering both heating and cooling demands for seven different cities in Turkey (Adana, Antalya, Aydın, Hatay, İskenderun, İzmir, and Mersin). The author used the P1-P2 economic model to calculate the optimum insulation layer thickness and concluded that the heating degree-day has a more significant effect than the cooling degree-day on the determination of optimum insulation layer thicknesses for Turkey's climatic conditions. Dombaycı et al. [15] conducted a life-cycle cost analysis of the optimisation of insulation layer thickness using two different insulation materials (expanded polystyrene and rock wool) and five different fuel types (coal, natural gas, fuel oil, LPG, and electricity). The authors stated that the best insulation material is expanded polystyrene and the ideal fuel type is coal for Denizli, Turkey. They also reported that the optimum insulation layer thicknesses vary between 0.032 m and 0.259 m, the energy-saving costs range from 4.6 $/(m2 year) to 102.9 $/(m2 year), and the payback periods change from 1.15 to 3.03 years. Also, Dombaycı [18] investigated the environmental impact of the optimum insulation layer thickness for the best insulation material (expanded polystyrene) and fuel type (coal) and found that the reduction in CO2 emission is 41.5%. Then, Dombaycı et al. [32] examined the optimum insulation layer thickness with economic and environmental analyses for Aydın, Samsun, Eskişehir, and Ardahan, which are located in four different climate zones of Turkey. The authors selected expanded polystyrene and polyurethane as insulation materials and coal and natural gas as fuel types. They identified that the optimum insulation layer thickness is in the range of 0.025-0.137 m, the energy-saving cost is 11.8-96 $/(m2 year), and the reduction in CO2 emission is 64.2-83.3%. A thermoeconomic analysis considering exergy was utilised to calculate the optimum insulation layer thickness for İzmir, Trabzon, Ankara, and Kars in Turkey by Dombaycı et al. [41]. They reported that the exergy reduction varies from 27% to 56.6% for expanded polystyrene and from 22% to 51% for polyurethane.
Akyüz [26] calculated the optimum insulation thickness, energy saving, cost saving, payback period, and greenhouse gas emission for the city of Diyarbakır in Turkey. He employed natural gas, coal, and fuel oil as energy sources and utilised expanded polystyrene as insulation material. The optimum insulation thicknesses, payback periods, and annual prevented environmental impacts for natural gas, coal, and fuel oil were found to be 0.057 m, 0.066 m, and 0.089 m; 2.85, 3.57, and 2.05 years; and 17.45 kgCO2/m2, 51.28 kgCO2/m2, and 26.7 kgCO2/m2, respectively. Akyüz [35] determined the economic and environmental impact of thermal insulation for building walls in the cities of İzmir, İstanbul, Ankara, and Erzurum in Turkey. He employed expanded polystyrene, glass wool, rock wool, and extruded polystyrene as insulation materials and natural gas as the energy source. He found that the payback periods for all scenarios have their lowest and highest values for RW and XPS, respectively. He concluded that thermal insulation is more effective in colder climates in terms of economics and annual avoided environmental impact.
Ustaoğlu et al. [38] conducted an experimental study to determine the thermal properties of lightweight concrete with different vermiculite contents. They also performed analytical simulations to evaluate the energy consumption of a real building application for a variety of fuels and different climatic regions of Turkey. The proposed concrete can provide a significant reduction in energy consumption and reduce the carbon emission associated with the lower energy needs of buildings. They found that the payback period ranged from 1.4 years to 9 years, depending on the fuel.
Altun et al. [39] examined the effectiveness of insulating an uninsulated building in two different frameworks according to TS 825: short term (savings in annual heating energy need, additional insulation costs, and additional greenhouse gas emission) and life cycle (life-cycle cost and greenhouse gas emission). In addition, the payback periods of the additional investment in terms of costs and greenhouse gases were analysed. The analysis showed that insulation made according to the standard provides improvements of up to 75% in annual heating energy need, 70% in life-cycle cost, and 73% in life-cycle greenhouse gas emission. The results reported that effective shell insulation greatly improves building energy performance and also significantly reduces building life-cycle costs and greenhouse gas emissions.
Şahin et al. [22] presented a comparative study, taking into account different insulation materials and CO2 emissions, to determine the most economical combination of optimum insulation thicknesses for different fuel types for the city of Denizli in Turkey. They observed that the optimum insulation thickness, which minimises the cost, varies between 0.012 and 0.031 m for heating in the winter months and 0.009-0.022 m for cooling in the summer months. They concluded that while glass wool is suitable as an insulation material with a difference of 22-24%, polyurethane, with a difference of 10-34%, would be more suitable in terms of low CO2 emission.
Akan et al. [37] produced three different composite materials in different proportions from mixtures of natural and waste materials and used them to determine the outer wall thickness of buildings in twelve cities selected from four different climatic zones of Turkey. They determined that the annual energy requirement per unit surface area of the exterior walls of insulated buildings is 11.213-965.715 kJ/m2. They also observed that insulation costs ranged from 22.841 $/m2 to 114.841 $/m2 and that the payback period ranged from 2.5 to 6.5 years. Large-scale studies have been performed by Kürekçi et al. [11] and Kürekçi [12] for all provincial centres in Turkey. These studies estimated the optimum insulation layer thickness and carried out economic analyses, but no effort was devoted to identifying environmental impacts. Furthermore, it is worth noting that in [11,12] the heating and cooling degree-day values are not up to date, nor are the insulation material and fuel costs.
The determination of the optimum insulation layer thickness incorporated into the external building wall remains an active research topic [42]. Many studies have indicated that the optimum thicknesses depend on different parameters such as heating and cooling degree-day values in cities and the types of bricks, insulation materials, and fuels [13]. It is also noted that the optimum parameters depend on the costs of insulation material and fuel and on the rates of interest and inflation [25]. Nevertheless, research covering updated costs and rates to calculate the optimum insulation layer thickness with economic and environmental analyses is not available in the literature. To fill this void, this study uses the most up-to-date data, namely heating degree-day values for all cities in Turkey, insulation material and fuel costs, and interest and inflation rates, and thus provides more realistic results. Besides, the reduction in CO2 emission achieved by the optimum insulation layer thickness is examined for Turkey's provinces for the first time in this study. Commonly recommended brick (horizontal perforated brick) and fuel (natural gas) types are used in the calculations. The aim is to apply the life-cycle cost analysis to minimise the sum of energy and insulation costs, and then the environmental analysis to quantify the reduction in CO2 emission. Four different insulation materials (glass wool, rock wool, extruded polystyrene, and expanded polystyrene) are considered in terms of optimum layer thickness for all cities. Then, the energy saving costs, the payback periods, and the reductions in CO2 emissions are determined at the optimum values. This study is expected to assist readers in constructing future buildings in all of Turkey's provinces.
METHODOLOGY
An insulated external building wall is considered a composite structure consisting of internal plaster, brick, thermal insulation material, and external plaster. The wall without any thermal insulation material is called the uninsulated wall, while the wall with thermal insulation material is called the insulated wall. The uninsulated and insulated walls are schematically depicted in Figures 1a and 1b, respectively. Horizontal perforated brick is used in the external building walls because it is the most preferred brick type in Turkey [21,43]. It stands out for its lower weight compared to other brick types; thus, it does not impose an extra burden on the building [44]. An essential property of horizontal perforated brick is that it supports indoor heat regulation through its high heat storage capacity.
A comprehensive list of the thermophysical properties of wall components and insulation types is given in the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Handbook [45]. The thermophysical properties of the external building wall components are listed in Table 1. The thermal insulation materials in Table 1 are frequently applied in buildings in Turkey and do not harm the atmosphere or the ozone layer. The costs of the thermal insulation materials in Table 1 are obtained by averaging the annual costs shown in Figure 2. The local-currency prices of all products included in this study are converted into dollars at the annual exchange rate [46].
Since the external building wall has a larger surface area, the heat loss through it exceeds that through windows, floors, and ceilings [12]. Thus, insulating the external wall has become a critical requirement to reduce the heat loss and fuel consumption of buildings [48]. Therefore, this study assumes that heat loss occurs only through the external wall. The heat loss per unit area of the uninsulated or insulated wall is calculated as in Eq. (1) [48].
The overall heat transfer coefficient of the uninsulated or insulated wall (U) is determined by Eq. (2) [49].
R_i (0.13 (m2·°C)/W) and R_o (0.04 (m2·°C)/W) are the indoor and outdoor heat transfer resistances, respectively. R_w is the heat transfer resistance of the uninsulated wall, R_wt is the total heat transfer resistance of the uninsulated wall, and R_ins is the heat transfer resistance of the thermal insulation material [48]. Also, x_ins and k are the thermal insulation layer thickness and the thermal conductivity, respectively. When x_ins is zero metres, U corresponds to the overall heat transfer coefficient of the uninsulated wall. Based on the heating degree-day method, the annual heat loss per unit area of the uninsulated or insulated wall is calculated by Eq. (3) [48].
HDD refers to the heating degree-day value, which is used to estimate the heating energy demand [42]. It is calculated based on a base temperature (T_b), which can be defined as the equilibrium point between the heat gains and the heat loss of the wall [49]. This study takes the base temperature as 15 °C, meaning there is no need for heating when the outdoor air temperature (T_o) is above 15 °C [49]. HDD can be expressed as in Eq. (4), where the "+" sign indicates that only positive differences are summed over the year.
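The quantities in Eqs. (1)-(8) follow the standard degree-day formulation used throughout this literature; the short sketch below restates them as code. The constants mirror the text (R_i = 0.13 and R_o = 0.04 (m2·°C)/W, η = 0.90, LHV = 34,518 kJ/m3, C_fuel = 0.367 $/m3), while `r_wall` is an illustrative placeholder for the uninsulated wall resistance R_w; the code is our illustration, not the authors' Fortran implementation.

```python
def hdd(daily_mean_temps, t_base=15.0):
    """Eq. (4): sum of positive (T_b - T_o) over the year, in degC-days."""
    return sum(max(t_base - t, 0.0) for t in daily_mean_temps)

def u_value(x_ins, k_ins, r_wall=0.42, r_i=0.13, r_o=0.04):
    """Eq. (2): overall heat transfer coefficient in W/(m2.K);
    r_wall is a placeholder for the uninsulated wall resistance R_w."""
    return 1.0 / (r_i + r_wall + x_ins / k_ins + r_o)

def annual_heating_cost(hdd_val, u, eta=0.90, lhv=34_518e3, c_fuel=0.367):
    """Eqs. (3), (5), (7): annual fuel cost per m2 of wall, in $/(m2.year).
    86,400 s/day converts degC-days into degC-seconds; LHV is in J/m3."""
    annual_demand = 86_400.0 * hdd_val * u / eta      # J/(m2.year)
    return annual_demand * c_fuel / lhv
```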
Turkey is divided into five climate zones based on a recent revision of the Turkish Standard [43]. Figure 3 shows the map of all Turkish cities allocated to the five climate zones according to their heating degree-day values. These heating degree-day values are obtained by averaging the data of the last decade (2009-2019) in this study. In Figure 3, the first climate zone corresponds to the lowest HDD and the hottest cities, while the fifth climate zone corresponds to the highest HDD and the coldest cities.
The comparison of the HDDs of the last decade with the average values for Turkey is shown in Figure 4 [49]. To provide a simple overview, the HDDs are grouped into intervals and shown in Figure 4(a-d) for all cities. The average values of the last decade are used in all calculations of this study. In Figure 4(a-d), the highest annual heating demand occurs in Ardahan (4610 °C-days), and the minimum annual heating demand is found in Mersin (583 °C-days).
In light of these HDD values, the annual heating energy demands of the uninsulated wall (E_H) and the insulated wall (E_H,ins) are calculated by Eq. (5) and Eq. (6), respectively [48].
Here, η = 0.90 is the efficiency of the boiler [50]. Thermal insulation applications are necessary to reduce the heat loss from building walls and increase energy saving. The most critical step for thermal insulation is the economic analysis to find the proper insulation layer thickness. Thus, the annual heating energy cost is determined by Eq. (7) for the uninsulated wall (C_A,H) and Eq. (8) for the insulated wall (C_A,H,ins) [48].
Here, LHV = 34,518 kJ/m3 is the lower heating value of natural gas [50]. The investigations in this paper are performed with natural gas as the energy source for the heating system in Figure 5. Because the natural gas combustion process is nearly complete, very few waste products are released into the atmosphere as pollutants. Since it does not contain contaminating factors such as SO2, ash particles, and unburned gases, natural gas is the fossil fuel least damaging to nature. Also, natural gas has been suggested as the most suitable fuel type for Turkey's climatic conditions [13]. Therefore, it is regarded as the most suitable option in Turkey for domestic heating. C_fuel = 0.367 $/m3 is the cost of natural gas, obtained by averaging the annual costs shown in Figure 6 [50].
In the present paper, the life-cycle cost method is used to determine the optimum insulation layer thickness, considering its economic dimensions. The life-cycle cost method is a comprehensive method used to determine the annual total costs of building walls over pre-assessment periods [51]. This method covers the costs of thermal insulation materials and fuels and considers the effects of interest and inflation. Newer methods have been developed based on the life-cycle cost method, and it is used frequently in the literature and in practice [16]. The annual total energy saving costs for the heating demand are estimated based on the lifetime (N = 10 years) and the Present Worth Factor (PWF). PWF is calculated from the interest (i) and inflation (f) rates as in Eqs. (9) and (10). Subsequently, within the life-cycle cost analysis, the annual total heating energy costs of the uninsulated wall (C_H) and the insulated wall (C_T,H) are calculated by Eq. (11) and Eq. (12), respectively [48].
After that, the optimum insulation thickness (x_opt,H), which minimises the annual total heating energy cost, is calculated with Eq. (13) [48].
A_H represents the annual total heating energy saving cost and is calculated by Eq. (14). The payback period (PP_H) is calculated by Eq. (15) [48].
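Continuing the sketch above, the life-cycle cost terms of Eqs. (9)-(15) can be written as follows. The PWF expression is the standard interest/inflation form used in this literature; rather than assuming the closed form of Eq. (13), the optimum is located by a simple grid search over candidate thicknesses, and the example interest and inflation rates are placeholders, not the study's actual values.

```python
def pwf(i, f, n=10):
    """Eqs. (9)-(10): present worth factor for interest i and inflation f."""
    if i == f:
        return n / (1.0 + i)
    r = (i - f) / (1.0 + f) if i > f else (f - i) / (1.0 + i)
    return (1.0 - (1.0 + r) ** (-n)) / r

def optimum_thickness(hdd_val, k_ins, c_ins, i=0.15, f=0.12, n=10,
                      step=0.001, x_max=0.40):
    """Eqs. (11)-(13): minimise C_T,H = PWF * annual fuel cost + c_ins * x,
    with c_ins the insulation cost per cubic metre."""
    p = pwf(i, f, n)
    candidates = [j * step for j in range(int(x_max / step) + 1)]
    costs = [p * annual_heating_cost(hdd_val, u_value(x, k_ins)) + c_ins * x
             for x in candidates]
    best = min(range(len(costs)), key=costs.__getitem__)
    return candidates[best], costs[best]

def payback_period(hdd_val, x_opt, k_ins, c_ins):
    """Eq. (15): insulation investment divided by the annual fuel-cost saving."""
    saving = (annual_heating_cost(hdd_val, u_value(0.0, k_ins))
              - annual_heating_cost(hdd_val, u_value(x_opt, k_ins)))
    return c_ins * x_opt / saving
```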
Besides the life-cycle cost analysis, the reduction in CO2 emission is calculated in this study to investigate the environmental effects caused by fuel consumption. The natural gas combustion reaction is given in Eq. (16), where complete combustion is assumed to facilitate the calculation. For the annual heating demand, the annual total CO2 emission is calculated by Eq. (17). The reduction in CO2 emission is calculated by subtracting the annual total CO2 emission of the insulated wall from that of the uninsulated wall and then dividing by the annual total CO2 emission of the uninsulated wall. ρ_fuel is the density of natural gas, equal to 0.79 kg/m3 [53]. The molecular weight of natural gas (M) is calculated by Eq. (19).
M = 12g + y + 16z + 14t (19)

The general chemical formula of natural gas is C_g H_y O_z N_t, and g, y, z, t are given in Eq. (16). All calculations are performed following the flow chart shown in Figure 8.
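The emission bookkeeping of Eqs. (16)-(19) can be sketched in the same way: the annual fuel volume is converted to mass via the density, and the CO2 mass follows from complete combustion of C_g H_y O_z N_t with M = 12g + y + 16z + 14t. The composition coefficients g, y, z, t are placeholders to be read from Eq. (16), which is not reproduced here, and this continues the sketch above (it reuses `u_value` and the constants already defined).

```python
def annual_co2(hdd_val, u, g, y, z, t, eta=0.90, lhv=34_518e3, rho=0.79):
    """Eq. (17): kg CO2 per m2 of wall per year, assuming complete combustion."""
    fuel_volume = 86_400.0 * hdd_val * u / (eta * lhv)   # m3 of gas /(m2.year)
    m_fuel = 12 * g + y + 16 * z + 14 * t                # Eq. (19), kg/kmol
    co2_per_kg_fuel = 44.0 * g / m_fuel                  # g kmol CO2 per kmol fuel
    return fuel_volume * rho * co2_per_kg_fuel

def co2_reduction(hdd_val, x_opt, k_ins, g, y, z, t):
    """Relative reduction against the uninsulated wall, as described in the text."""
    m0 = annual_co2(hdd_val, u_value(0.0, k_ins), g, y, z, t)
    m1 = annual_co2(hdd_val, u_value(x_opt, k_ins), g, y, z, t)
    return (m0 - m1) / m0
```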
RESULTS AND DISCUSSION
The optimum insulation layer thicknesses, energy saving costs, payback periods, and CO2 emissions are calculated with the average heating degree-day (HDD) values of all of Turkey's provinces for the last ten years (from 2009 to 2019). The optimisation analysis is carried out for four different thermal insulation materials (glass wool, GW; rock wool, RW; extruded polystyrene, XPS; and expanded polystyrene, EPS), all of which are widely used in Turkey. Natural gas is used for heating purposes because it is the most used fuel type in Turkey. The life-cycle cost method preferred by researchers in the literature is used for the economic analysis. Furthermore, a CO2 emission analysis is performed to investigate the environmental effects caused by fuel consumption. All cases in this paper are calculated using a custom-made code following the flow chart in Figure 8. Coding and optimising in auxiliary programs such as Excel take much longer than in Fortran; therefore, Fortran was preferred to save time in this study. The detailed findings are presented below.
The results produced for the four insulation materials (GW, RW, XPS, EPS) with the HDD values of Ardahan, Turkey are shown in Figure 9(a-d) to demonstrate the effect of different insulation layer thicknesses on the annual costs of insulation, fuel, and their total. As can be seen in Figure 9(a-d), two significant parameters affect the annual total heating energy cost of the insulated wall, which is defined as the sum of the insulation and fuel costs. The heat loss decreases as the thermal insulation layer thickness of the external wall increases; therefore, the heating demand is reduced and the fuel cost decreases. However, if the thermal insulation layer is too thick, the insulation cost continues to increase, and the annual total heating energy cost of the insulated wall begins to rise after a certain point due to the extra insulation cost. The point where the annual total heating energy cost is minimum gives the optimum insulation layer thickness. These minimum-cost points (C_T,H = 12.25, 26.13, 20.08, and 15.56 $/(m2 year)) correspond to the optimum insulation layer thicknesses (x_opt,H = 0.23, 0.09, 0.10, and 0.17 m) for the cases in Figure 9(a-d), respectively.
The results produced with Ardahan's HDD values in the case of using natural gas as the energy source are shown in Figure 10(a-c) to indicate the effect of different insulation layer thicknesses (for GW, RW, XPS, EPS) on the annual total heating energy saving cost (A_H), the payback period (PP_H), and the annual total CO2 emission (M_CO2,ins). The annual total heating energy saving cost, calculated as the difference between the annual total heating energy costs of the uninsulated (C_H) and insulated (C_T,H) walls, is given in Figure 10a for different insulation layer thicknesses. The annual total heating energy saving cost increases with increasing insulation layer thickness, attains a peak, and then begins to decrease. For example, the maximum annual total heating energy saving cost (A_H = 53.50 $/(m2 year)) is obtained with 0.23 m insulation layer thickness for GW. As the insulation layer thickness increases, the payback period (PP_H) always tends to rise (Figure 10b), and it rises more steeply beyond the optimum insulation thickness because the insulation cost keeps increasing while the annual total heating energy saving cost declines. For Ardahan, the payback period varies between 0.11 years (x_opt,H = 0.23 m for GW) and 0.30 years (x_opt,H = 0.09 m for RW), depending on the thermal insulation material type and the optimum insulation layer thickness. The annual total CO2 emission (M_CO2,ins), shown in Figure 10c, decreases with increasing insulation layer thickness. Beyond the optimum value, the annual total CO2 emission changes little despite the increase in insulation layer thickness, and its curve becomes approximately horizontal. Table 2 shows a comparison of results between the present study and two studies in the literature; EPS as the thermal insulation material, Ardahan as the province, and natural gas as the fuel type are selected in Table 2. Since the input parameters differ across studies, the results obtained by the equations in the methodology section differ from each other.
The calculations are repeated for all thermal insulation materials and all cities in Turkey to determine the optimum insulation layer thickness. Figure 11(a-d) illustrates the variation of the optimum insulation layer thicknesses (x_opt,H) with increasing HDD values in all of Turkey's provinces for the different insulation materials GW, RW, XPS, and EPS. For example, Mersin province has the lowest HDD (583 °C-days) in Figure 11a, while Ardahan province has the highest HDD (4610 °C-days) in Figure 11d. The optimum insulation thickness is lower in hotter cities (low HDD) and higher in colder cities (high HDD). Briefly, the required thermal insulation layer thickness increases as the heating demand increases. The optimum insulation layer thickness is in the range of 0.07-0.23 m for glass wool, 0.01-0.09 m for rock wool, 0.02-0.1 m for extruded polystyrene, and 0.04-0.17 m for expanded polystyrene. While the optimum insulation layer thicknesses are lowest with RW, the most expensive thermal insulation material, they are highest with GW, the cheapest thermal insulation material, reaching a maximum of 0.23 m. When the optimum insulation layer thicknesses that meet the heating demand are examined, they decrease in the order GW, EPS, XPS, and RW due to increasing thermal insulation costs.

The payback periods decrease with increasing annual total heating energy saving costs. As can be seen from Figure 13(a-d), the payback period does not decrease regularly but fluctuates, in contrast to the continuous increase in cost, because there is no strict proportionality between the insulation cost and the annual total heating energy saving costs. The highest payback period (1.69 years) is obtained with RW in Eskişehir, while the lowest payback period (0.11 years) is obtained with GW in Kilis. The most advantageous thermal insulation material is GW, and its payback period ranges from 0.11 to 0.38 years for all cities, depending on the optimum insulation layer thicknesses.
The variation of the annual total CO2 emission is calculated at the optimum insulation layer thicknesses for a building heated by natural gas. Figure 14(a-d) shows the variation of the annual total CO2 emission (M_CO2,ins) with increasing HDD values in Turkey's cities for the different thermal insulation materials (GW, RW, XPS, and EPS). An observation similar to that for the annual total heating energy saving costs can be made: the annual total CO2 emission increases with rising optimum insulation layer thicknesses. The fluctuations in the annual total CO2 emission follow the payback period curves due to the lack of a linear relationship between the HDD values and the optimum insulation layer thickness. Using RW, the lowest CO2 emission (3.40 kg/(m2 year)) occurs in Eskişehir, while the highest CO2 emission (9.57 kg/(m2 year)) occurs in Kilis. The minimum CO2 emission is 1.92 kg/(m2 year) with EPS.

The study's primary goal is to identify the annual total CO2 emission of an insulated wall compared to the uninsulated wall, depending on the fuel type used in buildings, for all of Turkey's provinces. In line with this goal, the variation of the reduction in CO2 emission is calculated at the optimum insulation layer thicknesses. Figure 15(a-d) shows the reduction in CO2 emission with increasing HDD values in Turkey's cities. The reduction in CO2 emission increases in the order RW, XPS, EPS, and GW for the insulation materials. Compared to the uninsulated wall, the reduction in CO2 emission varies from 1.97% (in Adana), 20.43%, 36.67%, and 53.19% (in Osmaniye) to 86.41%, 89.88%, 92.33%, and 94.05% (in Ardahan) for RW, XPS, EPS, and GW, respectively.
CONCLUSIONS
The present study examines the effect of thermal insulation material types (glass wool, rock wool, extruded polystyrene, and expanded polystyrene) and their optimum layer thicknesses on energy-saving costs and payback periods in buildings heated with natural gas. Specifically, this study aims to determine the reduction in carbon dioxide (CO2) emission of the insulated wall compared to the uninsulated wall in Turkey's provinces. To these ends, the life-cycle cost analysis is performed using the most current data, such as heating degree-day (HDD) values, insulation material and fuel costs, and interest and inflation rates. The notable results of this study are as follows:
• As the HDD values change between the lowest (583 °C-days in Mersin) and the highest (4610 °C-days in Ardahan), the optimum insulation layer thickness increases in the range of 0.07-0.23 m for the best case (glass wool).
Consequently, this study will serve as a resource for architects and engineers during the construction of future buildings in Turkey's provinces. The number of parameters can be increased with different fuel types, insulation materials, and wall component types to evaluate the environmental effects in future works. Moreover, the optimum insulation layer thicknesses can be determined by thermoeconomic analysis.
DATA AVAILABILITY STATEMENT
No new data were created in this study. The published publication includes all graphics collected or developed during the study.
CONFLICT OF INTEREST
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
ETHICS
There are no ethical issues with the publication of this manuscript.
Active Distribution Network Fault Diagnosis Based on Improved Northern Goshawk Search Algorithm
Timely and accurate fault location in active distribution networks is of vital importance to ensure the reliability of power grid operation. However, existing intelligent algorithms applied to fault location in active distribution networks suffer from slow convergence and low accuracy, hindering the construction of new power systems. In this paper, a new regional fault localization method based on an improved northern goshawk search algorithm is proposed. The population quality of the samples was improved by using a chaotic initialization strategy. Meanwhile, the sine-cosine strategy and an adaptive Gaussian-Cauchy hybrid mutation perturbation strategy were introduced into the northern goshawk search algorithm, which uses perturbation operations on individuals to increase the diversity of the population, helping it jump out of local optima and strengthening its local escape ability. Finally, simulation verification was carried out on a multi-branch distribution network containing distributed power sources. Compared with traditional regional localization models, the proposed method possesses faster convergence and higher location accuracy under different fault locations and different distortion points.
Introduction
Distribution networks are located at the end of the power system and connect directly with users, so ensuring the operational stability of the distribution network is particularly important. Under the "double carbon" targets, more and more distributed generation (DG) systems are being connected to distribution networks; in this situation, the direction of the system power flow is no longer unique. The traditional distribution network is transformed into a multi-directional, complex active distribution network (ADN), whose more complex structure increases the difficulty of fault location and brings great challenges to the stable operation of the ADN [1,2]. Thus, it is of great research significance to study fault localization methods suitable for ADNs [3][4][5][6][7][8][9].
Due to the access of DGs, the fault characteristics of ADNs are quite different from those of traditional distribution networks when a fault occurs [10], which can be summarized as follows: (1) The location of access points and the access capacity of DGs affect the direction of the system power flow and the amplitude of the fault current [11]. (2) The outputs of each DG are uncertain, causing uncertainty in the fault transient process. (3) The low-voltage distribution network possesses more branches, and their line parameters are unevenly distributed, increasing the complexity of fault analysis [12]. Against the background of automated upgrades of distribution network equipment, research on fault localization methods based on the current information uploaded by feeder terminal units (FTUs) has become a hot spot [13].
A variety of distribution network fault localization methods have been put forward, which can be classified according to the localization results: fault routing, fault ranging, and fault segment localization [14]. These methods are mainly based on matrix algorithms and intelligent algorithms. The matrix algorithm [15] combines the distribution network topology with the current information uploaded by the FTU to generate a fault discrimination matrix and locates the fault section through matrix operations; the intelligent algorithm is based on the theory of the "minimum fault diagnostic set" and converts the fault section localization problem into a mathematical optimization problem, which can be solved using intelligent algorithms. The authors of [16] use the multiverse algorithm to locate faults in distribution networks, improving the algorithm through the introduction of an adaptive elite strategy and an adaptive mutation operation; though the localization ability of the method is acceptable, it requires a lot of computational resources and has some limitations in treating certain specific faults. The authors of [17] propose a localization method based on the vulture search algorithm, which improves the algorithm's optimization ability by introducing a crossover operator, a non-uniform variation operator, and a somersault foraging strategy; however, the high computational cost of fault localization limits its application. The matrix algorithm and the chaotic binary particle swarm algorithm were used to solve the distribution network fault localization problem in [18], which establishes the causal association matrix and criterion of zones and nodes based on the actual structure of the distribution network. Nevertheless, the convergence and stability of the algorithm need to be further considered, since they determine whether the method can be effectively applied in ADNs. The authors of [19] apply an improved algorithm based on the quantum ant colony algorithm to solve the distribution network fault location problem; the authors of [20] verify that introducing the improved sine-cosine algorithm into the local development stage of the algorithm increases population diversity in the late iteration stage, prevents the algorithm from falling into a local optimum, and effectively improves the algorithm's solution accuracy and convergence speed; the authors of [21] propose an improved differential evolution algorithm, self-adaptive differential evolution with Gaussian-Cauchy mutation (SDEGCM), which introduces two strategies, Gaussian-Cauchy mutation and parameter self-adaptation, to improve the performance of the algorithm; and the authors of [22] employ a Hunger Games search algorithm based on Gaussian-Cauchy variants. However, the selection and adjustment of the parameters of the above four algorithms have a large impact on the results, and sufficient parameter optimization and testing are needed for specific problems.
To solve these problems, this paper proposes a zonal fault localization model for distribution networks based on improved northern goshawk optimization (INGO). The chaotic initialization strategy was used to improve the quality of the sample population, and the sine-cosine strategy and the adaptive Gaussian-Cauchy hybrid variance perturbation strategy were introduced into the northern goshawk search algorithm (NGSA), which used perturbation operations to interfere with the individuals to improve the diversity of the sample population, contributing to jumping out of local optima and strengthening the ability of local escape. Finally, the effectiveness and reliability of the proposed method were verified by comparison with the northern goshawk optimization (NGO) algorithm, the gray wolf optimization (GWO) algorithm, and the whale optimization algorithm (WOA).
Distribution Network Zoning Models and Systems
Distribution system problems include short circuits, overloads, ground faults, and other fault problems. Zonal fault localization in distribution networks means dividing the distribution network into a series of zones and monitoring and analyzing the flow of electrical energy in each zone to help quickly locate the positions where faults occur. This zonal fault localization method can help improve the accuracy and efficiency of fault diagnosis and shorten the fault-processing time, ensuring the stable operation of the power system.
Taking the dual-source distribution network shown in Figure 1 as an example, it is stipulated that the state of the end nodes in each region represents the state of the whole region. The first level of the hierarchical localization model uses the algorithm to locate the region where the fault occurs according to the regional states, and the second level locates the specific faulty zone within the identified region. For example, suppose a fault occurs in zone 9 of the distribution network. The first level starts the regional localization according to the state of each region and reports a Region 3 failure; the second level then starts the specific segment localization according to the states of nodes 8 and 9 in Region 3 to determine the faulty zone.
Coding Methods
In order to accurately determine the direction and source of the fault current when a fault occurs in a zone, this paper defines the direction of current flow from the main power supply to the load side as the positive direction. $I_j$ denotes the uploaded value of the fault current at node j: $I_j = 1$ when the FTU detects a forward fault current, $I_j = -1$ when the FTU detects a reverse fault current, and $I_j = 0$ when the FTU does not detect a fault current, as shown in Equation (1).

$$I_j=\begin{cases}1, & \text{positive fault current flowing at switch } j\\ 0, & \text{no fault current flowing at switch } j\\ -1, & \text{reverse fault current flowing at switch } j\end{cases}\tag{1}$$
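As a concrete illustration of this coding, the following minimal Python sketch maps hypothetical FTU readings to the coded values of Equation (1); the `readings` structure and node numbers are illustrative assumptions, not part of the paper's implementation.

```python
# Hypothetical sketch: encoding FTU fault-current observations per Equation (1).

def encode_fault_current(direction: str) -> int:
    """Map an FTU observation to the coded value I_j of Equation (1)."""
    return {"forward": 1, "none": 0, "reverse": -1}[direction]

# Assumed example readings keyed by node number
readings = {1: "forward", 2: "forward", 3: "reverse", 4: "none"}
I = [encode_fault_current(d) for _, d in sorted(readings.items())]
print(I)  # [1, 1, -1, 0]
```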
Construction of Switching Functions
The switching function realizes the conversion between the line fault state and the switch fault information. Considering the switching of distributed power supplies, the switching function can be defined as in Equation (2) [19], where $I_j^{*}$ is the switching function value of node j, also known as the expected state value; $K_u$ and $K_d$ are the power casting coefficients of the upstream and downstream regions of node j, respectively, set to 1 when there is a power supply input and to 0 otherwise; the upstream product term is the OR-combined value of all zone states between node j and each upstream power supply, and the downstream product term is the OR-combined value of all zone states between node j and each downstream power supply; $M_1$ and $M_2$ are the numbers of power sources upstream and downstream of node j, respectively; and $N_1$ and $N_2$ are the numbers of zones upstream and downstream of the node, respectively.
Objective Function and Switching Function
The principle of localization based on the FTU fault current information is to minimize the sum of the differences between the uploaded and actual values of the currents at each node. The smaller this sum, the higher the similarity between the solved fault situation and the actual fault situation. The objective function was defined by Equation (3) [19]:

$$\min F(X)=\sum_{j=1}^{N}\left|I_j-I_j^{*}(X)\right|+\omega\sum_{i=1}^{M}\left|X_i\right|\tag{3}$$

where $I_j$ is the uploaded value of the fault current at node j, $I_j^{*}(X)$ is the expected value given by the switching function of Equation (2) under the candidate zone-state vector X, N is the number of nodes, and M is the number of zones. Here $\omega$ is the factor preventing misjudgment and $|X_i|$ is the corresponding zone-state term. When the uploaded value of a zone is not equal to the actual value, there is a possibility that the minimum value corresponds to a series of fault combinations; in order to make up for this shortcoming, the term $\omega\sum_i|X_i|$ is introduced, with $\omega$ set to 0.5.
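A minimal sketch of the fitness evaluation of Equation (3) is shown below; the expected node states would come from the switching function of Equation (2), which is topology-dependent, so here `expected_states` is simply passed in as a stand-in.

```python
# Sketch of the fitness of Equation (3); all argument names are illustrative.

def fitness(uploaded, expected_states, zone_states, omega=0.5):
    """Sum of |I_j - I_j*| plus the anti-misjudgment term omega * sum(|X_i|)."""
    mismatch = sum(abs(i - e) for i, e in zip(uploaded, expected_states))
    penalty = omega * sum(abs(x) for x in zone_states)
    return mismatch + penalty
```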
Taking Figure 1 as an example, when a fault occurs in zone X5 in Region 2, X = [X1, X2, …, X16] = [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0], and the values of the switching functions for node 1 and node 2 in Region 1 can be obtained from Equation (2). Similarly, the switching function values of the nodes in Region 3, Region 4, and Region 5 can be calculated. It can be found that the switching function values of the nodes in the faulty region are not equal, while the switching function values of the nodes in a non-faulty region are equal to that of the node at the end of the respective region. To further validate the partitioning basis, assuming that different zone faults and multiple faults occur in Region 3 and there is no information distortion, the switching function values of all the nodes in the non-faulty regions can be calculated, as shown in Table 1.
Northern Goshawk Optimization
Northern goshawk optimization (NGO) [20] was proposed in 2022 by Mohammad Dehghani. It simulates the behavior of the northern goshawk during the hunting process, which includes prey identification and attack, pursuit, and escape. In the optimization algorithm, the hunting process of northern goshawks can be divided into two stages: the exploration stage (prey identification and attack) and the exploitation stage (pursuit and escape). The mathematical model established for the different hunting stages can be summarized as follows:

(1) Exploration stage. In the first stage of northern goshawk hunting, the goshawk randomly selects a prey and then quickly attacks it. The behavior of the northern goshawk in this stage can be presented by Equations (6)-(8):

$$P_i = X_k,\qquad k\in\{1,2,\dots,N\}\tag{6}$$

$$x_{i,j}^{\mathrm{new},P1}=\begin{cases}x_{i,j}+r\,(p_{i,j}-I\,x_{i,j}), & F_{P_i}<F_i\\ x_{i,j}+r\,(x_{i,j}-p_{i,j}), & F_{P_i}\ge F_i\end{cases}\tag{7}$$

$$X_i=\begin{cases}X_i^{\mathrm{new},P1}, & F_i^{\mathrm{new},P1}<F_i\\ X_i, & \text{otherwise}\end{cases}\tag{8}$$

where $P_i$ is the prey position selected by the i-th northern goshawk and $F_{P_i}$ is the fitness value corresponding to it; r is a random number belonging to [0,1]; and I is a random number whose value can be either 1 or 2. The random numbers r and I are used to generate the stochastic search and update behavior of the NGO.
(2) Exploitation stage. When the northern goshawk starts the process of capturing the prey, the prey tries to escape at the same time. During the pursuit of the escaping prey, the movement speed of the northern goshawk is extremely fast, and it can capture the prey at any time and in any place. Assuming that the northern goshawk in this hunt is in an attack position of radius R, the second stage can be presented by Equations (9) and (10) [20]:

$$x_{i,j}^{\mathrm{new},P2}=x_{i,j}+R\,(2r-1)\,x_{i,j}\tag{9}$$

$$R=0.05\left(1-\frac{t}{T}\right)\tag{10}$$

where t is the current number of iterations; T is the maximum number of iterations; $X_i^{\mathrm{new},P2}$ is the new state of the i-th northern goshawk in the pursuit stage; $x_{i,j}^{\mathrm{new},P2}$ is the new state of the i-th northern goshawk in the j-th dimension in the pursuit stage; and $F_i^{\mathrm{new},P2}$ is the fitness value in the new state. It can be seen that the NGO achieves parameter optimization by searching for the optimal penalty parameter c and kernel parameter g of the diagnostic model system, which could improve the classification accuracy. However, the following limitations still exist: (1) During the initialization of the sample population, the distribution of the initial solutions is random and uneven, and the quality of individuals in the population varies, which can easily lead to a lack of population diversity and to missing a potentially optimal solution. (2) During the prey-escape process in the second stage, the northern goshawk chases the prey at extraordinary speed, which can easily lead to the algorithm falling into a local optimum [20].
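To make the two-stage update concrete, the following compact Python sketch implements a plain continuous NGO loop following Equations (6)-(10); the population size, bounds, and test objective are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def ngo(objective, dim, n=30, T=100, lb=-10.0, ub=10.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    X = rng.uniform(lb, ub, (n, dim))
    F = np.array([objective(x) for x in X])
    for t in range(T):
        R = 0.05 * (1 - t / T)  # shrinking attack radius, Eq. (10)
        for i in range(n):
            # Exploration: pick a random prey and attack it, Eqs. (6)-(8)
            k = int(rng.integers(n))
            r = rng.random(dim)
            I = int(rng.integers(1, 3))  # I is 1 or 2
            if F[k] < F[i]:
                cand = X[i] + r * (X[k] - I * X[i])
            else:
                cand = X[i] + r * (X[i] - X[k])
            cand = np.clip(cand, lb, ub)
            fc = objective(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
            # Exploitation: pursuit within radius R, Eq. (9)
            cand = np.clip(X[i] + R * (2 * rng.random(dim) - 1) * X[i], lb, ub)
            fc = objective(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    best = int(F.argmin())
    return X[best], F[best]

# Example: minimize the sphere function in 5 dimensions
x_star, f_star = ngo(lambda x: float(np.sum(x * x)), dim=5)
```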
Northern Goshawk Optimization for Binary
The value of a zone state can only be 0 or 1; thus, the position of the northern goshawk needs to be represented in binary form. The position of the northern goshawk can be updated according to the binarization equations given in [20].
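A common choice for such binarization in the binary metaheuristic literature, shown here only as an assumed stand-in for the equations in [20], is a sigmoid transfer function that maps a continuous position to a probability and then samples a 0/1 state.

```python
import numpy as np

def binarize(x, rng=None):
    """Sigmoid-transfer binarization (illustrative, not the paper's exact rule)."""
    rng = rng if rng is not None else np.random.default_rng()
    prob = 1.0 / (1.0 + np.exp(-x))        # map position to [0,1]
    return (rng.random(x.shape) < prob).astype(int)
```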
The Improvement of NGO
In order to improve the optimization performance of the NGO algorithm for fault location in ADNs, the following improvements were carried out: (1) When an individual northern goshawk chooses another search area, the decision is made based on the information available at the previous stage. If the northern goshawks in the population are all trapped in a localized search, they will not be able to accurately capture the prey during the global search optimization process. In order to compensate for this deficiency, the sinusoidal mapping was introduced to promote a more uniform distribution of the northern goshawk population in the search space, which can solve the problem of "premature convergence" to a certain extent. Meanwhile, the global detection ability of the algorithm is further enhanced by the crossover operation and the non-uniform variation operator. The crossover operation swaps the positions of northern goshawk individuals and recalculates the fitness values; when the fitness value of a new position is better than that of the previous northern goshawk, the previous individual is replaced, which increases the diversity of the population after each iteration. The non-uniform variation operator perturbs the positions of the northern goshawks, which helps increase the diversity of the population, leading to an increase in the search range and the search accuracy of the algorithm. When the non-uniform variation strategy perturbs the positions of the northern goshawks, k dimensions are randomly selected for each northern goshawk to be perturbed. Once the new individuals generated by the perturbation are better than the previous individuals, the previous individuals are replaced.
where $X^{t+1}$ denotes the location of the northern goshawk at the (t + 1)-th iteration; $X^{t}$ denotes the position of the northern goshawk at the t-th iteration; T denotes the maximum number of iterations (for example, T = 100); r is the search range of the northern goshawk population; and b = 2 is a system parameter, which determines the degree of non-uniformity.
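A classic non-uniform variation operator consistent with the parameters named above (maximum iteration T, the search range, and b = 2) can be sketched as follows; this is an illustrative form under those assumptions, not necessarily the authors' exact equation.

```python
import numpy as np

def nonuniform_perturb(x, t, T, lb, ub, b=2, k=2, rng=None):
    """Perturb k randomly chosen dimensions; magnitude decays as t approaches T."""
    rng = rng if rng is not None else np.random.default_rng()
    y = x.copy()
    for j in rng.choice(len(x), size=min(k, len(x)), replace=False):
        # Move toward the upper or lower bound with equal probability
        span = (ub - y[j]) if rng.random() < 0.5 else -(y[j] - lb)
        y[j] += span * (1 - rng.random() ** ((1 - t / T) ** b))
    return np.clip(y, lb, ub)
```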
(2) The sine-cosine algorithm (SCA) [21] was introduced to avoid trapping in local optima; the diversity of individuals can be maintained by using the oscillatory change characteristics of the sine-cosine model acting on the positions, which is beneficial to the improvement of the global search capability of INGO.
For the basic sine-cosine algorithm, the step search factor $r_1 = a - a\,t/T$ (where a is a constant, set to 1 in this paper, and t is the number of iterations) has a linear decreasing trend, which is not conducive to further balancing the global search and local development abilities of the NGO. Thus, a new non-linear decreasing search factor was defined, as shown in Equation (15), which has a larger weight value and a more slowly decreasing speed during the early stage, contributing to the improvement of the global optimization ability; it has a smaller weight value and a more quickly decreasing speed during the later stage, where the algorithm's advantage in local development is enhanced, accelerating the process of obtaining the optimal solution.
where $r_1'$ is the step search factor after updating; $T$ is the maximum number of iterations; $\mu$ is the adjustment factor, with $\mu = 1$; and t is the number of iterations.
(3) The NGO algorithm is easily trapped in local optima during the later iterations, so the adaptive Gaussian-Cauchy hybrid mutation perturbation strategy [22] was introduced to enhance the algorithm's ability to develop locally and search globally, improving the probability of obtaining the optimal prey location. Since the result of the mutation perturbation operation is random, carrying out the mutation perturbation on all individuals would inevitably increase the complexity of the algorithm. Thus, in this paper, the mutation perturbation is carried out only on the optimal individual, and then the positions before and after the mutation are compared and the better one is chosen to enter the next iteration. To increase the diversity of the individuals and expand the population search range, Equation (14) was defined.
$$X_b^{*}(t)=X_b(t)\left[1+\lambda_1\,\mathrm{Gauss}(0,1)+\lambda_2\,\mathrm{Cauchy}(0,1)\right]\tag{14}$$

where $X_b(t)$ is the optimal position of individual X in the t-th iteration; $X_b^{*}(t)$ is the optimal position of individual X in the t-th iteration after the mixed Gaussian-Cauchy perturbation; $\mathrm{Gauss}(0,1)$ is the Gaussian variation operator; $\mathrm{Cauchy}(0,1)$ is the Cauchy variation operator; and the weight coefficients $\lambda_1 = t/t_{\max}$ and $\lambda_2 = 1 - t/t_{\max}$ change progressively in a one-dimensional linear manner to ensure balanced and smooth iteration. With the continuous iteration of the algorithm, the positions of most northern goshawk individuals do not change much. In this situation, the Gaussian distribution function coefficients were used to perturb the population, which helps the algorithm jump out of local optima and overcome the inter-dimensional interference problem in high-dimensional space.
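The perturbation of Equation (14), applied only to the current best individual as described above, can be sketched as follows; the linear weights λ1 = t/T and λ2 = 1 − t/T follow the description in the text, while the function and argument names are illustrative.

```python
import numpy as np

def hybrid_perturb(x_best, t, T, rng=None):
    """Adaptive Gaussian-Cauchy hybrid perturbation of the best individual, Eq. (14)."""
    rng = rng if rng is not None else np.random.default_rng()
    lam1, lam2 = t / T, 1 - t / T
    gauss = rng.standard_normal(x_best.shape)    # Gaussian variation operator
    cauchy = rng.standard_cauchy(x_best.shape)   # Cauchy variation operator
    return x_best * (1 + lam1 * gauss + lam2 * cauchy)
```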
Fault Location Process
The flow chart of fault location based on the improved northern goshawk algorithm is shown in Figure 2, and the specific steps can be summarized as follows (a simplified driver for this two-level procedure is sketched after the list):
(1) Read the fault current status information of sectional switches, contact switches, circuit breakers, and other components detected by the FTU and upload it to the SCADA system of the master station. Based on this information, the actual fault current arrays of the switching nodes are generated according to the number of nodes.
(2) Initialize the INGO parameters, such as the number of populations, the population dimensions (i.e., the total number of nodes), the variable range values, and the maximum number of iteration generations.
(3) Initialize the binary northern goshawk population, in which each individual represents a set of faulty operating states of the feeder line segments [23-26].
(4) Calculate the fitness value and update the positions of the northern goshawk individuals.
(5) Introduce the adaptive Gaussian-Cauchy hybrid variation perturbation strategy and the sine-cosine strategy. Then, determine whether the maximum number of iterations has been reached and, if not, return to step 4.
(6) Determine the fault region and generate initialized individuals by use of the exhaustive enumeration method, then calculate the fitness value.
(7) Determine the fault zone, then check whether the fault region and the fault zone match. If they match, the fault region is determined and the process is over; otherwise, return to step 6.
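The following sketch shows the two-level driver in simplified form; `ingo_search`, `region_states`, and `zone_states_by_region` are placeholder names standing in for the INGO optimizer and the FTU-derived inputs described in the steps above.

```python
def locate_fault(region_states, zone_states_by_region, ingo_search):
    """Two-level localization: first the faulty region, then the zone within it."""
    # Level 1: find the faulty region(s) from end-node states
    region_mask = ingo_search(region_states)
    faulty_regions = [i for i, s in enumerate(region_mask) if s == 1]
    # Level 2: find the faulty zone(s) inside each flagged region
    results = {}
    for r in faulty_regions:
        zone_mask = ingo_search(zone_states_by_region[r])
        results[r] = [j for j, s in enumerate(zone_mask) if s == 1]
    return results
```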
Case Study Analysis
To verify the validity of the method proposed in this paper, a mathematical model of the IEEE 33-node ADN structure was built on the Matlab platform, as shown in Figure 3. X1-X33 represent the 33 feeder segments, 1-33 represent the 33 switching nodes, and K1-K3 represent the access switches of each distributed power source. Due to the access of distributed power supplies, the complexity of fault location increases. During the simulation experiments, the distributed power supplies were connected to the ADN at random nodes and their number was varied. Meanwhile, it should be noted that the data for the fault-location algorithms in this paper came from a local data center and control center in Jilin Province, China.
Simulation Test Analysis
Assuming that a fault occurred in zone 7 in Region 6 in Figure 3, all three distributed power sources were in operation and the fault current information was not distorted. Firstly, the fault current information uploaded by the FTU can be calculated according to Equation (1), and the current information of the end nodes in each region can be extracted as [1,1,−1,0,−1,0,−1,−1,−1,−1,0]. Using the improved northern goshawk algorithm to search for faulty areas, the corresponding result obtained is [0,0,0,0,0,0,1,0,0,0,0,0,0,0,0], from which it can be deduced that a fault occurred in one of the zones of Region 6. Then, the INGO algorithm was applied, and the results of the fitness vs. number of iterations are shown in Figure 4. The localization result was obtained as [1,0,0,0], i.e., the failure in zone 7 is consistent with the assumption. In order to better illustrate the fault tolerance of the proposed method, distortion of information was further added at the faulty nodes. It is assumed that faults occurred in zone 9, zone 12, and zone 22 at the same time and that the state of one node changed to 0, the state of node 18 changed from −1 to 0, and the state of node 32 changed from 0 to −1.
Then, the convergence curve of fault location when the node information was distorted could be calculated, as shown in Figure 5. Meanwhile, the state matrix [0,0,1,0,0,1,1,1,0,0,0,0,0] could be obtained, which means that faults occurred in Region 3, Region 6, and Region 7. On this basis, the state values of the zones in Region 3, Region 6, and Region 7 could be further calculated, and it can be reasonably deduced that faults occurred in zone 9, zone 12, and zone 22, which is in accordance with the hypothetical fault positions that were set. From Figure 4, it can be seen that the convergence curve presents a straight line after the first iteration, which means that the method proposed in this paper finds the optimal solution at the beginning of the iteration, after which the system convergence reaches a stable state. Similarly, it can be seen from Figure 5 that the convergence curve presents a straight line after the third iteration, indicating that the method proposed in this paper can find the optimal solution after two iterations. Thus, it can be firmly concluded that the INGO zonal localization model proposed in this paper is able to accurately locate faults in ADNs within the maximum number of iterations, both for the preset faults without information distortion and for the triple faults with information distortion. Moreover, the method also has a great advantage in terms of convergence speed.
Performance Comparison with Other Typical Algorithms
In this paper, the northern goshawk optimization (NGO) algorithm, the gray wolf optimization (GWO) algorithm, and the whale optimization algorithm (WOA) were chosen for the comparative experiments. Single-point and multi-point fault comparison simulation experiments with different numbers and locations of distributed power sources connected to the distribution network were conducted. Meanwhile, the positioning accuracy rate and the average number of generations to convergence were taken as the algorithms' performance evaluation indexes. Since the above algorithms are all stochastic optimization algorithms, each algorithm was repeated 20 times in the experiments, and the average values of the performance evaluation indexes of each algorithm were calculated.
Multiple Points Containing Distortion Faults
Due to the complex and uncontrollable environment of the distribution network in actual operation, the FTU equipment nodes are often exposed to harsh environments, which may lead to data loss and data distortion when the detection equipment nodes transmit fault current information. When a fault occurs in the actual active distribution network, the FTU device at a node may not be able to upload the corresponding fault information owing to the fault, and there may be false alarms, omissions, and misreporting. In the simulation experiments, under the premise of single-point and multi-point faults occurring in the distribution network, the FTUs were set to upload distorted fault current information at certain points, and the fault tolerance of the distribution network containing distributed power supplies was analyzed. For example, when [K1,K2,K3] = [0,0,0], a fault occurred in zone X11, and the single-point distortion position was node 8, the algorithm iteration comparison curves were calculated and obtained, as shown in Figure 8a. Similarly, when [K1,K2,K3] = [1,0,0], a fault occurred in zone X9, and the multi-point distortion positions were node 6 and node 12, the algorithm iteration comparison curves were calculated and drawn as shown in Figure 8b; when [K1,K2,K3] = [0,1,1], faults occurred in zones X15 and X27, and the multi-point distortion positions were node 3 and node 33, the algorithm iteration comparison curves were calculated and drawn as shown in Figure 8c; when [K1,K2,K3] = [1,1,1], faults occurred in zones X13 and X25, and the multi-point distortion positions were node 5 and node 23, the algorithm iteration comparison curves were calculated and drawn as shown in Figure 8d. From Figures 6-8, it can be found that the INGO algorithm has an obvious advantage over the NGO, GWO, and WOA algorithms in terms of convergence speed, and the optimal solution can be found in about five iterations by INGO. Though the WOA, SCA, and FFOA algorithms are able to find the optimal solution in some cases, owing to their weak global optimization ability, average numbers of 6, 24, and 75 iterations, respectively, were needed before finding the optimal solution. To prevent the occurrence of contingency, INGO, NGO, GWO, and WOA were each run another 100 times. The accuracy and average numbers of iterations of each algorithm were measured in different cases, and the results are shown in Table 4. Table 4 shows that as the complexity of the fault type increases, the average number of iterations of NGO, GWO, and WOA for fault location increases while the accuracy rate decreases. Among the three algorithms, NGO performed best in terms of the average number of iterations and the solution accuracy. Compared to the previous three algorithms, for INGO, the dimensionality is reduced and the computing speed and the accuracy of the results are improved. For example, compared with NGO, there is an average increase of 4.5% in the accuracy of the solution process for INGO. Moreover, INGO performs better than other optimized search algorithms in terms of computing speed and accuracy; for example, compared with NGO, the computing speed of INGO is improved by 31.9%, which presents huge engineering application value.
Figure 4. Convergence curve of fault location without distortion of information.

Figure 5. Convergence curve of fault location when node information was distorted.

Table 1. Switching function values for each node under different fault conditions.

Table 2. Single-point fault simulation example.

Table 3. Multi-point fault simulation example.

Table 4. Localization results of different algorithms.
A combined Blockchain and zero-knowledge model for healthcare B2B and B2C data sharing
Abstract The two main forms of healthcare data exchange among entities are business-to-business (B2B) and business-to-customer (B2C). The former uses electronic data interchange (EDI) technology between healthcare institutions, while the latter is usually conducted by providing web-based interfaces for patients. This research argues that both forms have inherent security and privacy weaknesses. Furthermore, patients lack appropriate transparency and control over their own Personally Identifiable Information (PII). We explore the issues of medical record exchange, analyze them, and suggest appropriate solutions in the form of a new model to mitigate them. The vulnerabilities, ranging from critical to minor, include the possibility of Man-in-The-Middle (MiTM) and supply chain attacks, weak cryptography, repudiable transactions, single points of failure (SPOF), and poor access controls. A novel model for healthcare data sharing which applies the best security practices will be presented in this research. The proposed unified model counters the listed vulnerabilities. It automates healthcare processes in a decentralized architecture by utilizing smart contracts for B2C transactions such as medicine purchases. The model is based on the Blockchain and zero-knowledge proofs. It is built with novel controls which represent the latest advancements in cybersecurity and has the potential to set a new cornerstone.
Introduction
Recent developments driving the transformation of cities into smart cities call for a rise in the importance of providing cybersecurity for the different data being collected. Although smart cities allow seamless connection among citizens and reduce a city's operating costs, they also create cyber risk. The risks to the data therein will affect all participating stakeholders of the smart infrastructure, which includes financial services, healthcare, transportation, and power (Bai, Hu, He, & Fan, 2022).
The Blockchain technology provides functions necessary for the sharing of trusted and verifiable data (Al-Jeshi, Tarfa, Al-Aswad, Elmedany, & Balakrishna, 2022). It allows the sharing of data across different parties in a secure and verifiable manner. The technology is being integrated into several digital services (Swan, 2015; Al-Aswad, El-Medany, Balakrishna, Ababneh, & Curran, 2021). Many countries have plans to invest in the Blockchain for the development of their digital services to improve process efficiency (Ølnes, Ubacht, & Janssen, 2017).
Many security and privacy concerns arise when data is communicated using the traditional client-server model. Having a unified model which connects isolated steps within the supply chain together can mitigate risks and streamline processes, and the base for such a model is the Blockchain. This research presents the Blockchain, a decentralized technology for storing data shared by a network of peers, as a solution for the presented issues.
In the proposed model, the Blockchain does not replace EDI but replaces the way EDI messages are exchanged. Instead of the data being submitted to an EDI server, the data is submitted to a private Blockchain network. All transacting businesses are part of the network. The Blockchain network, in turn, will validate the transactions by the use of smart contracts and pass each transaction to the other party.
Some security advantages will be provided inherently by the Blockchain network, such as preventing data manipulation and avoiding SPOFs. The proposed model has the following features and advantages:
- Providing unique security and privacy features needed for B2B transactions.
- Mitigating the risk of MiTM attacks by delegating the certificate authority (CA) responsibility to members of the network.
- Reducing the possibility of supply chain attacks by codifying trading agreements and contracts into smart contracts.
- Avoiding the use of vulnerable protocols by mandating a novel unified EDI exchange mechanism through HTTPS based on the RESTful programming concept.
- Providing granular access control.
This research will address, among other issues, the security and privacy limitations of EDI. It will study the risks of sharing healthcare data and mitigate those risks. The main contribution of this paper is the development of a novel model combining zero-knowledge proofs with the Blockchain to solve the security and privacy issues of healthcare data sharing in both business-to-business and business-to-customer scenarios (Mozumdar, Aliasgari, Venkata, & Renduchintala, 2016; de Vasconcelos Barros, Schardong, & Custódio, 2022; Barros, Schardong, & Custódio, 2022). This combined model has the potential to unify the way data are shared in healthcare (Al-Aswad, Hasan, Elmedany, Ali, & Balakrishna, 2019; Al-Aswad et al., 2021).
The rest of the paper is organized as follows: Section 2 reviews the most recent related works; Section 3 reviews EDI technology, its components, controls, security and privacy weaknesses, and their countermeasures; Section 4 discusses the Blockchain technology as a solution; Section 5 presents the combined model, which aims to address the discussed risks and offers suitable countermeasures; Section 6 maps the security and privacy countermeasures within our combined model; Section 7 provides a statement on the reliability of the proposed model's data, discusses the results, and gives an overview of the model's limitations; and Section 8 presents the conclusions and future works.
Related work
The Blockchain technology enables the sharing of trusted and verifiable data among different entities in the healthcare sector (Al-Aswad et al., 2021), and it can revolutionize inter-business processes, such as those seen in supply chains or healthcare data sharing (Al-Abbasi & El-Medany, 2019; Kumar et al., 2022; Truong, Sun, Lee, & Guo, 2020). The Blockchain is a distributed digital ledger where data transactions are visible to the participants or peers (Al-Jeshi et al., 2022). A dynamic consent protocol will allow users to grant, deny, or revoke access to data for different reasons according to their preferences. Blockchain technology offers a different approach to storing information (Truong, Sun, & Guo, 2019). Transactions are the equivalent of records in a classic database. The Blockchain uses a block for each piece of data to be stored, with each block combining cryptographic information about the data in the block with a link to the previous block. A chain of blocks is maintained to establish trust and verifiability. Therefore, if a block within the chain is valid, all blocks up to that block are valid.
EDI refers to businesses electronically communicating data that relate to transactions across the supply chain (Lee & Whang, 2000). The main justification for using such a technology is that it automates various parts of business processes (Lee, Ainin, Dezdar, & Mallasi, 2015). EDI allows two or more systems to directly communicate and transact medical information without the need for human data entry or involvement. It decreases costs and improves the speed and accuracy of medical data sharing (Gullkvist, 2002). EDI brought many improvements to the way data sharing was being conducted (Narayanan, Marucheck, & Handfield, 2009).
The technology employed in the most common EDI standards, such as X12 and EDIFACT, has some basic security capabilities, such as ensuring that the transferred data cannot be read illegally by third parties and that messages cannot be changed in transit. The EDI standards are responsible for their own security. Consequently, different ways of implementing security for each standard, all attempting to reach the same goal of ensuring the confidentiality and integrity of transferred data, have evolved. The impact of and need for security were not evident until technology and automation reached regulated sectors such as healthcare (Blobel, Pharow, Engel, Spiegel, & Krohn, 1999) and banking (Dosdale, 1994). Mechanisms to provide security for EDI were retrofitted into EDI standards years after the standards were developed. A number of messaging security mechanisms such as X400 (email), X435 (email security) and X500 (directory services) are available to use as a security baseline for non-standard EDI (Abrams, Jajodia, & Podell, 1995).
There are numerous standard and non-standard EDI implementations, each having varying limitations. The security and privacy limitations listed here pertain to EDIFACT, an EDI standard implemented by the United Nations (UN) in 1987 (Graham, 1995). This research takes EDIFACT as an example of EDI because it is the only international standard available (Salminen, 1994). Other common standards, like X12 and TRADACOMS, are constrained to particular regions/countries or industries, and non-standard EDI implementations are unconventional. As most EDI standards provide similar features, the same weaknesses may be found in standards other than EDIFACT.
In upcoming sections, this paper will discuss the weaknesses of EDI's security and privacy controls. A summary of the weaknesses is listed as follows:
- Man-in-the-middle (MiTM) attacks are possible due to the lack of certificate verification by an authoritative third party.
- Trust relationships among businesses render the systems vulnerable to supply chain attacks.
- Use of vulnerable cryptographic protocols.
- Transactions can be deleted after occurrence.
- Systems have a single point of failure (SPOF).
- Insufficient access controls.
There are many Blockchain architectures that have been implemented to provide services for the healthcare system. Figure 1 (adapted from Shahzad and Heindel (2012)) represents a Blockchain combined with IoT technologies that enables healthcare facilities to have efficient and accurate record management, which is critical. Figure 2 (adapted from Tanwar, Parekh, and Evans (2020)) shows a "Blockchain-based electronic healthcare record system for healthcare applications"; in this research, the authors "propose an Access Control Policy Algorithm for improving data accessibility between healthcare providers, assisting in the simulation of environments to implement the Hyper-ledger-based Electronic Healthcare Record (EHR) sharing system that uses the concept of a chain-code" (Kim, Yu, Lee, Park, & Park, 2020; Alzuabi, Ismail, & Elmedany, 2022; Attaran, 2022).
A novel platform for monitoring patient vital signs using smart contracts based on Blockchain is shown in Figure 3 (Adapted from Jamil, Ahmad, Iqbal, & Kim, 2020).
Using the Blockchain means that businesses not only can exchange data but also integrate data, according to Swan (2018). In their paper, the authors conceptualize how accounting ledgers can be linked together through the Ripple Blockchain-based network. They indicate that the current ways of transacting, EDI and paper-based included, create accounting journal entries which have to be confirmed and posted by humans. This is prone to errors and fraud. In the Blockchain-powered supply chain process, the whole process is automated, as smart contracts take over the verification tasks of humans.
With the Blockchain, accounting can benefit from an emerging concept called "triple-entry bookkeeping", where one or more accounts are debited, one or more accounts are credited, and the transaction is confirmed in a distributed ledger.
The use of private Blockchains is explored in a publication by Banerjee (2018), which discusses how Blockchain can be used to improve B2B processes that are currently conducted between enterprise resource planning (ERP) systems. The author suggests that Blockchain networks will enhance the standardization, synchronization, and security of business data while ensuring that the data remains immutable and less prone to attacks.
Electronic data interchange (EDI)
EDI is the communication of business data in a structured, computer-readable form through an electronic medium. Data exchanged using the technology does not need to be re-keyed, as the exchange occurs between business systems in different locations (Hill & Ferguson, 1989). The data may be transported using a variety of mediums, including the exchange of physical drives, the use of intermediaries such as value-added networks, and the internet (Shi et al., 2020).
In order to implement EDI, an organization must have all the necessary infrastructure to run the technology or have it provided as a service. There are a few main infrastructure elements, as described by Hill and Ferguson (1989), which will be used within this paper:
- A common agreed-upon standard for representing business documents.
- Application-to-application intercommunication protocols.
- Translation software to convert internal business data into standard formats.
- Networking computer hardware and servers.
- A communication medium, such as Value Added Networks (VANs) or the internet.
Elements of EDI
A number of elements constitute an EDI system. The different elements are illustrated in Figure 4 (adapted from Shahzad and Heindel (2012)). The contents of an EDI message are detailed in the next section.
The Figure depicts the transfer of EDI messages between a sender and a receiver. The messages are transferred in batches, where one or more messages are grouped together and then sent. A business calls the other businesses involved with it in an EDI exchange its trading partners. Usually, a retailer, not the supplier, is the party who invokes an exchange.
Compliance checks also include conformance to the use of proper data element separators, such as the different element separators used within an EDIFACT segment.
Data transformation is the step where data is mapped to the data requirements of the receiver's system. An inbound message has its data mapped according to a pre-specified map definition. The map definition determines the place of each piece of data within the internal system's database.
Every EDI message is formatted in a special way and is combined with other EDI messages in a batch container. There is a special piece of transaction software which does the "behind the scenes" work to enable this grouping. Its concepts are discussed next.
Controls and message safety
The message safety standard available for EDIFACT, which guarantees that messages are resistant to various attacks, is formally known as the EDIFACT Security extension. This extension was designed to provide baseline protections where protocol-level security is insufficient. It is independent of the transport mechanism (Turi, 1993). We review the features of the EDIFACT Security extension below.
The EDIFACT message-level security solutions include AUTACK message, CIPHER message, Message Security Header and Trailer (which explains UNH and UNT message wrappers), and KEYMAN message (Thorud, 1994).
AUTACK message
Secure Authentication and Acknowledgement (AUTACK) message is used in two ways: (a) as an authentication message from the sender to the receiver, and (b) as an acknowledgement message from the receiver to the sender.
When used as an authentication message, the AUTACK message proves that the previous EDIFACT messages were sent from the actual sender and not a malicious third party, the messages' contents and sequence are valid, and messages cannot be repudiated by the sender.
When used as an acknowledgement message, the AUTACK message acts as a confirmation by the recipient that the messages were indeed received, messages' contents are intact, messages are complete and the receipt of messages cannot be repudiated.
CIPHER message
The CIPHER message, as its name suggests, provides confidentiality to EDIFACT messages and interchanges. It achieves this by acting as a wrapper for encrypted EDI content. CIPHER headers are added whenever the EDIFACT content is encrypted to enable it to be processed by the receiver. An overall view of a CIPHER message is shown in Figure 5.
If the receivers possess the correct decryption key, they will be able to decrypt and process the message contents like any other EDIFACT message.
Message security header and trailer
The security services can either be provided by a separate AUTACK message or built into the message by including special security headers and trailers. These two methods can provide all security services, such as integrity and non-repudiation, with the exception of confidentiality.
In order to provide security within a message, header and trailer segment groups are added after a UNH and before the UNT. UNH and UNT are security headers and trailers which provide security-related metadata. Each segment group corresponds to a particular security service. This way, security can be added to any message.
The role of a security header is to specify the security controls which were applied to the message and to provide the data needed to conduct message validation. It includes a listing of the mechanisms and algorithms used, including corresponding keys and certificates.
The role of a security trailer is to carry the results of the security services specified in the header. Usually, it contains the results of algorithm computations. For example, a header may specify that a message uses the SHA1 hashing algorithm to achieve integrity, while the trailer will carry the actual SHA1 hash of the message.
KEYMAN message
The key management (KEYMAN) message allows parties in a communication to request and deliver keys, certificates, and other cryptographic information. It can also be used to convey the revocation of a certificate and a certificate's status.
Security and privacy weaknesses
This research presented a high-level view of the security and privacy issues associated with EDI and how they will be addressed. The following is a more detailed discussion of the issues:
- Businesses must maintain a trading partner profile containing information such as server addresses, bank account numbers, etc. When businesses exchange profiles, a certificate authority (CA) is required to verify that the exchanged profiles are authentic. Value-added networks (VANs), a form of private data exchange networks which act as intermediaries between businesses, were used in the past to provide EDI solutions equipped with CA services, but they were expensive and were soon replaced by the internet. With the global shift towards internet-based EDI, CA services for EDI almost ceased to exist. The lack of internet-based CAs for EDI opened up EDI to man-in-the-middle (MiTM) attacks, including DNS hijacking and packet injection. As opposed to internet websites, which are verified by TLS certificates signed by known CAs, there are no known certificate providers for EDI and no established mechanisms for managing those certificates within the EDI protocols. Such mechanisms, if desired, would have to be retrofitted in protocol revisions.
- Before any EDI communication commences, business documents such as trading agreements and contracts must be signed. Those documents specify limitations on the nature of the business allowed to be done and the volume of transactions. Due to their complexity, those documents are either not codified or are weakly codified into the systems. Businesses may request a transaction through EDI which is outside the arrangement and get it accepted by their partner's system. In this scenario, the trust relationship (Ratnasingham, 1998) between businesses is exploited to affect the integrity of data contained in the system. These cases are a form of cyber-attacks called supply chain attacks (Miller, 2013).
- Parties must agree on cryptographic protocols. EDIFACT, like other standards, supports a wide range of connections such as FTP, HTTP, and others, each offering different cryptographic capabilities. Since EDI is used by many legacy internet systems, the parties may be forced to communicate using deprecated and vulnerable protocols.
- Transactions can be deleted by colluding vendors and suppliers. A common reason for this is to commit tax fraud. There is no mechanism for third parties, such as auditors, to independently verify the occurrence of a transaction. The transactions can be deleted from the application's database.
- EDI systems often have a single point of failure (SPOF). A business often has one internet-facing "gateway" server or an ERP system running AS2 or FTP. This risks the availability of the EDI service, especially if a malicious attacker attempts a denial of service (DoS) attack.
- Any user in a business can view and conduct transactions that represent the business as a whole.
There is no granular level of access control. This leads to a potential loss of privacy.
Note that the presented weaknesses were determined based on observations of this research. They are the points this research will address and solve.
Possible countermeasures for vulnerabilities
This section will highlight the traditional countermeasures (Bendovschi, 2015) that can be utilized to prevent or mitigate the previously listed vulnerabilities (Ingham, Marchang, & Bhowmik, 2020). Note that the actual defenses employed in the proposed model may not match the traditional methods.
MiTM attacks on encrypted communications occur because the public keys of transacting parties are not verified by a trusted third party. Such attacks can be prevented using certificates (Amann, Sommer, Vallentin, & Hall, 2013). The certificates must come from a CA that all trading partners trust.
Supply chain attacks occur because the systems are not equipped with the necessary data checks. They usually perform the same checks on EDI data as on data inputted by a trusted employee within the organization. Such attacks are mitigated with more stringent checks and the codification of the physical trading contracts and agreements (Boyson, 2014).
Deprecated and vulnerable cryptographic protocols should simply be replaced with modern protocols. The use of strong, unbroken protocols must be mandated rather than suggested. Transaction deletion is difficult to prevent in the case of colluding parties; it requires immutable ledgers, such as the Blockchain. DDoS attacks can also be avoided if the Blockchain is used, as the data is replicated among multiple nodes and there is no central server to attack. Granular access controls can be implemented in the ERP systems used by the employees, but they require a network-level implementation to achieve proper protection against advanced persistent threats (APTs).
The Blockchain technology as a solution
The Blockchain is a recent technological advancement which has disruptive potential. It is a distributed ledger of records, in which each party in the Blockchain network holds a copy of the latest version of the ledger. The ledger provided by the Blockchain is an append-only log used to record transaction data. Using a Blockchain network instead of a common database has numerous advantages: there is no central server to attack, the records cannot be modified by anyone, the records (with the confidential information encrypted) can be made available to third parties, and so on. All the participants trust the transactions, as even if one Blockchain server gets hacked, the records will not change and no damage can be done.
Introduction to the Blockchain
The idea of the Blockchain was first envisioned in a paper by Nakamoto (2008). It was originally intended to become a distributed ledger which hosted Bitcoin, a cryptocurrency (an electronic currency based on cryptography). Bitcoin is popular because it is the first digital currency to solve the double-spending problem in a practical way using processing power. Double-spending is a flaw within digital cash schemes whereby money could be spent more than once.
As people realized that the potential of the Blockchain extends far beyond digital currencies, the concept took off as an independent technology. The Blockchain refers to a list of records that are related to each other by cryptography (Yli-Huumo, Ko, Choi, Park, & Smolander, 2016). Each record (block) contains the hash of the record before it, creating a sequence (chain) of records. This is illustrated in Figure 6. The Blockchain records form a ledger which is distributed across many servers which synchronize the records with each other.
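To make the chaining concept concrete, the following minimal Python sketch (an illustration only, not a production ledger) builds hash-linked records in which each block stores its parent's hash, so altering any block invalidates every descendant.

```python
import hashlib
import json
import time

def make_block(transactions, parent_hash):
    """Create a block whose hash covers its contents and its parent's hash."""
    block = {"time": time.time(), "tx": transactions, "parent": parent_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["genesis"], parent_hash="0" * 64)   # no real parent
child = make_block(["tx-1", "tx-2"], parent_hash=genesis["hash"])
```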
The inherent nature of the technology makes it especially suitable for applications having multiple parties who do not trust each other (Daneshgar et al., 2019). This is because every party can contribute to adding records to the Blockchain and each can independently verify the information contained within it.
The Blockchain architecture
It is important to understand the contents of a block and how it is cryptographically linked to other blocks. Furthermore, any reader must also know the process by which a new block is added and how the network peers agree to add it to their ledgers.
With reference to Figure 6, block 0 is the first block in the Blockchain; thus, it is known as the genesis block. Block 1 is the child block of block 0, and block 0 is the parent block of block 1. A genesis block has no parent.
Block
A block consists of a header and a body. The body of a block contains transactions. They may be the change of ownership of an asset, an increase in the balance of an account, etc.
Consensus mechanisms
The main reason a consensus mechanism is used is to avoid the Byzantine Generals (BG) Problem. The problem mainly questions the course of action to take in case not all peers agree on the same results (Baliga, 2017). It helps the network prevent attacks from malicious nodes. Proof-of-work (PoW) and proof-of-stake (PoS) are famous consensus mechanisms. The model in this research uses Practical Byzantine fault tolerance (PBFT), another consensus mechanism in which 2/3 of the nodes must vote to select the node that builds the next block. Although PBFT is used in this research, the next section will describe PoW to highlight the most common method of building blocks, and later sections will describe how PBFT is a more appropriate selection for the context of this research, how it works, and the way it will be utilized.
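As a toy illustration of the PBFT voting threshold mentioned above, the following sketch shows only the quorum arithmetic; real PBFT involves multiple message phases (pre-prepare, prepare, commit) that are omitted here.

```python
def pbft_quorum_reached(votes_for: int, total_peers: int) -> bool:
    """A proposal is accepted only when at least 2/3 of the peers vote for it."""
    return 3 * votes_for >= 2 * total_peers

print(pbft_quorum_reached(7, 10))  # True: 7/10 >= 2/3
print(pbft_quorum_reached(6, 10))  # False: 6/10 < 2/3
```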
Taxonomy of the Blockchain networks
There are three types of Blockchain networks: public, consortium, and private. The public Blockchain is open to anyone in the world. Users can check the transactions and participate in the consensus process. A consortium Blockchain consists of a group of organizations, usually based on business partnerships, and is regarded as partially decentralized because only a subset of the members can participate in consensus, and the selection of organizations who will participate in the subset is bound to respective business arrangements. A private Blockchain is owned by a single organization only. It is operated mostly to achieve better auditability and availability (Zheng, Xie, Dai, Chen, & Wang, 2018). The different types are compared in Table 1.
Security of the Blockchain
Li, Jiang, Chen, Luo, and Wen (2017) conducted a systematic study of the security risks and weaknesses of different Blockchain technologies and discussed a total of 17 risks in the Blockchain and their causes, 12 of which were in smart contracts. The vulnerabilities in the Blockchain are summarized in Table 2.
Since Proof of Work (PoW) is a consensus protocol that confirms that the participating nodes with most of the processing power are the ones who can create the block, the 51% attack was designed to exploit the core of this concept. The attack states that if an attacker could possess more than or equal to 51% of the combined processing power of all nodes in the pool, new blocks can be added by the attacker and the remaining nodes will recognize the update as legitimate. A similar attack can be waged against Blockchains that utilize the Proof of Stake (PoS) consensus protocol by controlling more than or equal to 51% of the total coin balance in circulation.
The proposed Blockchain model
A Blockchain-based EDI has the potential to solve the security and privacy concerns of the old technologies, especially those involving supply chains (Saberi, Kouhizadeh, Sarkis, & Shen, 2019). When the Blockchain is used, the identities of EDI trading partners can be posted on the network to become constantly up-to-date and immutable (the secure standards used for posting profiles are based upon initiating high-level trust when first joining the Blockchain only, and are not a secure choice when routinely adding new partners to the legacy point-to-point EDI). This way, the risk of MiTM attacks when transmitting partner information updates is mitigated, as there is no need for direct communication (which often uses legacy security standards and self-signed certificates). Businesses do not have to maintain partner profiles because the information is available on the network. When posting data to the network, the utilized cryptographic protocols can be standardized and only the most secure ones can be adopted. Note that trusting a Blockchain once is more secure than building trust every time a trading partner is added, because it is less likely that a single secure handshake would be intercepted and spoofed, as opposed to multiple handshakes. The keys needed for joining a Blockchain network can also be practically transferred physically, as only a one-time setup is needed, which is not the case for continually adding point-to-point trading partners.
Figure 7 depicts a sample Blockchain network with two organizations and one ordering organization between them.
All transactions in the Blockchain network are secure and auditable. The transactions cannot be deleted, and the network is resistant to failures. In addition, the uniquely developed smart contracts tackle the issue of trusting the contents of transactions.
Privacy is enhanced as stricter and more granular access controls can be applied to users between transacting businesses and even within a business.
Businesses can enforce access controls on each other. Furthermore, the Blockchain allows for the creation of private channels for confidential deals and can permit the public or the government to access part of the data, such as for transparency or taxation purposes.
Justifying the use of Blockchain to counter existing vulnerabilities
There are two questions to be answered in this section: (a) is Blockchain applicable to B2B transactions, and (b) will using Blockchain lead to better mitigation of EDI's risks? It is evident that B2B transaction processing is an excellent use case for Blockchain, as there are multiple institutions involved who by nature do not trust each other. The institutions can already communicate directly with each other without an intermediary, but this is unreliable due to poor controls on the data and the infrastructure. Referring back to Section 3.3, this research has identified six vulnerabilities affecting the security and privacy of EDI. The distributed, immutable nature of Blockchain inherently mitigates the vulnerabilities relating to repudiation of transactions (related to the non-availability of known CAs described in Section 3.3) and SPOFs. Other vulnerabilities, such as MiTM susceptibility, supply chain attacks, vulnerable protocols, and improper access controls, will be dealt with by deploying appropriate controls in the proposed model. The countermeasures will be discussed further in the upcoming chapters.
Requirements
The proposed model will replace the network-level protocols currently in use for EDI with a Representational State Transfer (REST) application programming interface (API) connection to a Blockchain network. The REST API is an architectural style for creating web services and a replacement for remote procedure call (RPC). It will be utilized in the PoC because it has greater flexibility in defining security policies and higher performance than RPC (Feng, Shen, & Fan, 2009).
Once connected to the REST API, the user will use the HTTP methods GET, PUT, POST and DELETE to query the Blockchain. The connection will use HTTP over TLS/SSL (HTTPS). The user will communicate with the Blockchain network by invoking chaincode-based functions using HTTP requests. Note that the server hosting the REST API is also a peer in the Blockchain network. The communication between the client and peer is modeled in Figure 8.
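To make this concrete, the following is a minimal sketch of how a client could invoke a chaincode function through such a REST API over HTTPS. The endpoint path, JSON payload shape, and the channel and chaincode names are illustrative assumptions for this sketch, not values specified by the model.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// InvokeRequest mirrors a hypothetical REST API payload that maps an EDI
// message to a chaincode function call on a given channel.
type InvokeRequest struct {
	Channel   string   `json:"channel"`
	Chaincode string   `json:"chaincode"`
	Function  string   `json:"function"`
	Args      []string `json:"args"`
}

func main() {
	// Illustrative values: the endpoint, channel, and chaincode names are assumptions.
	req := InvokeRequest{
		Channel:   "trade-channel",
		Chaincode: "edi",
		Function:  "PostInvoice",
		Args:      []string{"INV-1001", "ORG-A", "ORG-B", "1500.00"},
	}
	body, _ := json.Marshal(req)

	// HTTPS enforces TLS between the client and the peer hosting the REST API.
	resp, err := http.Post("https://peer.example.org/api/invoke",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```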
In order to protect against the vulnerabilities mentioned in Section 3.3, the model will provide CA services inside the Blockchain. There will be multiple CAs within the network, each run by zero or more organizations.
Chaincode will minimize the likelihood of supply chain attacks. Users will not be able to make any modifications to the Blockchain without the use of a chaincode function. Real-world agreements and contracts must be codified in chaincode. Refer to Section 4.2 to understand how codifying contracts in chaincode differs from other methods.
Security policies will be created in the REST API (Serme, de Oliveira, Massiera, & Roudier, 2012) to mandate strong cryptography between the user and the peer. Hyperledger Fabric will be modified to mandate strong cryptography among the peers who are members of the Blockchain network. Any connections using suboptimal cryptography will be immediately dropped.
Access controls will be built into the REST API. It will provide authorizations at the user level, rather than the organization level currently used in EDI. A mechanism will be provided for organizations to define allowable actions for their individual employees. No employee will be allowed to conduct EDI transactions they are not authorized to perform.
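As a rough illustration of such employee-level checks inside the REST API, the sketch below wraps a handler in a permission check. The permission store, the header-based user identification and the action names are invented for illustration; a production deployment would back them with the organization's authorization database and TLS client certificates.

```go
package main

import (
	"net/http"
)

// permissions maps a user ID to the EDI actions they may perform.
// In the proposed model this would be backed by the organization's
// authorization database; the in-memory map is a stand-in.
var permissions = map[string]map[string]bool{
	"alice@org-a": {"PostInvoice": true, "QueryLedger": true},
	"bob@org-a":   {"QueryLedger": true},
}

// requireAction wraps a handler and rejects users lacking the named action.
func requireAction(action string, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Assumes an upstream authentication step placed the verified user ID
		// in a header; the header name is illustrative.
		user := r.Header.Get("X-User-ID")
		if !permissions[user][action] {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next(w, r)
	}
}

func main() {
	http.HandleFunc("/api/invoke", requireAction("PostInvoice",
		func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("invoked"))
		}))
	http.ListenAndServe(":8443", nil) // TLS setup omitted in this sketch
}
```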
The Blockchain will inherently provide immutability and availability of data. Organizations cannot collude to hide transactions from tax collectors, and attackers' DoS attacks will not succeed since the ledgers are copied to multiple servers.
Security and privacy standards
The model will be designed to satisfy the requirements and best practices mandated in the ISO27001 and ISO27002 (Calder, 2013; Vasudevan, 2008) international security standards. Specifically, the model will conform to the electronic messaging rules. Figure 9 shows the rules from the standards pertaining to achieving proper EDI. Note that ISO27001 discusses a policy-level managerial perspective of standards implementation, whereas ISO27002 explains how to achieve a good implementation of the controls in ISO27001.
The implementation guidance will be followed in the model and in the PoC. The advantage is that it gives a better standing to the work done in this research. Moreover, it proves that the model is a viable extension of EDI rather than an incompletely studied deviation from the norm.
Network-level view
A core part of the proposed model is the consortium Blockchain network. This type of network was chosen in particular because it was built for housing multiple organizations where the data exchange may be partially confidential. It allows organizations belonging to the same industries to exchange data either in public channels or in private channels where only certain member organizations can access the data.
Before reaching the network, the message data passes through a number of steps. Data is manipulated and processed all the way to the Blockchain. The actual Blockchain is abstracted away from the user by APIs and smart contracts.
Figure 10 illustrates the proposed model from the action point of view of a single organization. An employee in an organization uses a business application, such as an ERP system, to send an EDI message. The message passes through an EDI translator which maps the message to REST API representational data and methods. It turns EDI messages into smart contract calls encoded in REST API instructions. The REST API then queries the peers and orderers (nodes who order transactions in a block) in the network, which execute and post the transactions.
In the figure, peers and orderers are simply represented as smart contracts. The smart contract will query an authorization database. It will send the user's transaction signature and transaction channel ID to the database and get back the authorizations of the user on the particular channel, including whether the user can run the particular smart contract. If the user is authorized and the transaction information is valid, the transaction will be posted to the Blockchain and propagated to other nodes.
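A hedged sketch of such a gated ledger update, written against the Hyperledger Fabric contract API, is shown below. The authorization lookup is simplified to a ledger read under an assumed key scheme; the model's dedicated authorization database and channel-level checks are not implemented here.

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// EDIContract sketches a smart contract that gates ledger updates
// behind an authorization lookup, as described in the model.
type EDIContract struct {
	contractapi.Contract
}

// PostInvoice records an invoice only if the submitter is authorized.
// The key scheme ("auth~<user>~<action>") is a hypothetical convention.
func (c *EDIContract) PostInvoice(ctx contractapi.TransactionContextInterface,
	invoiceID string, payload string) error {

	// Identify the submitting user from their X.509 certificate (MSP-verified).
	clientID, err := ctx.GetClientIdentity().GetID()
	if err != nil {
		return err
	}

	// Authorization record lookup; absence means the action is forbidden.
	auth, err := ctx.GetStub().GetState("auth~" + clientID + "~PostInvoice")
	if err != nil || auth == nil {
		return fmt.Errorf("user %s is not authorized to post invoices", clientID)
	}

	// Valid and authorized: write the transaction to the ledger.
	return ctx.GetStub().PutState("invoice~"+invoiceID, []byte(payload))
}

func main() {
	cc, err := contractapi.NewChaincode(&EDIContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```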
In this proposed model, querying the Blockchain without any updates also requires the request to pass through a smart contract. The reasons are mostly to check authorizations and to prevent users with insufficient privileges from accessing organization data. Note that private channel data are not shared with nodes that are not members of the channel, so a user from one organization cannot view data shared by completely different organizations who are transacting with each other.
There are more details to the interaction of the smart contracts with the Blockchain. Other diagrams will show the topology and components of the model, and how the interaction works.
Consensus
The consensus mechanism used in the proposed model is PBFT. Different organizations may have different sizes and different amounts of capital. If resource-related consensus mechanisms such as PoW or PoS were used, a hacker who gains access to the servers of a large organization could control the processing of the entire network. This is possible because organizations will have varying processing capabilities and funds to stake. Another consequence manifests in a larger organization delaying the processing of transactions submitted by other organizations that it considers its competitors. PBFT treats all organizations equally. One organization will have one vote in the network, thus preventing monopolies.
PBFT, in the proposed model, is configured with 1/3 fault tolerance. This means that at least 2/3 of the organizations in a network must vote for the validity of a transaction before it can be posted. Although this is not practical in public Blockchain networks, the case is different for consortium Blockchain networks, where the number of users is in the hundreds or thousands, not millions.
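The arithmetic behind this bound is the classic BFT relation n >= 3f + 1; a small sketch for concreteness:

```go
package main

import "fmt"

// pbftBounds returns, for n voting organizations, the maximum number of
// faulty members f that PBFT tolerates and the quorum needed to post a
// transaction (at least 2/3 of the members, i.e. n - f votes).
func pbftBounds(n int) (f, quorum int) {
	f = (n - 1) / 3 // classic BFT bound: n >= 3f + 1
	return f, n - f
}

func main() {
	for _, n := range []int{4, 7, 10, 100} {
		f, q := pbftBounds(n)
		fmt.Printf("n=%d organizations: tolerate %d faults, quorum %d\n", n, f, q)
	}
}
```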
Certificate authority
CAs are a common component of private and consortium Blockchain networks. They are responsible for signing the certificates of the nodes in a network. The X.509 certificates signed by the CA identify nodes belonging to a particular organization. Best practices indicate that each organization must have only one CA (Morkel & Eloff, 2004).
The certificates can be used to sign transactions. When a peer wants to endorse a transaction, it signs it using its CA-verified key. Signing a transaction allows it to be traced back to the organization and the peer that signed it. Signing is also a requirement for transactions to be posted to the ledger.
Another type of CA is the TLS CA. It differs from the common CA in that it handles the encryption of communications between nodes in a network. The key generation and storage functions of a CA may be delegated to a PKCS#11-based hardware security module (HSM) for better security.
Membership service provider
A membership service provider (MSP) is one of the components added to the network of the proposed model. Using an MSP allows nodes in an organization to be identified as members of that organization. It is basically a set of information identifying the organization, signed by the CA. It maps the certificates generated by a CA to an organization. Whenever a node (peer or orderer) is added to the network, it must receive a certificate from the CA which also includes information indicating to which MSP it belongs.
Peers
A peer is a type of node in a Blockchain network. It is responsible for maintaining a copy of the Blockchain ledger and committing new blocks. In the proposed model, an organization may own one or more peers.
With reference to Figure 10, a peer runs smart contracts which interact with the ledger. That is, requests from the REST API are forwarded to the peers' smart contracts for subsequent execution and return of results. This is the case for smart contracts that only query the Blockchain but do not change its state.
Results of execution are returned from smart contracts that alter the state of the ledger, but they are not committed to the database at this stage. The result also includes a signed endorsement. The endorsement acknowledges that the peer has approved the transaction, that there was no replay attack, that the user's signature was verified against the MSP, and that the user is authorized to conduct the transaction.
In the case of a ledger update, a peer does not receive commit commands from the REST API, but does receive them from orderer nodes. The peers receive blocks from the orderer nodes for direct import into the ledger.
Endorsement policy
This model advocates the importance of private channels in a network. Such channels allow two or more organizations to have a permissioned ledger where they can post transactions and those transactions remain private.
To achieve even greater privacy, this model mandates that the data contained in private channels reside only on the nodes of the organizations that are members of the channel. But how will the parties agree on the verification of transactions before posting them to the ledger?
The answer is to add a set of rules that define what each member can do on a channel. Members may view the ledger, post new records, validate transactions, add new members to the channel, etc., depending on the privileges assigned to them when they are first added to the channel. In addition, an endorsement policy is added. It defines which members, or how many members, need to validate transactions so that they can be posted on a channel.
Endorsement policies can require that one of two trading partners in a channel validate a transaction. They can also require that all trading partners validate the transactions, or that more than half of the trading partners validate them. The policy will depend on the application and the relationship among the involved organizations.
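For concreteness, Hyperledger Fabric expresses endorsement policies as boolean expressions over MSP principals. The organization names below are illustrative; the three expressions correspond to the one-of-two, all-partners and majority cases just described:

```go
package main

import "fmt"

func main() {
	// One of two trading partners must endorse:
	either := `OR('OrgAMSP.peer', 'OrgBMSP.peer')`

	// All trading partners on the channel must endorse:
	all := `AND('OrgAMSP.peer', 'OrgBMSP.peer', 'OrgCMSP.peer')`

	// A majority (more than half) of three partners must endorse:
	majority := `OutOf(2, 'OrgAMSP.peer', 'OrgBMSP.peer', 'OrgCMSP.peer')`

	fmt.Println(either, all, majority)
}
```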
Note that endorsement is different from consensus in this model. Here, endorsement is for peers, while consensus is for orderers. PBFT will still be used for the orderers, as they always need an agreed-upon order of transactions, but peer endorsement depends on the level of trust among organizations and does not affect transaction validation.

Ordering service

Channels are added to orderers. The orderers are the nodes responsible for receiving endorsed transaction requests from the REST API and creating a block of ordered transactions. The orderers order transactions on a first-come-first-served basis. All the orderer nodes use deterministic algorithms to reach the same order. The details of the algorithms will not be discussed, as they are standardized (Sousa, Bessani, & Vukolic, 2018) and beyond the scope of this research.
The consensus policy adopted in this model from Section 5.4.1 applies to the ordering nodes, not the peers. The ordering service, which refers to the collection of ordering nodes, uses the PBFT model. This means that the network can tolerate up to 1/3 of the ordering nodes going down.
Network structure
The structure of the network components of the proposed model is depicted in Figure 11. It shows a network consisting of two organizations; however, there may be more in a real-life network. Each organization has its own CA and MSP. The MSP is the entity that records information about the identity of the organization and ties nodes to the organization. Each organization also has three peers and one orderer. Note that the figure may imply a SPOF for the REST API, CA, MSP and orderer, but in real-world implementations those components must be replicated to avoid a SPOF. A single component of each kind is illustrated in the figure for demonstration purposes only. The REST API, as shown in the previously discussed action model, receives input from an EDI translation software.
The REST API will communicate directly with the peers and orderers. The smart contracts (chaincode) will run inside the peers of the network. More specifically, the REST API will communicate with the peers and invoke chaincode within the peers based on the arguments it receives from the EDI translator. Each peer maintains a copy of the ledger.
In this model, we separate the Blockchain ledger and the peer chaincode for efficiency reasons. This is because the peers are usually computationally intensive, while the ledger is storage intensive.
The REST API communicates with the orderer only after it receives signed endorsements from the peers.
After consensus among the orderers, a commitment order is sent to the peers who are connected to the orderer. The peers will not commit until they have checked that the transaction is indeed signed by the endorsing peers and that it follows the channel's endorsement policy. These steps of the commit process in the proposed model are illustrated in Figure 12.
Notice that Peer-A2 is connected to Peer-B2 and Peer-A3 is connected to Peer-B1. A question arises as to how peers who are not directly connected to another organization (for example, Peer-A1 is not connected to any peer in organization B) listen for commits from the orderers of another organization. The answer is that peers do not need to have a valid link to another member of the same channel. They just must be connected to an orderer. All orderers, after consensus, will broadcast the message to the peers to whom they are connected. This means that a commit order from Orderer-B1 will be repeated by Orderer-A1 so that it can reach the peers in organization A.
State database
A state database is not a new technique like the others proposed in the model. It is a form of lightweight database solution used alongside the Blockchain to provide quick access to stored values without needing to traverse the Blockchain. Thus, it maintains the latest "state" in a simple form of key-value pairs. We adopt the basic form of a state database and tweak it to better suit the requirements of organizations who conduct B2B transactions.
We tweak CouchDB, a lightweight state database with querying capabilities, as the state database for the PoC's Blockchain. It allows the use of rich queries similar to those of structured query language (SQL).
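A hedged sketch of what such a rich query could look like from chaincode follows: CouchDB evaluates a JSON selector against the state database, so the chain itself is never traversed. The document fields (docType, buyer, status) are assumptions for this sketch, not part of the PoC's actual data model.

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

type QueryContract struct {
	contractapi.Contract
}

// OpenInvoicesFor returns the keys of unpaid invoices addressed to a buyer.
// The JSON selector is evaluated by CouchDB over the state database.
func (c *QueryContract) OpenInvoicesFor(ctx contractapi.TransactionContextInterface,
	buyer string) ([]string, error) {

	selector := fmt.Sprintf(
		`{"selector":{"docType":"invoice","buyer":"%s","status":"open"}}`, buyer)

	iter, err := ctx.GetStub().GetQueryResult(selector)
	if err != nil {
		return nil, err
	}
	defer iter.Close()

	var keys []string
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return nil, err
		}
		keys = append(keys, kv.Key)
	}
	return keys, nil
}

func main() {
	cc, err := contractapi.NewChaincode(&QueryContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```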
6. Mapping the security and privacy countermeasures, and evaluation
Mapping the security and privacy countermeasures
At the beginning of this research, six security and privacy issues were mentioned. The way each issue was tackled is discussed in the previous sections. The following list summarizes the solution approaches.
(1) MitM attacks: Multiple CAs were deployed, each belonging to an individual organization. Organizations trust each other's CA when they are first enrolled together in a private channel. Refer to Section 5 for applicability to point-to-point transactions.
(2) Supply chain attacks: More stringent restrictions on the allowable transactions. Transactions follow the least privilege principle. The same validation rules could be implemented in EDI processors but are more readily and securely implemented in a chaincode language.
(3) Weak cryptography: Mandated the use of strong cryptographic algorithms through REST security policies. Such algorithms are not uniformly mandated by point-to-point EDI, but could be retrofitted.
(5) DoS attacks: Blockchain is immune to DoS attacks due to its distributed nature.
(6) Poor access controls: Stronger inter-organizational ACLs and newly introduced employee-level ACLs. Those ACLs can be retrofitted to EDI processors but are readily available in common Blockchain software.
Security and privacy evaluation
The proposed model has implemented all the points in ISO27001 and ISO27002 that pertain to EDI security. Referring back to Figure 9, the model provided the appropriate protections. It protected the confidentiality, integrity and availability of messages, and ensured that they are transported correctly to receivers.
The services provided by the model are reliable due to their cryptographic underpinnings.They will always provide the intended results, and are fail-safe.
Identities of communicating entities are verified using signatures.
Before joining a private channel, organizations are required to convert the trading rules specified in their agreements into smart contracts and access control rules. Users are authenticated using PKI, which ensures that any attacker masquerading as a user must first seize the user's private key.
Reliability of data
The PoC shows that the proposed model can link records across organizations. The sales invoices entered into the system by one organization show up as purchases of another organization. This clearly improves the efficiency of business processes. The data does not need to be cross-checked.
Such a result is advantageous for organizations. The accuracy and consistency of data which is linked to other sources is better than that of unlinked data. Thus, the proposed model brings better reliability of data than common EDI, because data is linked across organizations in the proposed model but not in common EDI.
Discussion
If Blockchain were implemented to host citizen health records, the chain would normally contain all the data. All the data, including resource-heavy images and other files, would have implications for the storage capacity of the nodes hosting the Blockchain. This is because the same Blockchain is stored by all nodes upon joining the P2P network. Furthermore, there are privacy concerns, as the citizen health records would be completely accessible from any node.
The bandwidth utilization of such a Blockchain would also be an area of concern. This is because there will be an ever-increasing number of blocks and the updates are dynamic. Downloading the blocks by the nodes during every update may consume a high amount of network resources, especially if the data throughput cannot accommodate such downloads.
The proposed model suggested significant changes to the way EDI messaging is done. Although they may seem radical, they are the way forward for any effort to modernize EDI. Having constructed this model in the form of a PoC means that it is possible to implement it on a larger scale.
The best way to implement this model is to begin with the Blockchain network. The network will have to be designed according to the recommendations of this research while taking into account the application and the nature of the entities who will use it. The company must design a set of smart contracts for every operation it wishes other organizations to be able to perform when transacting with it.
The next step is to determine which smart contracts are the most critical. This should be done using a risk analysis. The higher-risk smart contracts will have to be given more attention later on when included in any business agreements.
Configuring a REST API to interact with the nodes is done after the network is in good working condition. Encrypting the connections between clients, the REST API and peers is important to prevent MiTM attacks. Note that all the entities should use certificates assigned by the organization's central CA.
After the REST API is completed, the EDI translation software should be upgraded to provide REST support. The software will have to be adjusted and mapped before usage. The final step is to conduct a pilot test of the system prior to full implementation.
Limitations
The design of this model takes into consideration the system requirements from an implementation perspective. It does not consider the changes in strategy, culture or policies needed to apply the model. Such considerations are sizable and should be discussed in follow-up research.
Conclusion and future works
The main aims of this research were to study the security and privacy weaknesses of healthcare data sharing, determine the ways they can be solved, and develop and validate a solution model which addresses the weaknesses.
This research conducted a review of the security and privacy issues of healthcare data sharing and found new vulnerabilities not discussed in previous publications. The loopholes can be exploited by attackers to disrupt or alter the normal flow of B2B transactions. They include susceptibility to MiTM attacks, susceptibility to supply chain attacks, use of weak cryptographic protocols, and so on. Modifications to the way common EDI works were suggested in order to mitigate those issues.
A Blockchain-based data exchange model was proposed instead of the common direct B2B EDI message exchange model to solve the vulnerabilities of EDI. The Blockchain is designed specifically to work with B2B data, essentially disposing of old EDI messaging protocols such as email, FTP and AS2. It was implemented in the form of a PoC for demonstration purposes. The PoC was shown with a sample business process as an example. This research is the first attempt to address EDI's security and privacy issues using Blockchain. It aims to achieve a milestone within the field of EDI and cybersecurity research.
Blockchain is an emerging technology which can bring many benefits to the area of healthcare. However, like other technologies, it must be inspected and tested thoroughly before it can be offered for real-world use. Its risks should be further studied, including comparing its advantages and risks with those of cloud-based models.
We recommend developing a zero-trust unified model which assumes no device or network is trusted unless its identity is verified by the system. This model can use Blockchain to protect the devices and networks across the smart hub and allow data to be exchanged in a secure manner between devices and services.
Using this Blockchain-based model as the security layer for data sharing and identity and access management (IAM) architecture, the security model combines digital assets within a smart city hub and acts as a trustless layer for protecting the data behind databases. This results in enhanced accuracy of tracking and analyzing various sensors and smart devices, such as home security sensors and the internet of health things. In turn, it will enable secure sharing of smart devices and services.
In the healthcare industry, the Blockchain has the potential to automate prescription dispensation and enable new business models that allow businesses to leverage Blockchain trusted systems to provide 24/7 services without human interaction. The zero-trust concept can be implemented using the Blockchain for the patient data received from sensors or IoT devices, which can be monitored by the patient and medical institutions. The risk associated with IoT is that the device itself could be used by another person to send live data. This can be mitigated by an integrated AI solution to ensure consistency of data.
Figure 2. Blockchain-based electronic healthcare record system for healthcare applications.
Figure 4. The elements of electronic commerce/EDI.
Figure 5. EDI segment wrappers of a CIPHER message.
The header usually contains (Zheng, Xie, Dai, Chen, & Wang, 2017):
Block ID: a number used to identify the block's sequence number.
Timestamp: indicates the time this block was created.
Previous block hash: a cryptographic digest of the entire block preceding the current block.
(Optional) Merkle tree root hash: a cryptographic digest of all the transactions in the current block.
(Optional) Block version: the version number of the software used to build the block.
(Optional) Difficulty goal: relates to a proof-of-work consensus concept where the block hash must be less than a certain value.
(Optional) Nonce: a number used once to indicate that significant processing has occurred. It is put in context in Section 4.2.
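The same header fields, restated as a minimal Go struct for concreteness; the field types are conventional choices, not mandated by the text, and the optional fields are modeled as pointers:

```go
package main

import "time"

// BlockHeader mirrors the fields listed above; optional fields are
// pointers so their absence can be represented.
type BlockHeader struct {
	BlockID        uint64    // sequence number of the block
	Timestamp      time.Time // when the block was created
	PrevBlockHash  [32]byte  // digest of the entire preceding block
	MerkleRoot     *[32]byte // optional: digest of this block's transactions
	Version        *uint32   // optional: software version used to build it
	DifficultyGoal *[32]byte // optional: PoW target the block hash must beat
	Nonce          *uint64   // optional: proof that work was performed
}

func main() {}
```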
Figure 8. Client and peer communication using REST API.
Figure 10. Action steps of the proposed model.
Figure 11. Network structure of the proposed model.
Figure 12. Commit process of the proposed model.
Table 1. Comparison among types of Blockchain networks.
Table 2. Taxonomy of vulnerabilities in the Blockchain.
Revision of Liparis (Orchidaceae, Epidendroideae, Malaxidinae) in Brazil
Abstract The presence of Liparis occurring in Brazil was revised, resulting in three taxa confirmed in the national territory. Nine lectotypifications and six neotypifications, together with the demotion of L. inundata to a synonym of L. nervosa, are proposed. Species can be distinguished by the leaf blade, the number of leaves per pseudobulb and the presence of calluses on the lip. Of those, only L. cogniauxiana is endemic to the country, being restricted to the Cerrado biome. Liparis nervosa occurs in all Brazilian biomes and is the only one registered in the Pampas and the Caatinga, while L. vexillifera occurs in the Atlantic Forest, Amazon and Cerrado biomes. According to the IUCN criteria, L. cogniauxiana, L. nervosa and L. vexillifera are in the ''Least Concern (LC)'' category due to their broad distribution, number of occurrences, presence in protected areas and low pressure from indiscriminate collecting.
Introduction
Orchidaceae is one of the richest plant families (Chase et al. 2015) and is well represented in Brazil, with 251 genera and approximately 2,500 species, of which almost 1,500 are endemic to the country (BFG 2018). Although Liparis Richard (1817: 39) is a cosmopolitan group with more than 300 species (Cameron 2005), only three are traditionally recognized as occurring in Brazil (Santos & Smidt 2020).
Liparis was described in 1817 by Louis-Claude Marie Richard, with Ophrys loeselii Linnaeus (1753: 947) as the type species, based on a specimen from Sweden that honors the plant collector Peter Johann Loesel, a German botanist from the 1600s who studied the Prussian flora (Loesel 1703). Linnaeus described this species as plants with bifoliate and lanceolate leaves, a glabrous inflorescence with 5 to 8 flowers, slightly reflexed petals, and a strongly ovate lip. The most complete taxonomic approach that classifies Liparis at the infrageneric level was carried out by Garay & Romero-Gonzalez (1999), who proposed a key for identifying the taxa. The authors reported four subgenera and 19 sections.
Liparis was historically split into different smaller groups based solely on morphological characteristics, and some of the subgenera and sections can be interpreted as separate genera according to some authors (du Petit Thouars 1809; Pfitzer 1887; Margońska & Szlachetko 2001; Jones & Clements 2005); nonetheless, these classifications rarely or never represent the evolutionary pattern of the group (Cameron 2005; Pridgeon et al. 2005). Since there is much historical confusion concerning the Liparis taxonomy, it is better not to adopt an infrageneric classification until a more robust phylogeny is proposed for the Neotropics (Pridgeon et al. 2005; Radins et al. 2014).
This research aimed to revise the Liparis species recorded in Brazil.We present taxonomic notes, complete descriptive comments with ecological information, diagnoses, illustrations, photographs, identification keys, distribution maps and an assessment of the conservation status of these species.
Results and Discussion
A total of 767 exsiccates were studied, 736 from specimens collected in Brazil and 31 from other countries. A total of 50 vouchers belong to Liparis cogniauxiana F. Barros & L.R.S. Guimarães (2010: 31), 562 to Liparis nervosa (Thunberg 1784: 814) Lindley (1830: 26) and 124 to Liparis vexillifera Cogniaux (1896: 289). Of those, only L. cogniauxiana is endemic to the country, being restricted to the Cerrado biome. Liparis nervosa occurs in all Brazilian biomes and is the only one registered in the Pampas and the Caatinga, while L. vexillifera occurs in the Atlantic Forest, Amazon and Cerrado biomes (Fig. 1).
Liparis cogniauxiana is characterized by its small size and terrestrial habit, with two or rarely three plicate leaves emerging from the side or apex of the entirely or mostly aboveground pseudobulb.
It can be recognized and distinguished from the other two species of the genus by its vegetative morphology, with greater compactness between the pseudobulb and the leaves and a short petiole, and, in its flowers, by the two inconspicuous calluses at the base of the lip. This species resembles Liparis nervosa, being distinguished by a vegetative size of 51-149 mm instead of 92-520 mm in length, by the presence of two or rarely three leaves per pseudobulb, and by the rounded calluses on the lip.
Liparis cogniauxiana is endemic to the Cerrado biome, usually occurring in low-altitude areas within the forest formation of dry woods, but occasionally it can be found exposed to the sun in the formations of dry or rupestrian fields. It blooms from December to April and is fertile from the beginning to the end of summer. With an extent of occurrence (EOO) of approximately 1,402,367.520 km² and an area of occupancy (AOO) of approximately 200,000 km², together with several records inside protected areas, this taxon falls into the category of ''Least Concern (LC)''.
The Cymbidium bituberculatum protologue mentions a collection by J. Cooper from Nepal that could not be found in any herbarium, and the information in the text is insufficient to confidently determine whether it was ever herborized. In the same work, Hooker (1824) presents an illustration based on the specimen used to describe his species; hence, it is designated as the lectotype here.
The collection ''Ridgway J. 169, May of 1834'' used in the description of Liparis guineensis could not be found and is probably lost; hence, the illustration based on the same exemplar (Lindley 1834) is here designated as the lectotype.
The identity of Liparis elata var. purpurascens is uncertain. The protologue lacks an indication of original material, and the only two pieces of information given are ''purple leaves and erect bracts''. Although the diagnosis of this taxon as a variety of Liparis nervosa is unclear, for taxonomic stability we still opted to designate a collection as the neotype: ''R.P. Lyra-Lemos 4618'' (MAC11091), as it bears the purplish leaves mentioned in the protologue.
There are three syntypes of Liparis kappleri (AMES00052492; AMES00271895; P00347774). As no holotype is indicated among these duplicates in the protologue of the species, and the material deposited at P (P00347774) is of better quality, it is designated here as the lectotype.
The Liparis odontostoma holotype collection, made by Hooker J.D. in the Sikkim state region of India, is probably lost, and as no other original material seems to exist, we choose to designate the collection of Hooker J.D. without number (K000387773) from Mount Khasi in the Meghalaya state region as the neotype, as it is well preserved and was found in a nearby area of India, representing the taxon accordingly.
There are multiple syntypes of Liparis eggersii deposited in distinct herbaria. Therefore, we here designate the collection at GOET (008575) as the lectotype, choosing it among the duplicates due to the good condition of the exsiccate, which presents a complete inflorescence with fruits and flowers.
The description of Liparis elata var. latifolia by Ridley (1886) emphasizes broad leaves as a morphological characteristic for recognizing the species. The protologue indicates two distinct collections as types, with multiple duplicates spread among different herbaria. The collection of Balansa B. 4542 from Paraguay presents specimens with narrower leaves than the collections of Wright C. 1495 from Cuba, thereby being ruled out as a lectotype. Among the duplicates of Wright C. 1495, the material deposited at BM (000074263) is chosen to be designated here as the lectotype due to the good condition of the exsiccate, with complete vegetative and reproductive parts, as well as for displaying morphological characteristics closely related to the original description of the taxon.
The Liparis elata var. rufina protologue indicates two distinct materials that are herborized in the same exsiccate, while the collection of Morson without number (K000242155) comprises only an inflorescence and a drawing of a flower belonging to one specimen. The collection of Barter without number (K000242156) is complete, composed of vegetative and reproductive parts from two individuals in addition to the drawing of a flower; hence, it is here designated as the lectotype.
The Liparis bituberculata var. khasiana holotype is lost, and no other original material seems to exist; thus, we designated the collection of Griffith W. 5068 (K000387771) as the neotype, as it supports the characteristics described by the author and was also collected in India. This same exsiccate includes another collection (K000387770) identified as L. bituberculata var. khasiana; however, this one lacks information about the collector, and the specimen could even belong to another species, as it has some unique features such as an elongated stem and the absence of pseudobulbs; therefore, it should be disregarded.
Three duplicate collections of original material belonging to Liparis elata var. longifolia are deposited at P (P00338287; P00338288; P00338289). Even though they are all in good condition, we opted to designate the collection P (00338287) as the lectotype here, as it is the only one with flowers and fruits in the same exsiccate.
While studying Liparis nervosa, we noticed that some plant collections had smaller sizes and oblong-lanceolate leaves. Exemplars with these morphological features generally inhabit wetlands in open fields, with some exceptions in the Amazon Forest, where this morphotype (Fig. 3a) is also found in the forests near the Rio Negro River.
The variations in vegetative morphology of these specimens have caused taxonomic controversy, even though the shape and size of the floral parts do not differ from those of common L. nervosa. This morphotype was previously recognized as Liparis elata var. inundata (Barbosa Rodrigues 1877; Cogniaux 1896; Hill 1926), a synonym of L. nervosa that was recently elevated from variety to species status as Liparis inundata by Pansarin et al. (2020). Nonetheless, the difference in phenology within populations was used as a comparative feature between this morphotype and common L. nervosa. However, we recommend caution when using this characteristic, since populations may differ in their flowering season depending on where they occur; these variations could be attributes of the habitat itself, driving populations to flower at different times of the year.
For now, we prefer to keep L. inundata as a synonym of L. nervosa, since the morphological and ecological characteristics used to discern them are not sufficient, and it is not hard to find plants with intermediate morphology (Fig. 3b), which could cause even more taxonomic instability in this taxon. One future approach could be to manually cross-pollinate these different morphotypes, monitor the offspring generated between populations, and compare the morphological characteristics of the different individuals generated.
Liparis nervosa is characterized by its highly variable vegetative size and terrestrial habit, but occasionally it is found growing on organic matter accumulated between tree trunks or rocks. Mature individuals always have more than two leaves, which are deciduous and show rapid growth after falling, emerging from the side or apex of the entirely or mostly aboveground pseudobulb. The flowers are usually fertile in succession in a spiral conformation from the base of the inflorescence to its apex.
It can be recognized and distinguished from the other two native species by its usually larger size and many leaves, and its pseudobulbs always bear long and thick roots. The flowers are similar to those of Liparis cogniauxiana but can be distinguished by the two tooth-like calluses at the base of the lip.
Liparis nervosa can be found in all Brazilian biomes, flowering from January to December. It commonly occurs in lowland forests, montane forests, and marshes but can sometimes be exposed to the sun in open flooded fields or on roadsides near forests. With an extent of occurrence (EOO) of approximately 6,702,407.731 km², an area of occupancy (AOO) of approximately 2,128.000 km², and a presence in all Brazilian biomes, the taxon falls into the category of ''Least Concern (LC)''.
Pseudobulbs cylindrical, ellipsoid or rarely oblongoid; covered by white, green or brown deciduous foliaceous sheaths. Leaves 47-160 × 10-83 mm, green, one per pseudobulb, with several layers of a sheath-like petiole 8-58 mm in length; lamina conduplicate or flat, sometimes involving the floral stem, coriaceous, oblong-lanceolate, lanceolate, rarely elliptical, margin entire or undulate, apex acute, rarely obtuse. Inflorescence a 25-163 mm raceme; floral bracts at the base of the pedicels, acuminate. Flowers resupinate; yellow or green, sometimes purplish or reddish only in the lip; pedicels 3-8 mm in length; ovary 2-7 mm in length. Dorsal sepal 6-8 × 1-2 mm, oblong or oblong-lanceolate, margin entire and revolute, apex obtuse or slightly acute. Lateral sepals 5-7 × 0.5-2 mm, free, oblong or oblong-lanceolate, margin entire and revolute, apex obtuse or slightly acute. Petals 5-9 × 0.4-1 mm; linear; margin entire and revolute; apex obtuse. Lip 6-7 × 4-6 mm, trilobate, glabrous; base with two longitudinal calluses that extend up to the apex of the lip, with one internal robust vein; lateral lobes extending from the base up to the middle of the lip, rounded; mid-lobe ovate or rarely obovate, slightly reflexed, margin entire, slightly undulate near the apex; apex rounded, rarely emarginate. Column 4-5 mm in length, slightly arched; foot short, apex winged; anther green or yellow. Pollinarium with two ovoid, bipartite pollinia.
Figure 1. a-d. Distribution of Liparis: a. occurrence of all species in Brazil and indication of the country's position in South America; b. occurrence of Liparis nervosa; c. occurrence of Liparis cogniauxiana; d. occurrence of Liparis vexillifera.
Figure 4. a-j. Illustration of Liparis nervosa: a. habit of the morphotype previously called Liparis inundata; b. habit of the intermediate morphotype between L. inundata and common Liparis nervosa; c. habit of common L. nervosa; d. frontal view of the flower; e. lateral view of the flower; f. dorsal sepal; g. petal; h. lateral sepal; i. lip; j. lateral view of the column attached to the ovary (a. G.A. Black 2749, 2771; b. G.T. Prance 15906; c, e, d, f, g, h, i, j. T.F. Santos 60).
Figure 5. a-h. Illustration of Liparis vexillifera: a. habit; b. frontal view of the flower; c. lateral view of the flower; d. dorsal sepal; e. petal; f. lateral sepal; g. lip; h. lateral view of the column (a-h. T.F. Santos 350).
A sustainable development of a city electrical grid via a non-contractual Demand-Side Management
An increasing energy consumption of large cities, as well as an extremely high density of city electrical loads, leads to the necessity of searching for alternative approaches to city grid development. The ongoing implementation of energy accounting tariffs with differentiated rates, depending upon market conditions and changing over a short-term perspective, provides the possibility of using them as the financial incentive base of Demand-Side Management (DSM). Modern high-technology energy metering and accounting systems, with a large number of functions and consumer feedback, are supposed to be good means of DSM. Existing Smart Metering (SM) billing systems usually provide general information about the consumption curve, bills and compared data, but not advanced statistics about the correspondence of financial and electric parameters. Also, consumer feedback is usually not fully used. So, efforts to combine the market principle, Smart Metering and consumer feedback for active non-contractual load control are essential. The paper presents a rating-based multi-purpose system of mathematical statistics and algorithms for DSM efficiency estimation, useful for both consumers and energy companies. The estimation is performed by SM data processing systems. The system is aimed at load peak shaving and load curve smoothing. It is focused primarily on retail market support. The system contributes to energy efficiency and distribution process improvement by manual management or by automated Smart Appliances interaction.
Introduction
An increasing energy consumption of large cities, as well as an extremely high density of city electrical loads, leads to the necessity of searching for alternative approaches to city grid development. One of the promising approaches is to use the concept of Demand-Side Management to achieve control of the electrical load curve [1-4].
Nowadays, the development of market mechanisms and electricity markets in the global energy sector leads to changes in the rates system. Advanced metering of electricity and power under accounting tariffs with multiple rates differentiated by time of day becomes more complicated. There is a prospect of rate changes over medium and short intervals of time. The Smart Metering concept provides for a broad implementation of modern informational and measuring components. A number of government regulations support this course [5-8].
Taking this into account, the common interests of consumers and energy companies are growing. There is a need for a program of market participants' information support and evaluation of electrical parameters, focused on accounting differentiated by time intervals, which is designed to incentivize the participants and provide non-contractual management of power consumption, distributed generation and the load of grid components [9-12]. The introduction of these informational resources is beneficial to all the market participants: 1) consumers have an opportunity to assess their power consumption and manage it automatically depending upon the rates, as well as to reduce their energy costs and to choose the power supplier with the most favorable terms; 2) owners of distributed generation can get the maximum profit from its use; 3) energy companies are able to influence demand and the proposal for distributed generation, reducing or changing the load of grid components. They also have the opportunity to conduct advanced statistics of consumption as well as automation of financial and contractual processes. Existing Smart Metering billing systems usually provide little general information about the consumption curve, bills and compared data, but not advanced statistics about the correspondence of financial and electric parameters. Also, consumer feedback is usually not fully used. So, efforts to combine the market principle, Smart Metering and consumer feedback are essential.
The dual-purpose incentives
Besides the consumers' efforts to contribute to the global idea of energy efficiency and ecology improvement, the main incentive for the consumers is an opportunity to decrease their energy costs. An analysis of the potential to decrease the costs is aimed at determining the extent of correspondence between the rates schedule and a consumer's load curve. The analysis also includes the influence of technical losses.
A case when there are M different rates (or prices of a complex rate) offered to a consumer is considered. All the rates (or prices) can be ranked by their values according to the principle "cheap-expensive". The most expensive rate may be assigned an effectiveness of αmax = 0 %, the cheapest αmin = 100 %. A rate with an intermediate price cj can be assigned an intermediate value αj, proportional to the price difference between this rate and the most expensive one relative to the price difference between the two extreme rates.
The principle of ranking is shown in Figure 1. There are three rates: night (from 23:00 to 07:00), semi-peak (from 10:00 to 17:00 and from 21:00 to 23:00) and peak (from 07:00 to 10:00 and from 17:00 to 21:00). The basic parameter of rate-j efficiency αj can be defined as a normalized relative value obtained from the minimum cmin and maximum cmax prices of the night and peak rates, correspondingly, and the coefficient of efficiency e is then the weighted average of the rate efficiencies (given below). Its value represents the opportunity of savings for the consumer. There are two extreme cases: 1) all the electricity was consumed during the rate period with αmax = 0 %, which means the consumer did not use the opportunity for savings; 2) all the electricity was consumed during the rate period with αmin = 100 %, which means the consumer fully used the rates for savings.
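A hedged reconstruction of the expressions referenced above, following directly from the verbal definitions; weighting e by the energy Wj consumed under each rate is an assumption consistent with "the weighted average of the rates":

```latex
\alpha_j = \frac{c_{\max} - c_j}{c_{\max} - c_{\min}} \cdot 100\,\%,
\qquad
e = \frac{\sum_{j=1}^{M} \alpha_j W_j}{\sum_{j=1}^{M} W_j}
```

The boundary cases check out: αj = 0 % for the most expensive rate and 100 % for the cheapest.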
The actual value of the rate efficiency coefficient lies between 0 % and 100 %.
The weighted-average rate use efficiency index e has a triple meaning: 1) the percentage of the available economy actually utilized; 2) the extent of correspondence between the rates schedule and the consumer's load curve; 3) the elasticity of demand.
The consumer is able to estimate his savings and choose the rate most appropriate for him. The estimations are suitable for a-priori and a-posteriori dynamic rate selection. The energy company is able to estimate demand-side management efficiency due to the fact that the rate use efficiency index is approximately equal to the elasticity of demand.
There are other components of the efficiency rating, corresponding to the costs for electricity transmission, power market support and other services.
Efficiency for electricity transmission costs is inversely proportional to the share of electricity consumed during the hours of the actual peak: consumption only during the hours of the actual peak corresponds to βmax = 0 %, while the elimination of consumption during the hours of the actual peak corresponds to βmin = 100 %. Electricity consumed during the hours of the actual peak can be assigned some corresponding intermediate value βj, where Wpeak is the volume of electricity consumed during the hours of the actual peak on working days and W is the total volume of consumption on working days. This rating is aimed at reducing power consumption during the hours of the actual peak. Rating of the cost effectiveness for power transmission: powerful consumers can be used to create a regulating effect on the load during the hours of the planned peak. If the maximum load greatly exceeds the average one during the peak hours over a period of time, the efficiency of using grid power tends to γmax = 0 %; the reduction of consumption during the hours of the planned peak corresponds to γmin = 100 %. Intermediate power consumption during the hours of the planned peak can be assigned some appropriate intermediate value γk, where Wk is the maximum of the hourly power consumptions during peak hours on working days and Wav is the average hourly power consumption. This rating is aimed at reducing power consumption during the hours of the planned peak.
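Hedged reconstructions of both component ratings, using the simplest expressions consistent with the stated boundary conditions; the symbol names are normalized assumptions:

```latex
\beta_j = \left(1 - \frac{W_{\mathrm{peak}}}{W}\right)\cdot 100\,\%,
\qquad
\gamma_k = \frac{W_{av}}{W_k}\cdot 100\,\%
```

Here β falls to 0 % when all consumption occurs in the actual-peak hours, and γ falls toward 0 % as the maximum hourly consumption Wk grows far beyond the average Wav; the authors' exact forms may differ.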
Rating of the effectiveness of fact-to-plan consumption deviation. A feature of the retail electricity market is that consumers may not be penalized if electricity consumption deviates at a certain time in the proper direction. Therefore, it is proposed to evaluate the effectiveness of the fact-to-plan deviation in terms of the deviation of actual costs from the potential costs for the case when planning is not applied at all. As electricity costs increase from the planned level to the level of potential costs with no planning, the efficiency varies from δmin = 100 % for the planned costs down to δmax = 0 %. Intermediate outcomes can be assigned some appropriate intermediate value δl, where Cplan is the cost of the planned hourly volumes of consumption, Cnp is the potential cost for the case when planning is not applied, and Cfact is the cost of the actual hourly volumes of consumption.
The resulting rating is formed as a weighted average over all the indicators (given below), where each weight corresponds to the share of the particular type of costs listed above. Thus, the resulting rate use efficiency index is a weighted average taking into account the share of costs for the various components. Its value characterizes how much of the available economy has been used by the customer. The real value of the rate use efficiency lies between 0 % and 100 %.
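Hedged reconstructions of the deviation rating and of the resulting weighted index; the normalized cost symbols and the cost-share weights are assumptions consistent with the surrounding text:

```latex
\delta_l = \frac{C_{np} - C_{\mathrm{fact}}}{C_{np} - C_{\mathrm{plan}}}\cdot 100\,\%,
\qquad
e_{\Sigma} = \frac{\alpha C_{\alpha} + \beta C_{\beta} + \gamma C_{\gamma} + \delta C_{\delta}}{C_{\alpha} + C_{\beta} + C_{\gamma} + C_{\delta}}
```

where each C in the second expression is the share of total costs attributable to the corresponding service. The δ expression gives 100 % when actual costs match the plan and 0 % when they reach the no-planning level.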
Using the rate use efficiency index e, it is possible to determine the economy E (the realized share of the potential saving) and the lost savings El (its unrealized share). Another important parameter is the smoothing of the load curve. It contributes to increased equipment lifetime and durability as well as a decreased peak load on lines, so it is very important. From the point of view of mathematical statistics, the load variation can be characterized by the standard variation coefficient v. But this parameter is not evident even for engineers.
One of the ideas for how to use this variation parameter as an incentive for a consumer is to show him the economy based on the decrease of losses. The losses in the cables and wires connecting a consumer to the electricity mains are considered.
From the theory of electrical power engineering it is well known that the minimum of losses during a period of time corresponds to the case when the load flows are equal in all parts of the period.
It is useful to estimate the relative reduction of losses for a single rate and the relative reduction of losses during a time period of n intervals, correspondingly, where v is the coefficient of consumption variation, ΔWuni is the loss for the case of absolutely uniform consumption, and ΔW is the loss for the case of arbitrary consumption. Studies and calculations show that during the day typical values of v vary from 0.23 to 0.75 for different rate periods, and the corresponding loss decrease is from 7 % to 30 %. Taking into account that the typical loss level is about 10 % for buildings' wiring, the savings will be from 0.7 % to 3 %.
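A plausible reconstruction rests on the standard result that resistive losses scale with the mean square of the load, so that ΔW = ΔWuni(1 + v²) for a load with variation coefficient v; the relative loss reduction achievable by flattening the curve is then

```latex
\rho = \frac{\Delta W - \Delta W_{uni}}{\Delta W} = \frac{v^2}{1 + v^2}
```

which reproduces the order of magnitude quoted above over the stated range of v, although the authors' exact formula may differ.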
Sometimes the problem at hand is not the value of the maximum power consumption itself, but the dynamics of the load increase at peak times. The concept of easy load curve smoothing by means of interaction between the Smart Appliances control center and the Feedback center is considered (Figure 2).
Figure 2. The planning principle
The system automatically accounts for the load start-up duration required by the equipment and compares it with the time of the rate change. In everyday life this turns out to be convenient for consumers: the opportunity to receive the result of the electrical installations at the time of the morning rise or of a production process run. In the first case (Figure 2), the readiness time is equal to the time of the rate change. From the point of view of the energy company as well as the consumer, it is efficient to switch on the device so that the power consumption is accounted at the cheaper rate (area 1). Operation is planned in such a way that the time of the rate change is also the readiness time of the process. When the readiness time comes later than the time of the rate change (the second case), it is profitable that the electrical energy is accounted partially at the cheaper rate (area 2) and partially at the more expensive rate (area 3). The condition of profitable operation is given below, where C4 is the cost of equipment idle time (e.g., losses) and C3 - C2 is the profit gained from the rate difference.
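The inequality itself is a natural reconstruction from the stated quantities:

```latex
C_3 - C_2 > C_4
```

that is, shifting the operation is profitable while the saving from the rate difference exceeds the cost of the idle time.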
Distributed generation estimation
Particular installations of distributed generation (DG) are under consideration. Such installations are often privately owned and have small power. First of all, two effects of the introduction of consumers' own generation attract the attention of energy companies: 1) the effect of masking the load, when a sudden shutdown of consumers' own generation may result in feeder overloads due to the fact that the feeders were not designed for the full capacity; 2) the effect of the consumers' generation far exceeding the demand (e.g., during the night), which can cause increased losses and undesirable reverse flows of great value. The RMS imbalance of generation to load shows the extent of balancing during M periods of time, where Wi is the load during a single period of time and Wgi is the generation during a single period of time.
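A reconstruction of the RMS imbalance from its verbal definition (the exact normalization is an assumption):

```latex
D_{RMS} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(W_{gi} - W_i\right)^2}
```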
For the case when the imbalance value is far from zero, a permanent connection to the main grid, acting as a damping component, is necessary for the potential island of load and generation.
The one-time maximum deviation of the generation from the load is given below. This parameter shows the extent of the instantaneous damping necessity during particular periods of time and characterizes the maximum feeder load.
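Reconstructing the dropped expression from its definition:

```latex
D_{\max} = \max_{i}\left|W_{gi} - W_i\right|
```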
It is known that, from the point of view of distribution utilities, the benefits from the presence of DG primarily consist of load coverage and the reduction of feeder flows. This results in the possibility of postponing capital investments in grid reinforcement. The maximum feeder load corresponds to the period of the load curve maximum, especially under post-failure conditions or repair schemes. Demand for DG and consumers' own generation in these conditions increases. That is why the dual-purpose incentives include the operation time during the period of the peak rate, the share of electrical energy produced during the period of the peak rate, and the income from the generation of electricity, where Tmax is the duration of the peak rate (special or post-failure rate) and Tst is the idle time of the DG installation during the period of the peak rate. For the share of electrical energy produced during the period of the peak rate, nmax is the number of single time periods during the peak rate and Pg is the generation power (the corresponding expressions are given after the next paragraph).
Using the high-price rate cmax, it is possible to calculate the revenue R as the product of cmax and the generated energy Wgi, as well as the lost revenue due to equipment idle time. The mentioned statistics functions are also available via the Individual Statement.
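Hedged reconstructions of the four distributed-generation incentives named above; all are the simplest forms consistent with the stated variables, and the rated-output normalization in the share formula as well as the interval length Δt are assumptions:

```latex
k_T = \frac{T_{\max} - T_{st}}{T_{\max}}, \qquad
s_g = \frac{\sum_{i=1}^{n_{\max}} W_{gi}}{P_g\, n_{\max}\, \Delta t}, \qquad
R = c_{\max}\sum_{i=1}^{n_{\max}} W_{gi}, \qquad
R_{lost} = c_{\max}\, P_g\, T_{st}
```

where Δt is the length of a single accounting interval.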
The set of parameters
The entire system of mathematical statistics, including the dual-purpose incentives, was established (Table 1).
All the information contained in the evaluation has both an engineering and a motivational (incentive) character. On the one hand, formal recommendations to the consumer are necessary to clarify the opportunities to improve the quality of consumption; on the other hand, a simple formal approach is not comprehensive. The issuance of such recommendations is to be implemented with care, and some issues of engineering psychology and marketing are to be taken into account. The recommendations are to be linked with the technical, economic, operational and marketing issues facing the energy company at the moment of their issuance.
The estimation is performed by SM data processing systems. It can be easily integrated into Automated Meter Reading and Advanced Metering Infrastructure systems. The cores of these systems are shown in Figure 3. Besides energy itself, the measured values may include active and reactive power.
Estimations performed by Smart metering data processing systems
The interaction structure is shown in Figure 3.
The system includes the unit of dynamic rate change and is focused primarily on retail market support. It contributes to energy efficiency and distribution process improvement by motivation (manual management) or by Smart Appliances interaction (automated management). The feedback of SM can be integrated with Smart Home and Smart Energy systems. In manual mode, feedback data are output to the Individual Statement in an easily understandable form; in automatic mode, data are used by the Smart Appliance control center through WLAN/HAN. Modern systems provide the opportunity to connect and disconnect loads during grid malfunctions.
A software and hardware complex for an experimental study
A software and hardware complex including an SM data processing system was designed and constructed. It was used for demand-side management (DSM) and for estimating DSM efficiency in several buildings of the Ural Federal University campus. The real efficiency of the system will have to be proven by several years of operating experience; nevertheless, two to three months of application already allow some observations about non-contractual DSM efficiency.
At the beginning of the experiment, the following parameters were detected and calculated by the system: up to 37% of customers were not aware of their electricity consumption profile, and only 25 to 40% of the potential for savings was being used.
Complete Heart Block in a Patient Undergoing Combination Immune Checkpoint Inhibitor Therapy
Combination immune checkpoint inhibitor (ICI) therapy is an emerging immunotherapy strategy for patients with solid tumor malignancies. Cardiotoxicity is a rare adverse effect of ICI therapy, most commonly presenting as acute myocarditis and, less frequently, as significant conduction abnormalities. We present a unique case of a 68-year-old female with urothelial cancer who developed shortness of breath and chest pain one week after receiving combination ICI therapy with ipilimumab and nivolumab. Biomarkers were elevated, including high-sensitivity troponin to 14,000 ng/L and creatine phosphokinase to 20,000 U/L. Due to suspicion of acute ICI-related myocarditis, a transthoracic echocardiogram (TTE) was obtained and demonstrated preserved ejection fraction (EF). Pulse-dose methylprednisolone therapy was initiated. However, the patient's clinical status continued to decline, and she developed bradycardia due to a complete heart block (CHB). This was initially treated with a dopamine infusion, but due to hypotension and hemodynamic instability, a transvenous pacemaker was placed. She continued to decline from a heart failure standpoint and developed acute hypoxic respiratory failure, requiring intubation due to pulmonary edema. A repeat TTE acquired three days following the initial echocardiogram demonstrated a newly reduced EF of 30%-35%. Additional anti-inflammatory agents were administered, including mycophenolate, infliximab, and anti-thymocyte globulin, with little improvement in clinical status. Unfortunately, she rapidly deteriorated, resulting in pulseless electrical activity (PEA) arrest and circulatory death. The autopsy revealed severe biventricular myocarditis with partial involvement of the atrioventricular node, consistent with her clinical syndrome of acute heart failure and CHB. A literature review demonstrated very few cases of ICI-related CHB. This case highlights a rare instance of atrioventricular dissociation in a patient with cardiotoxicity due to combination ICI therapy.
Introduction
Immune checkpoint inhibitor (ICI) therapy is used to treat several malignancies, including melanoma, renal cell carcinoma, non-small cell lung cancer, Hodgkin's lymphoma, head and neck cancers, gastrointestinal malignancies, genitourinary malignancies, and hepatocellular carcinoma [1]. There are seven ICIs approved for use by the United States Food and Drug Administration (FDA): ipilimumab, nivolumab, pembrolizumab, cemiplimab, avelumab, atezolizumab, and durvalumab [2]. Mechanistically, ICIs inhibit checkpoints that suppress the immune response, thereby enhancing the immune system's ability to destroy cancer cells. The specific checkpoints targeted by ICIs are cytotoxic T-lymphocyte antigen-4 (CTLA-4), programmed death-1 (PD-1), and programmed death-ligand 1 (PD-L1). Combination ICI therapy inhibits two checkpoints, rather than one, to further augment the immune response for targeted antitumor therapy. ICI-related cardiotoxicity is not a common adverse effect, with an incidence of 0.04% to 1.14%, but it has a high mortality rate of 25%-50% [2]. The risk factors for cardiotoxicity are not well understood; however, the risk is higher with combined therapy, with some studies showing an almost doubled rate of mortality [2-4]. Other risk factors possibly associated with an increased risk of cardiovascular events include female sex, African American race, and tobacco use [5]. Cardiotoxicity most commonly presents as myocarditis, whereas severe conduction abnormalities such as complete heart block (CHB) have historically been rare. Emerging data suggest that ICI myocarditis can present with new conduction blocks. This case describes a patient undergoing combination ICI therapy who developed myocarditis and complete atrioventricular dissociation.
Case Presentation
A 68-year-old female with a past medical history of stage IV urothelial cancer, coronary artery disease with a prior stent to the left anterior descending artery (LAD), alcoholic cirrhosis, chronic stage 3b kidney disease secondary to hypertensive nephrosclerosis and hepatorenal syndrome, hypertension, and non-insulin-dependent type 2 diabetes presented to an outside facility due to progressive shortness of breath and chest discomfort one week after receiving cycle one of combination ICI therapy with ipilimumab and nivolumab, in addition to sacituzumab govitecan. Vital signs were significant for a blood pressure of 220/112 mmHg, requiring a nitroglycerin infusion for the treatment of hypertensive urgency, and a heart rate of 112 beats per minute (bpm). She was found to have significantly elevated biomarkers, including high-sensitivity troponin (hs-Tn) of 14,000 ng/L (reference range: <14 ng/L) and creatine phosphokinase (CPK) of 20,000 U/L (reference range: 29-168 U/L). Other significant lab results included creatinine 1.59 mg/dL (reference range: 0.57-1.11 mg/dL), aspartate aminotransferase (AST) 983 U/L (reference range: 15-37 U/L), and alanine aminotransferase (ALT) 401 U/L (reference range: 13-56 U/L) (Table 1). An electrocardiogram (EKG) showed sinus tachycardia and no acute ischemic changes. A transthoracic echocardiogram (TTE) was obtained on hospital day two and revealed an ejection fraction (EF) of 60%-65% and no regional wall motion abnormalities. A left heart catheterization was performed given the elevated troponin and chest discomfort, which showed no obstructive coronary disease and a patent LAD stent. Given the elevated biomarkers and clinical symptoms of heart failure, there was high suspicion of ICI-related myocarditis, and high-dose methylprednisolone was administered. Despite therapy, hs-Tn continued to increase to 25,000 ng/L, creatinine rose further to 2.8 mg/dL, and AST/ALT increased to 791/397 U/L (Table 1). Due to the lack of improvement in troponin and multiorgan failure, mycophenolate was administered in addition to high-dose methylprednisolone as an additional anti-inflammatory agent. She then developed CHB (Figure 1) on hospital day four, initially asymptomatic and hemodynamically stable. However, the patient became hypotensive with a blood pressure of 85/43 mmHg (mean arterial pressure (MAP) of 57 mmHg), bradycardic at 32 bpm, and somnolent. This prompted the initiation of a dopamine infusion, resulting in improved blood pressure (MAP >65 mmHg) and clinical status. On hospital day five, she became hypotensive, bradycardic, and somnolent despite the dopamine infusion, requiring the placement of a transvenous pacemaker (TVP). The patient was then transferred to our tertiary care center for escalation of care.
FIGURE 1: Complete heart block
On arrival at our facility on hospital day six, she was hemodynamically stable with a blood pressure of 133/77 mmHg and a heart rate set to 70 bpm on the TVP (Figure 2), with SpO2 >95% on room air. At this time, CPK had risen to 3,777 U/L (reference range: 29-168 U/L), while hs-Tn had further increased to 43,992 ng/L (reference range: 0-59 ng/L), and B-type natriuretic peptide (BNP) was measured at 1,315 pg/mL (reference range: <100 pg/mL).
VIDEO 1: Echocardiogram demonstrating a reduced ejection fraction
View video here: https://youtu.be/NnhHaOKeTbQ. She subsequently decompensated further from a heart failure standpoint and developed significant pulmonary edema and acute hypoxic respiratory failure requiring endotracheal intubation. Due to worsening renal function and hyperkalemia, renal replacement therapy was also initiated. Infliximab was added as a third anti-inflammatory agent. A right heart catheterization was performed, which demonstrated elevated intracardiac filling pressures but a compensated cardiac index and output. An endomyocardial biopsy was obtained, which revealed active lymphocytic myocarditis with myocyte necrosis. Due to the worsening clinical status, the new vasopressor requirement (norepinephrine infusion), and the persistently elevated hs-Tn levels (22,701 ng/L), anti-thymocyte globulin (ATG) was administered as a fourth and final anti-inflammatory agent. Unfortunately, the patient's condition declined rapidly despite multiple lines of therapy, resulting in pulseless electrical activity (PEA) arrest and circulatory death on hospital day nine.
Discussion
To our knowledge, there are only 14 reported cases of CHB due to ICI therapy, the earliest being in 2018. In most cases, patients were treated with ICI monotherapy, while three received combination therapy. Of those receiving monotherapy, the majority improved with high-dose glucocorticoids. Clinical outcomes and mortality were worse in those receiving combination therapy; three of four patients died, as did our patient [6-8]. This is reflected in the literature, which differentiates mortality by combination versus ICI monotherapy, demonstrating a mortality rate of 65.6% versus 44.4%, respectively [4]. Another retrospective study revealed a threefold higher risk of myocarditis in those receiving combination therapy, with an incidence of 0.27% compared to 0.09% in those receiving monotherapy [5].
Myocarditis is a rare complication of ICI therapy, with an incidence of 0.04%-1.14%, often occurring within 30 days of the first or second cycle of therapy [2,4] and carrying a mortality rate as high as 50% [2,9]. It can present with various symptoms, including asymptomatically elevated biomarkers that represent myocardial injury (e.g., troponin, creatine kinase), chest pain, acute decompensated heart failure, and cardiogenic shock [9,10]. In a review by Mahmood et al., it was found that among those presenting with myocarditis, 94% had elevated troponin, 89% had an abnormal EKG, and 51% had a preserved EF (>50%) [11]. The degree of troponin elevation has been found to be a reasonable predictor of morbidity in these patients [2]. One retrospective multicenter study found that patients with ICI myocarditis who developed CHB were at a higher risk of all-cause mortality at 30 days than those who did not (48% vs. 22.1%, respectively) [12].
The mechanism of ICI myocarditis is unclear, but hypotheses include shared antigens between the targeted malignancy and myocardium, as well as T cells targeting similar or dissimilar muscle antigens [2]. Another proposed mechanism is increased immune-mediated activity, which allows for an exaggerated T-cell response and antigen recognition in non-target tissues, increasing circulating cytokines and the formation of autoantibodies in non-target tissues [4,9,11], leading to subsequent tissue inflammation. This was reflected in our patient, whose endomyocardial biopsy pathology revealed infiltration of T lymphocytes and macrophages. Similarly, in two other cases, pathology revealed T-cell infiltration [6,7]. Conduction system involvement is rare but is likely a result of myocarditis extending from the muscle to the electrical system. This was confirmed in our patient, whose autopsy revealed severe, extensive myocarditis in the bilateral ventricles and interventricular septum with partial involvement of the atrioventricular (AV) node, explaining her clinical syndrome of acute decompensated heart failure and CHB. In some cases, patients presented with sole conduction abnormalities without heart failure [13-16], with 75% improving with high-dose corticosteroid therapy.
Management of ICI myocarditis includes high-dose corticosteroid therapy, typically with methylprednisolone. However, in the setting of clinical deterioration, additional anti-inflammatory agents may be utilized. Given the predominantly T-cell-mediated inflammation, agents such as tacrolimus and ATG have also been utilized in refractory cases [9,17]. Tacrolimus predominantly suppresses T cells by inhibiting calcineurin, a key factor in T-cell activation. ATG is an immunosuppressive agent derived from the serum of rabbits or horses; in our patient's case, rabbit ATG was used. In this process, rabbits or horses are immunized with human thymocytes (immature T cells in the thymus), leading to the formation of antibodies to these thymocytes that can be isolated to create ATG [18]. This agent is particularly useful in ICI myocarditis given its ability to target T-cell-mediated inflammation, which aligns with our patient's autopsy findings of extensive myocarditis with T-cell infiltration. Additionally, as demonstrated in our patient, mycophenolate mofetil (MMF) is used in refractory cases as an adjunct to high-dose corticosteroids. MMF works by impairing DNA and RNA synthesis, which inhibits the proliferation of B and T cells, thereby suppressing the immune response and providing significant anti-inflammatory effects [19]. Although MMF has a slower onset of action, its use in conjunction with high-dose corticosteroids allows for rapid control of acute inflammation and long-term immune suppression, addressing both short- and long-term anti-inflammatory needs. Our patient likely did not improve despite multiple therapies due to significant and rapidly progressive myocardial involvement, as noted in the autopsy. Thus, early recognition may be vital in mitigating the high rates of mortality in ICI-related myocarditis.
Given the high mortality of ICI-related myocarditis, further research is imperative to understand which patients are at the highest risk, as this knowledge could guide therapy selection, closer monitoring, and early intervention if myocarditis is suspected. Combination therapy is a significant risk factor for more severe disease and higher mortality [4], with additional risk factors including female gender, African American race, and tobacco use [5]. Given the variety in clinical presentation and disease severity, it is important to maintain a high degree of clinical suspicion in patients receiving these therapies.
Conclusions
Fulminant myocarditis is a well-described complication of ICI therapy in the literature. However, CHB is a less frequently documented adverse outcome related to cardiotoxicity. Our case involves a patient with urothelial carcinoma who was treated with combination ICI therapy using ipilimumab and nivolumab. This case highlights the well-known risk factor of severe myocarditis associated with ICI therapy, given the combined ICI approach and subsequent severe cardiac manifestations, including acute decompensated heart failure and CHB. Management involves cessation of ICI therapy, high-dose corticosteroids, and potentially additional immunosuppressive agents. TVP placement is indicated in cases of CHB in patients who do not improve with anti-inflammatory therapy and are hemodynamically unstable. Given the various clinical presentations and high mortality rate of ICI-related myocarditis, especially in those with concomitant CHB, it is imperative to maintain a high index of clinical suspicion in at-risk patients to facilitate early diagnosis and prompt treatment.
Red vision in animals is broadly associated with lighting environment but not types of visual task
Abstract Red sensitivity is the exception rather than the norm in most animal groups. Among species with red sensitivity, there is substantial variation in the peak wavelength sensitivity (λmax) of the long wavelength sensitive (LWS) photoreceptor. It is unclear whether this variation can be explained by visual tuning to the light environment or to visual tasks such as signalling or foraging. Here, we examine long wavelength sensitivity across a broad range of taxa showing diversity in LWS photoreceptor λmax: insects, crustaceans, arachnids, amphibians, reptiles, fish, sharks and rays. We collated a list of 161 species with physiological evidence for a photoreceptor sensitive to red wavelengths (i.e. λmax ≥ 550 nm) and for each species documented abiotic and biotic factors that may be associated with peak sensitivity of the LWS photoreceptor. We found evidence supporting visual tuning to the light environment: terrestrial species had longer λmax than aquatic species, and of these, species from turbid shallow waters had longer λmax than those from clear or deep waters. Of the terrestrial species, diurnal species had longer λmax than nocturnal species, but we did not detect any differences across terrestrial habitats (closed, intermediate or open). We found no association with proxies for visual tasks such as having red morphological features or utilising flowers or coral reefs. These results support the emerging consensus that, in general, visual systems are broadly adapted to the lighting environment and diverse visual tasks. Links between visual systems and specific visual tasks are commonly reported, but these likely vary among species and do not lead to general patterns across species.
| INTRODUCTION
Visual sensitivity to long wavelengths (>600 nm), or 'red' sensitivity, is relatively rare across the animal kingdom, and we have little understanding of the factors favouring its evolution (but see Mollon, 1989; Murphy & Westerman, 2022b; Osorio & Vorobyev, 2005, 2008).
Birds and reptiles commonly have a long wavelength sensitive (LWS) photoreceptor, but for other animal groups, including insects, fish and mammals, a LWS photoreceptor is uncommon (Kelber et al., 2003; Osorio & Vorobyev, 2005). Among the species with a LWS photoreceptor, there is substantial variation in the peak sensitivity of these photoreceptors (e.g. insects; van der Kooi et al., 2021). Often, the peak sensitivity (λmax) of the photoreceptor is below 600 nm; however, sensitivity extends to wavelengths beyond the peak absorbance value. For example, the human red photoreceptor peaks at 562 nm and still has 30% relative absorbance at 635 nm (Bowmaker & Dartnall, 1980). Most animals have an LWS photoreceptor with λmax below 600 nm; however, some insects, fish, reptiles and amphibians have LWS photoreceptors with λmax beyond 600 nm (Escobar-Camacho et al., 2020; Liebman & Entine, 1968; Martin et al., 2015; van der Kooi et al., 2021), including a butterfly with a LWS photoreceptor peaking at 660 nm (Ogawa et al., 2013).
Among photoreceptor types, the LWS photoreceptor shows the greatest range of peak sensitivity across taxa; yet we have limited understanding of broad scale abiotic and biotic factors that may explain this variation.
Variation in red sensitivity among species can be produced via different mechanisms. Light detection is achieved via a visual pigment, which is an opsin protein coupled to a light-sensitive retinal-based chromophore. Changes to the opsin sequence or chromophore type can shift peak sensitivity. For example, using an A2 rather than an A1 chromophore is one of the main mechanisms to shift LWS sensitivity to longer wavelengths in aquatic vertebrates (Carleton et al., 2008; Enright et al., 2015; Martin et al., 2015). In fact, many fish use both A1 and A2 chromophores and modulate the ratio in response to environmental conditions (reviewed in Corbo, 2021). Shifts to longer wavelengths via opsin or chromophore modification reduce the chromophore activation energy and increase the susceptibility to activation by heat (Ala-Laurila et al., 2004; Barlow, 1957; Luo et al., 2011). This reduces the signal to noise ratio, which can be particularly problematic in low light conditions when the signal is low, and is thought to restrict the upper limit of visual pigment absorption (Cronin et al., 2014). Another mechanism to shift spectral sensitivity to wavelengths longer than the peak absorbance of the opsin is the use of filtering or screening pigments. This mechanism is commonly observed in insects to produce photoreceptors with peak sensitivity greater than 600 nm (Ogawa et al., 2013; Satoh et al., 2017; Wakakuwa et al., 2004). By narrowing the spectral sensitivity of the photoreceptor, filtering pigments decrease the absolute sensitivity.
In this study, we are interested in sensitivity to red wavelengths (i.e. ≥600 nm) regardless of the mechanism. Thus, we use 'LWS' to refer to a photoreceptor with λmax ≥ 550 nm, because this would provide sensitivity to red wavelengths.
The primary hypothesis to explain variation in photoreceptor sensitivities is that they are tuned to the light environment. In aquatic environments, the selective transmission of blue light in deep, clear oceanic waters corresponds to blue-shifted visual sensitivity in fish (Denton & Warren, 1956; Douglas & Partridge, 2011; Schweikert et al., 2019) as well as other organisms living in deeper waters (e.g. crustaceans; Frank et al., 2012; Marshall, Cronin, & Frank, 2003).
Water strongly absorbs red wavelengths, and red light is primarily present only in shallow waters (<10 m; Bowling et al., 1986; Marshall, Jennings, et al., 2003; Warrant & Johnsen, 2013). In turbid waters, the suspended particles increase the proportion of red light by more strongly attenuating shorter wavelengths than longer wavelengths of light (Loew & McFarland, 1990; Lythgoe, 1972) (Figure 1). This is associated with red-shifted vision in many fish (Carleton et al., 2005; Corbo, 2021). Within terrestrial environments, small gaps in canopy cover result in higher amounts of red-shifted light than open or closed environments because these gaps contain lower proportions of blue light scattered by the atmosphere and higher proportions of direct sunlight (Endler, 1993). A moonless night is around 100 million times darker than a bright sunny day, dramatically reducing the signal to noise ratio in the visual pathway (Kelber & Roth, 2006; Osorio & Vorobyev, 2005; Warrant & Johnsen, 2013). Therefore, thermal noise will be more important in dim light (Osorio & Vorobyev, 2005) and will have a greater impact on longer wavelength photoreceptors because as λmax increases the energy barrier for thermal isomerisation falls (Ala-Laurila et al., 2004; Luo et al., 2011). Based on characteristics of the light environment, we might therefore expect longer wavelength sensitivity in terrestrial than aquatic environments, in turbid than clear waters, in intermediate than open or closed canopy environments, and in diurnal than nocturnal species. Evidence for predicted relationships between visual sensitivities and light environment is mixed (Briscoe & Chittka, 2001; Fleishman et al., 1997; Lind et al., 2017; Loew & McFarland, 1990; Partridge, 1989).
However, a comprehensive recent study found that terrestrial species have longer wavelength sensitivity than aquatic species and, of the terrestrial species, those occupying closed canopy habitats have longer wavelength sensitivity than those from open habitats (if generalist species are excluded; Murphy & Westerman, 2022b). This study examined photoreceptors with the shortest and longest wavelength sensitivity in each species, irrespective of photoreceptor type, so the longest λmax for the majority of species corresponds to a photoreceptor with little sensitivity to red wavelengths. Whether general relationships exist between red sensitivity (LWS peak sensitivity) and habitat light remains an open question.
A second hypothesis to explain variation in visual sensitivities is that they are tuned to certain types of visual task, such as foraging or signalling. Numerous animals use red sexual signals during mate choice (Amundsen & Forsgren, 2001; Belliure et al., 2018; Hill, 2006; Kwiatkowski & Sullivan, 2002) and shifts in long wavelength sensitivity may improve discrimination of these signals (Carleton et al., 2005; Stieb et al., 2023). Within some animal groups, long wavelength sensitivity has been shown to improve discrimination of objects, such as fruit, flowers or conspecifics, from the background (Mollon, 1989; Stieb et al., 2023; Sumner & Mollon, 2000; Wang et al., 2022), and for Papilio aegeus butterflies, a LWS photoreceptor can also improve identification of young leaves suitable for oviposition from old, unsuitable leaves (Kelber, 1999). Both Kelber's (1999) and Wang et al.'s (2022) studies find that a red photoreceptor can help to distinguish colours other than red if there is variation in reflectance at long wavelengths. In addition, colour vision models suggest that shifting LWS peak sensitivity to longer wavelengths can improve discrimination of resources or mates (Stieb et al., 2023; Wang et al., 2022), but improvements may be small (Lind et al., 2017). Most examples linking LWS photoreceptor peak sensitivity with visual task concern specific species, and it remains unclear whether general patterns exist between broad categories of visual tasks, such as foraging or signalling, and LWS photoreceptor sensitivity.
We investigated visual tuning of the LWS photoreceptor across a wide range of taxa that show variation in the presence and peak sensitivity of the LWS photoreceptor. This included insects, crustaceans, arachnids, amphibians, reptiles, fish, sharks and rays.
Birds and mammals were excluded a priori because these groups show limited variation in LWS sensitivity. We collated a list of species with physiological evidence for a LWS photoreceptor sensitive to red wavelengths (i.e. λmax ≥ 550 nm). This list substantially expands the species with λmax ≥ 550 nm independently compiled by Murphy and Westerman (2022b). For each species, we recorded abiotic and biotic factors that may be associated with increased peak sensitivity of the LWS photoreceptor. Specifically, we tested four predictions based on visual tuning to the light environment: (1) terrestrial species will have higher λmax than aquatic species; (2) within aquatic species, those living in turbid, shallow waters will have higher λmax; (3) within terrestrial species, those living in habitats with intermediate levels of canopy cover will have greater λmax than those associated with closed or open habitats; and (4) diurnal terrestrial animals will have greater λmax than nocturnal species. To explore whether visual systems may be tuned to general visual tasks, we examined whether λmax is associated with proxies for signalling and foraging tasks. Specifically, we tested whether λmax is higher for species that have red morphological features, are sexually dichromatic (proxies for interspecific and/or intraspecific signalling) or are associated with flowers or coral reefs (related to foraging ecology). Together these results provide insight into the evolution and function of red sensitivity.
| Literature search
We focused on animal groups with known variation in the presence and peak sensitivity of the long wavelength photoreceptor: insects, crustaceans, arachnids, amphibians, reptiles, fish, sharks and rays.
Birds and mammalian species were excluded a priori because these groups show limited variation in long wavelength sensitivity; birds commonly have a long wavelength photoreceptor but peak sensitivity is similar among species (λmax approximately 560 to 570 nm; Hart, 2001; Hart & Hunt, 2007), and mammals generally lack long wavelength photoreceptors, although there are some exceptions (e.g. some primates; Jacobs, 2009; Osorio & Vorobyev, 2005). For this study, we defined a LWS photoreceptor as one with a peak sensitivity (λmax) ≥ 550 nm because this provides sensitivity to red wavelengths (>600 nm).
We filtered the search to only those under the zoology and ecology categories. After removing duplicates, additional exclusion criteria were applied. We included only studies where λmax was recorded through electroretinogram (ERG), microspectrophotometry (MSP), intracellular recording or partial bleaching. If multiple studies were identified for a species and the λmax differed between these studies, the most current and/or rigorous was recorded (i.e. preferentially intracellular recording, followed by ERG and MSP, or those with larger sample sizes). This search was completed in January 2022 and produced 34 articles identifying an additional 79 species with a LWS photoreceptor.
| Data extraction and processing
For each species identified, we recorded: λmax of all photoreceptors; the method used to measure λmax; species name (including population or sub-group information where applicable) and higher classification information; and life stage (adult/immature) and sex. Several data points required some additional processing to determine the λmax of the LWS photoreceptor. First, where λmax was given as a range, we recorded the mean of the range. Second, we identified four fish species with different λmax among populations found in different habitats. To account for these duplications, a random effect of species was included in the statistical models. Third, recordings that were from juvenile or immature life stages were removed (n = 18) and duplications due to recordings from both males and females were removed (LWS λmax was the same for both sexes).
Finally, in one fish, λmax values were reported based on the A1/A2 chromophore ratios, which correspond to different visual sensitivities (Escobar-Camacho et al., 2019). Many fish use a mix of A1 and A2 chromophores and can change this ratio in response to environmental conditions (Carleton et al., 2008; Enright et al., 2015).
Therefore, for this record we calculated a single λmax by multiplying the ratio of each pigment (A1/A2) by its corresponding λmax and summing these values.
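A small sketch of that weighted-average calculation; the ratio and λmax values below are illustrative, not those reported for the species in question.

```python
def combined_lambda_max(ratio_a1: float, lmax_a1: float, lmax_a2: float) -> float:
    """Weight each pigment's peak sensitivity (nm) by its chromophore
    proportion and sum, giving a single effective lambda-max."""
    ratio_a2 = 1.0 - ratio_a1
    return ratio_a1 * lmax_a1 + ratio_a2 * lmax_a2

# e.g. a retina using 60% A1 (peak 560 nm) and 40% A2 (peak 620 nm)
print(combined_lambda_max(0.6, 560.0, 620.0))  # 584.0 nm
```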
To obtain information about habitat, morphology and behaviour, we searched the primary literature and online databases. For all species, we recorded broad-scale habitat (terrestrial, aquatic or semi-aquatic) and sub-habitat (terrestrial: open, intermediate or closed; aquatic: shallow-turbid, shallow-clear or deep). For terrestrial species, open habitat consisted of grasslands, scrublands, canopy-dwelling species or habitats with few to no trees; closed habitats were dense forests such as rainforests; and intermediate habitats included generalist species or open forests. For aquatic species, those regularly found at depths >500 m were classified as deep, whereas those closer to the surface (<500 m) were shallow. Water turbidity was assigned based on recent descriptions of water clarity for species found in specific water bodies. For species with a larger range, turbid water was assigned to animals inhabiting rivers, lakes, ponds and estuaries, and clear water was assigned to open marine environments. We also documented activity period (diurnal or nocturnal) for terrestrial species. To examine spectral tuning to visual tasks, we recorded morphological characteristics including the presence of red colouration and sexual dimorphism in hue and/or colour intensity (human-visible colours) using primary literature and photographs. For terrestrial species, we documented flower association (present or absent), based upon whether a species was a pollinator, a florivore or documented as a flower-visiting species. For aquatic species, we documented coral or reef association (present or absent), based upon information reported by FishBase and photo documentation.
Information obtained through primary literature or trusted databases (i.e. Animal Diversity Web (ADW), World Register of Marine Species (WoRMS) and FishBase) was prioritised over photographic information. If relevant information could not be found, that data point was excluded from the relevant analyses.
| Statistical analysis
We used linear mixed models (LMM) to determine the relationship between the λmax of the long wavelength photoreceptor and habitat, morphology and behaviour. In such analyses, it is important to account for phylogenetic relationships to prevent pseudo-replication.
Given the extremely broad and patchy phylogenetic distribution of taxa in this study, branch length information for a derived phylogeny (e.g. from Open Tree of Life, tree.opentreeoflife.org) is inaccurate. In our study, phylogenetic pseudo-replication is largely due to multiple closely related species within families, whereas most of the families are distantly related. Thus, phylogenetic non-independence can be accounted for by including 'family' as a random effect. This accounts for non-independence of species within the same family and assumes that evolutionary origins of wavelength sensitivity are largely independent among families, which is reasonable given the phylogenetic sampling within our dataset. For all models, the peak absorbance of the LWS photoreceptor (λmax) was the dependent variable, and for models involving aquatic organisms 'species' was also included as a random effect to account for multiple populations with different λmax. All analyses were conducted in R version 4.1.2 (R Core Team, 2022) using the lme4 package (Bates et al., 2015), and significance of each predictor variable was assessed using marginal hypothesis tests, implemented using the Anova command from the car package (Fox & Weisberg, 2019).
We ran four LMMs on different subsets of the dataset. The first model was conducted using the full dataset and included broad-scale habitat (terrestrial/aquatic/semi-aquatic), sexual dimorphism in body colouration hue (present/absent), sexual dimorphism in body colouration intensity (present/absent) and presence of a red morphological feature (present/absent). The second model included only aquatic species, and the fixed effects were sub-habitat type (shallow turbid/shallow clear/deep) and coral reef association (present/absent). The third model included only terrestrial species, and the fixed effects were terrestrial sub-habitat (open/intermediate/closed) and activity time (diurnal/nocturnal). We also included an interaction between sub-habitat and activity time because sub-habitat may only influence peak sensitivity for diurnal animals. Only insects exhibited substantial variation in flower association; thus, the fourth model investigated flower association in insects and included flower association (present/absent) as a fixed effect. For some categories, we identified fewer than 10 records of species with a LWS photoreceptor. We ran these models with and without those groups to assess consistency of results. Specifically, model 1 (all data) was run with and without semi-aquatic species (n = 7), model 2 (aquatic species) was run with and without deep sea species (n = 8), and model 3 (terrestrial species) was run with and without species from closed habitats (n = 7). For all analyses, results were qualitatively similar, so we report results of the full models.
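The models themselves were fitted in R with lme4 and car, as described above. Purely as an illustration of the random-intercept structure, an analogous model can be sketched in Python with statsmodels; the data frame and column names here are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: one row per species/population measurement.
df = pd.DataFrame({
    "lmax":    [565, 580, 610, 595, 556, 600, 572, 640],
    "habitat": ["aquatic", "aquatic", "terrestrial", "terrestrial",
                "aquatic", "terrestrial", "aquatic", "terrestrial"],
    "family":  ["Cichlidae", "Cichlidae", "Nymphalidae", "Nymphalidae",
                "Mysidae", "Agamidae", "Mysidae", "Papilionidae"],
})

# Random intercept for family accounts for non-independence of
# closely related species, mirroring the paper's family random effect.
model = smf.mixedlm("lmax ~ C(habitat)", df, groups=df["family"])
print(model.fit().summary())
```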
| Spectral tuning to the light environment
Of the species included in the analysis, there were 78 terrestrial, 67 aquatic and seven semi-aquatic records of species or populations with a LWS photoreceptor. We found that terrestrial species had a mean peak sensitivity of 592 nm (95% CI = 584, 600 nm), approximately 18 nm higher than aquatic species (574 nm, 95% CI = 565, 584 nm; χ² = 9.49, p = .009; Figure 2; Table 1). Semi-aquatic species did not differ significantly from either group (Figure 2; Table 1). Of the aquatic species with a LWS photoreceptor, we found 35 from shallow clear waters, 24 from shallow turbid waters and eight from deep waters. Aquatic species living in turbid environments had the highest mean λmax at 578 nm (95% CI = 569, 587 nm), 12 nm longer than those living in clear water (566 nm, 95% CI = 559, 572 nm) and 16 nm longer than those living in deep water (558 nm, 95% CI = 540, 577 nm; χ² = 10.28, p = .006; Figure 2, Table 1).
Within terrestrial species, 67 species were diurnal compared to just 11 nocturnal species, of which the majority were moths or beetles. Diurnal species had a mean peak sensitivity of 595 nm (95% CI = 584, 606 nm), 20 nm longer than the nocturnal group with peak sensitivity averaging 575 nm (95% CI = 558, 593 nm; χ² = 4.83, p = .028; Figure 2; Table 1). Most terrestrial animals with a long wavelength photoreceptor were from open (n = 31) or intermediate habitats (n = 41), with only seven from closed habitats. There was no difference in LWS photoreceptor peak sensitivity among species from these different habitats (Table 1).
| Spectral tuning to visual tasks
Of the species included in the analysis, roughly half of the species or populations possessed red colouration (72 with red colouration and 80 without red colouration). Of the 145 records where colouration information for each sex was available, eight species were sexually dimorphic only in hue, 12 species were sexually dimorphic only in colour intensity, and 43 species were sexually dimorphic in both hue and intensity. We detected no difference in peak sensitivity of the long wavelength photoreceptor related to the presence of red colouration or sexual dimorphism in hue or intensity (Table 1).
Of the 67 aquatic species, 13 were associated with coral reefs, and this association did not correlate with peak sensitivity of the LWS photoreceptor (Table 1).
Of the 49 insect species with a LWS photoreceptor, 36 were associated with flowers and this association did not correlate with LWS λmax (Figure 2; Table 1). The 13 insect species that were not associated with flowers tended to be predatory insects or did not feed at all during their adult life stage.

| DISCUSSION

Red sensitivity is relatively uncommon among animals. Of those with a LWS photoreceptor there is substantial variation in its peak wavelength sensitivity (λmax), raising the question, why? We identified 164 species with a LWS photoreceptor (λmax ≥ 550 nm) across a range of taxa and found that several variables describing light environment were associated with peak sensitivity of the LWS photoreceptor. Specifically, terrestrial species had higher LWS λmax than aquatic species, diurnal terrestrial species had higher LWS λmax than nocturnal terrestrial species, and aquatic species in turbid shallow habitats had higher LWS λmax than those in clear or deep waters. Contrary to expectations (Endler, 1993; Murphy & Westerman, 2022b), peak sensitivity of the LWS photoreceptor was not higher for terrestrial species in intermediate habitats compared to open or closed habitats. We also found no evidence supporting visual tuning to visual tasks broadly related to signalling and foraging. These patterns align with an emerging consensus that visual systems are broadly tuned to the light environment and to perform diverse visual tasks. Visual systems may be tuned to perform specific visual tasks related to ecology or behaviour, but these are likely to be idiosyncratic (Lind et al., 2017; Osorio & Vorobyev, 2008), obscuring general patterns across species.
| Spectral tuning to the light environment
Variation in peak sensitivity of the LWS photoreceptor was associated with habitat, likely due to differences in the spectral composition of illumination across habitats. In most terrestrial environments long wavelengths of light are prevalent (Endler, 1993; Warrant & Johnsen, 2013), whereas in aquatic environments long wavelengths from sunlight are rapidly attenuated and very little red light remains beyond 10 m depth (Bowling et al., 1986; Marshall, Jennings, et al., 2003; Warrant & Johnsen, 2013). Our results reflect this difference in the presence of long wavelengths, with aquatic species having lower λmax of the LWS photoreceptor on average than terrestrial species. For aquatic species, the longest peak sensitivity was 614 nm, suggesting that further shifts in LWS sensitivity to longer wavelengths provide minimal improvements in discrimination or sensitivity. Several terrestrial species possessed LWS photoreceptors with peak sensitivity >615 nm, up to 660 nm. These LWS photoreceptors may function to improve discrimination of resources (e.g. oviposition sites, conspecifics, food; Kelber, 1999; Wang et al., 2022). The upper limit for terrestrial species may be due to limited improvements in discrimination of natural colours (Wang et al., 2022) and a poor signal to noise ratio due to the increasing susceptibility of the photoreceptor to activation by thermal noise (Koskelainen et al., 2000; Luo et al., 2011).
Within aquatic environments, peak sensitivity of the LWS receptor was greater for species from turbid water compared to those from shallow, clear water or deep water. Turbid waters tend to have a higher proportion of long wavelength light compared to clear water because the suspended particles attenuate shorter wavelengths (Jones et al., 2021; Sundarabalan et al., 2016). Numerous studies have documented that species inhabiting turbid waters tend to have red-shifted photoreceptors compared to species in clear waters (Carleton, 2009; Carleton et al., 2020; Corbo, 2021; Lythgoe et al., 1994; Nagloo et al., 2016) and many of these species achieve this by changing the chromophore used or the ratio of A1/A2 chromophores (Corbo, 2021). Our findings indicate that similar patterns occur across a broad range of aquatic species, including fish, crustaceans, sharks and rays. We also identified eight deep water species with a LWS photoreceptor, despite little downwelling light penetrating beyond 500 m deep (Lythgoe, 1988; Warrant & Johnsen, 2013). Three of these were mysid crustaceans, which often vertically migrate to shallow waters to feed at night. The LWS photoreceptor may improve discrimination in shallow waters or assist in habitat choice, but this remains to be tested. The remaining five species were stomiid dragon fish that all have red bioluminescence (Bowmaker et al., 1988; Crescitelli, 1989; O'Day & Fernandez, 1974; Partridge & Douglas, 1995). Red bioluminescence is relatively uncommon and likely provides them with a 'secret signalling system' that potential predators cannot see (Douglas et al., 1999).
Many deep-water species have visual systems specialised to detect bioluminescence (Frank et al., 2012, 2016; Turner et al., 2009; Warrant & Locket, 2004), providing an example of visual capabilities tuned to visual tasks such as intraspecific signalling or prey detection.

TABLE 1  Statistical results for each model testing the association between predictor variables and peak sensitivity of the long wavelength photoreceptor. Marginal hypothesis tests were conducted to test the significance of each predictor in each model; p-values in bold indicate values < .05.
Unlike the trends across aquatic habitats, we detected no difference in peak sensitivity of the LWS photoreceptor across terrestrial habitats. This result differs from predictions that long wavelength sensitivity may be beneficial in forests with intermediate canopy cover because illumination is red shifted compared to open habitats (Endler, 1993). A recent review supported this hypothesis, finding that specialist species from closed or intermediate habitats have sensitivity to longer wavelengths of light than specialist species from open habitats (Murphy & Westerman, 2022b). Our results may differ due to the species included in our analysis and our focus on only the LWS photoreceptor, rather than maximum sensitivity of any photoreceptor. We identified 82 terrestrial species with an LWS photoreceptor, excluding birds and mammals, whereas Murphy and Westerman identified 27 terrestrial species with a LWS photoreceptor (λmax ≥ 550 nm), including 13 birds and four mammals. The terrestrial species in our dataset were predominantly insects and reptiles, and the results align with previous studies finding no correlation between spectral sensitivity and the terrestrial photic environment in these groups (Briscoe & Chittka, 2001; Fleishman et al., 1997). We identified relatively few nocturnal terrestrial species with a LWS receptor (n = 11); however, these species tended to have shorter LWS λmax than diurnal species. This shift to shorter wavelengths may improve the signal to noise ratio in low light conditions. The impact of thermal noise is greater for photoreceptors with higher λmax, due to the activation energy threshold decreasing with increasing λmax (Ala-Laurila et al., 2004; Barlow, 1957; Luo et al., 2011). Several studies have documented that nocturnal species have LWS photoreceptors with λmax shifted to shorter wavelengths compared to diurnal species (Eguchi et al., 1982; Ellingson et al., 1995; Hart & Vorobyev, 2005; Potier et al., 2020), and our results support these findings.
| Spectral tuning to visual tasks
Parameters related to the colour of resources or conspecifics were not associated with the peak sensitivity of the LWS photoreceptor.
Species associated with flowers or coral reefs, those with red colouration or those with sexual dimorphism in body colour or intensity did not possess sensitivity to longer wavelengths compared to species without these characteristics. This result is consistent with previous work suggesting that long wavelength photoreceptors have not evolved in response to signals but instead may have evolved in response to common colours within the background (Osorio & Vorobyev, 2005; Stieb et al., 2023; Sumner & Mollon, 2000; Surridge et al., 2003). For example, the LWS photoreceptor of terrestrial animals may be tuned to the reflectance of foliage (Lythgoe, 1979; Osorio & Vorobyev, 2005). In this case, a long wavelength photoreceptor could function to detect variation among leaves (e.g. species of plant, young vs. old leaves; Kelber, 1999; Lythgoe, 1979), identify resources against a foliage background (e.g. flowers, fruit, conspecifics; Sumner & Mollon, 2000; Wang et al., 2022) and detect differences between leaves and other natural objects (e.g. bark, soil, dead vegetation; Osorio & Bossomaier, 1992; Osorio & Vorobyev, 2005).
Furthermore, due to the diverse functions a visual system must perform beyond finding resources or conspecifics, it is perhaps unlikely to find associations between peak sensitivity and one visual task (Lind et al., 2017).
Despite finding no relationship between peak sensitivity and ecological variables, there are several species within our dataset that exhibit behaviours associated with long wavelength vision. Many animals show female preference for red colouration during mate choice, including the red dewlap colouration in Anolis carolinensis lizards (Sigmund, 1983) or the orange-red fin and body colouration in mollies (Poecilia spp.; Endler, 1984; Houde, 1987). Similarly, in many species males display red colouration around the breeding season or at sexual maturation (e.g. Bakker & Mundwiler, 1994; Meinertzhagen et al., 1983; Vranken et al., 2020). Long wavelength sensitivity in butterflies may also improve discrimination of courtship signals (Ogawa et al., 2013) and has been shown to improve discrimination of young and old leaves for selection of oviposition sites (Kelber, 1999). In beetles, long wavelength sensitivity is relatively rare, but is present in some families (van der Kooi et al., 2021) and is more common in species with flower associations (Sharkey et al., 2021). These examples highlight that long wavelength sensitivity is important for specific tasks, even if tuning of peak sensitivity is not correlated with ecological variables more broadly.

Furthermore, lighting environment may favour the presence, if not the tuning, of red sensitivity: we identified more species with a LWS photoreceptor from intermediate or closed habitats (n = 47) compared to open habitats (n = 31), and Murphy and Westerman (2022a) indicate that 57% of species from closed or intermediate habitats have an LWS photoreceptor (λmax ≥ 550 nm) compared to only 27% of species from open habitats. Taken together, these results suggest that a LWS photoreceptor may be beneficial in intermediate or closed terrestrial habitats but there is limited evidence for consistent correlations between LWS peak sensitivity and habitat for terrestrial animals.

For many species, we still have limited understanding of how they process visual information, whether the LWS photoreceptor contributes to colour vision or how vision guides behaviours. Further investigations into the colour vision, behaviour and ecology of species with a LWS photoreceptor will likely provide new insights into the function of red sensitivity.

FIGURE 1  An illustrative example of the spectral properties of light in different aquatic environments. (a) A blue-light-dominated environment, such as shallow, clear reefs (Image: Lars Behnke). (b) A red-light-dominated environment, such as turbid waters (Image: Tom Tetzner, U.S. Fish and Wildlife Service). (c) A low-illumination, deep-water environment (note that the animal is illuminated by ROV lights; Image: NOAA Office of Ocean Exploration).

FIGURE 2  Effect of lighting environment and visual tasks on peak sensitivity (λmax) of the long wavelength sensitive (LWS) photoreceptor. (a) Terrestrial species have longer λmax than aquatic species. (b) Species from shallow, turbid water have longer λmax than species from shallow clear water or deep water. (c) Diurnally active species have longer λmax than nocturnally active species. (d) Flower association did not influence λmax of the LWS photoreceptor. Small grey points represent λmax from each species, and large red points represent estimated marginal means ± 95% confidence intervals.

AUTHOR CONTRIBUTIONS  Bryony M. Margetts: Conceptualization (equal); data curation (equal); formal analysis (lead); investigation (lead); methodology (equal); software (equal); validation (supporting); visualization (equal); writing – original draft (lead); writing – review and editing (equal). Devi Stuart-Fox: Conceptualization (equal); formal analysis (supporting); investigation (supporting); methodology (equal); …
PDGF-BB serum levels are decreased in adult onset Pompe patients
Adult onset Pompe disease is a genetic disorder characterized by slowly progressive skeletal and respiratory muscle weakness. Symptomatic patients are treated with enzymatic replacement therapy with human recombinant alfa glucosidase. Motor functional tests and spirometry are commonly used to follow patients up. However, a serological biomarker that correlates with the progression of the disease could improve follow-up. We studied serum concentrations of TGFβ, PDGF-BB, PDGF-AA and CTGF growth factors in 37 adult onset Pompe patients and 45 controls. Moreover, all patients performed several muscle function tests, conventional spirometry, and quantitative muscle MRI using 3-point Dixon. We observed a statistically significant change in the serum concentration of each growth factor in patients compared to controls. However, only PDGF-BB levels were able to differentiate between asymptomatic and symptomatic patients, suggesting its potential role in the follow-up of asymptomatic patients. Moreover, our results point to a dysregulation of muscle regeneration as an additional pathomechanism of Pompe disease.
changes in motor performance 13. For these reasons, having a serum growth factor able to identify patients in whom fibro-fatty substitution has begun would be of great utility 14,15.
The main aim of our research was to study the serum concentration of a group of growth factors related to muscle fibrosis, degeneration and inflammation, in a cohort of 37 symptomatic and asymptomatic AOPD patients. We compared the serum concentration of Pompe patients with a control group. We also studied whether there were differences in the serum concentration of these growth factors between symptomatic and asymptomatic patients. In parallel, we evaluated the patients using several motor function tests, spirometry, quantitative muscle MRI (qMRI), and patient-reported outcome measures (PROMs), in order to establish whether or not a correlation between serum concentration and the clinical situation of the patients exists.
Results
Description of the cohort. Thirty-seven AOPD patients were included in the study. Twenty-nine patients were symptomatic (18 women, 62.1%) and eight were asymptomatic. Twenty-three of the 29 symptomatic patients were already receiving ERT when the first sample was obtained. In the remaining six symptomatic patients, blood samples were obtained before ERT was started. Asymptomatic patients were studied in neuromuscular disorder units because high levels of hepatic enzymes or CK were found in random checkup blood analyses (5 cases) or because they had relatives already diagnosed with Pompe disease (3 cases). The demographic and clinical data of these two groups are described in Table 1. Results of the motor function tests and muscle MRI of the Pompe cohort have already been reported 15,16. We compared the ELISA results with serum obtained from age- and sex-matched controls (n = 45).
Growth factor serum levels in Pompe patients compared to controls. Our first aim was to study whether there were differences in growth factor serum levels between Pompe patients and controls. We observed significant differences in PDGF-BB, TGF-β, PDGF-AA and CTGF levels, as shown in Fig. 1.
PDGF-BB levels differentiate between symptomatic and asymptomatic Pompe patients. Our second aim was to assess whether any of the growth factors studied was able to differentiate between asymptomatic and symptomatic Pompe patients (Fig. 2). We observed that PDGF-BB levels were significantly lower in symptomatic patients (median 1.565 ng/ml, IQR 1.405-2.096) compared to asymptomatic Pompe patients (median 2.038 ng/ml, IQR 1.907-3.803) (Mann-Whitney U test, p = 0.044) (Fig. 2A). In contrast, we did not identify differences in TGF-β1, CTGF and PDGF-AA serum concentrations between symptomatic and asymptomatic patients.
As there were significant differences in age between symptomatic and asymptomatic Pompe patients (Table 1), we decided to add a new group of young controls with a mean age of 23 years. We observed significant differences in PDGF-BB serum levels between symptomatic Pompe patients and all other groups, including young controls (Mann-Whitney U test, p = 0.0012), all controls and asymptomatic Pompe patients. We did not observe significant differences between controls of different ages or between asymptomatic Pompe patients and young controls (Mann-Whitney U test, p > 0.05) (Fig. 3).
To further analyze whether PDGF-BB serum levels were useful for differentiating between symptomatic and asymptomatic patients, we used a receiver operating characteristic (ROC) curve and analyzed the area under the curve (AUC). The ROC curve (AUC: 0.737, p = 0.042, 95% CI: 0.539-0.935) (Fig. 4) confirmed that PDGF-BB levels were able to predict which patients were symptomatic and which were asymptomatic. Therefore, patients with values lower than the cut-off level (1.97 ng/ml) had a higher probability of being symptomatic. Sensitivity and specificity were 75% and 76%, respectively.
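A sketch of how such group comparisons and an ROC-derived cut-off can be computed; the PDGF-BB values below are synthetic stand-ins, not the study data, and the Youden index is one common way to pick an optimal threshold.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Synthetic PDGF-BB levels (ng/ml): symptomatic lower than asymptomatic.
symptomatic = rng.normal(1.6, 0.3, 29)
asymptomatic = rng.normal(2.3, 0.5, 8)

u, p = mannwhitneyu(symptomatic, asymptomatic)
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")

# Label 1 = symptomatic; score = -level so that LOWER serum values
# count as "more positive" for the symptomatic class.
levels = np.concatenate([symptomatic, asymptomatic])
labels = np.concatenate([np.ones(len(symptomatic)), np.zeros(len(asymptomatic))])
scores = -levels
print("AUC:", round(roc_auc_score(labels, scores), 3))

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                # Youden index J = sens + spec - 1
print("Cut-off (ng/ml):", round(-thresholds[best], 2))
```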
PDGF-BB levels decrease in Pompe patients but not in other muscular dystrophies. Since the function of PDGF-BB seems to be related to muscle regeneration, we analyzed serum levels of this growth factor in other muscular dystrophies in which regeneration increases, such as Duchenne muscular dystrophy (DMD), Becker muscular dystrophy (BMD), dysferlinopathy (DYSF) and facioscapulohumeral muscular dystrophy (FSH). Clinical and demographic features of these groups are described in Table 2.
Correlation between PDGF-BB serum levels and results of muscle function tests and quantitative muscle MRI.
We used the Spearman test to identify whether there were any correlations between PDGF-BB serum concentration and the results of the muscle function tests, spirometry, patient-reported outcomes and qMRI. As shown in Table 3, we did not find any significant correlation. However, we found a non-significant tendency between PDGF-BB levels and the 6MWT, the MRC score, the myometry score, MIP, and thigh fat fraction measured using 3-point Dixon MRI.
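For completeness, a Spearman correlation of this kind is a one-liner with SciPy; the vectors below are placeholders for per-patient PDGF-BB levels and 6MWT distances, not the study data.

```python
from scipy.stats import spearmanr

pdgf_bb = [1.4, 1.6, 1.9, 2.1, 2.4, 1.5]   # ng/ml, hypothetical
six_mwt = [310, 350, 420, 460, 510, 300]   # metres, hypothetical
rho, p = spearmanr(pdgf_bb, six_mwt)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```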
Discussion
In the present study, we found significant differences in the serum concentration of four growth factors related to the process of skeletal muscle degeneration and regeneration in AOPD patients compared to controls. However, only serum levels of PDGF-BB were significantly different when symptomatic patients were compared with asymptomatic patients. In fact, the diagnostic accuracy of the PDGF-BB concentration to distinguish between symptomatic and asymptomatic patients was assessed by ROC curves, determining an optimal cut-off value of 1.97 ng/ml. It is well known that chronic muscle damage leads to persistent inflammatory infiltration, muscle necrosis and activation of fibro/adipogenic progenitor (FAP) cells 17 , something that has been studied in dystrophic muscles. Eventually, muscle fibers are lost and substituted by fibro-adipose tissue 18 . Several growth factors have been related to this process, including those in the present study. TGF-β1 and PDGF-BB play an important role in satellite cell proliferation and fibrotic remodeling [19][20][21] . TGF-β1 is crucial in the initiation of fibrosis in skeletal muscle, while CTGF influences the fibrotic process by inducing the expression and release of collagen type 1 by activated fibroblasts [26][27][28] . Although the process of muscle degeneration has been well established in animal models of muscular dystrophies such as Duchenne muscular dystrophy, it is not yet completely known whether it happens in the same way in Pompe disease. However, radiological studies show that skeletal muscle is gradually lost and substituted by fat tissue in patients with adult-onset Pompe disease, mimicking what happens in patients with muscular dystrophies and suggesting a similar skeletal muscle degenerative process [29][30][31] . Based on this hypothesis, we decided to study the serum concentration of growth factors related to the process of muscle regeneration, degeneration and fibrosis. PDGF-BB, which is secreted by inflammatory cells and regenerating skeletal muscle fibers, has recently been related to the process of muscle regeneration through the activation of satellite cell proliferation and chemotaxis 32 . We observed lower levels of serum PDGF-BB in AOPD patients compared to controls. Moreover, PDGF-BB serum concentration was even lower in symptomatic patients, suggesting a correlation with disease progression. As PDGF-BB probably influences muscle regeneration, the lower levels found in AOPD patients might reflect an impaired regenerative response in Pompe disease, something that has also been suggested by other authors. Impaired satellite cell activation has been described in muscle samples from Pompe patients [33][34][35] . Moreover, serum levels of insulin-like growth factor 1 and myostatin, two molecules related to the process of satellite cell activation, are lower in the serum of Pompe patients compared to controls 36 .
We did not observe significant differences in serum levels of TGF-β1, PDGF-AA and CTGF in symptomatic compared to asymptomatic Pompe patients. These three factors have been related to the process of muscle fibrosis, as discussed earlier. The lower levels of TGF-β1 found in Pompe patients compared to controls support the idea that fibrosis is not a major issue in patients with Pompe disease. In fact, Palermo and collaborators 37 did not find an up-regulation of TGFB1 fibrosis-associated genes in skeletal muscle of Pompe patients, which supports our findings.
The lower levels of growth factors related to fibrosis and regeneration suggested by the current study could be explained by the lack of sarcolemmal damage in Pompe disease. In most muscular dystrophies in which the process of muscle degeneration and regeneration has been studied, muscle damage is produced by the instability of the skeletal muscle membrane. Membrane tears induce a series of responses, such as the release by muscle fibers of cytokines that recruit inflammatory cells and participate in the activation of satellite cells. Persistent inflammatory cells release profibrotic growth factors that lead to the expansion of fibrotic tissue. The process of muscle fiber degeneration in Pompe disease is probably different: there is no evidence of necrosis or inflammatory infiltration 38 . Rather than muscle membrane instability, lysosomal rupture has been proposed as the main mechanism leading to muscle fiber necrosis. Glycogen progressively accumulates in lysosomes, producing their rupture and the release of lytic enzymes into the sarcoplasm, probably activating the process of autophagy 38,39 . It is tempting to hypothesize that the local cell response is different in Pompe disease, with no recruitment of inflammatory cells, no activation of satellite cells and no release of profibrotic factors. The fact that PDGF-BB levels were higher in serum samples from patients with other muscular dystrophies, and lower in Pompe disease, supports this hypothesis. The identification of biomarkers useful for the follow-up of patients is considered one of the unmet needs in Pompe disease. ERT is administered to symptomatic patients, that is, patients with skeletal or respiratory muscle weakness. However, patients considered asymptomatic may already show mild motor disturbances, such as abnormal gait posture, due to the presence of mild axial involvement. Biomarkers capable of differentiating between symptomatic and asymptomatic patients could therefore be a useful tool in follow-up. As we observed significantly lower PDGF-BB levels in symptomatic than in asymptomatic Pompe patients, we suggest PDGF-BB could be useful to monitor progression of the disease and help identify patients in whom the process of muscle degeneration has started but does not yet affect muscle function. Other biomarkers previously proposed, such as urinary glucose tetrasaccharide (Glc4) levels 40,41 , have utility in the diagnosis of Pompe disease or in monitoring treatment, but not in differentiating between symptomatic and asymptomatic patients 42,43 .
To summarize, we have identified a group of four growth factors related to the process of muscle degeneration and regeneration that are differentially expressed in Pompe patients compared to controls. Interestingly, PDGF-BB levels were significantly different in symptomatic compared to asymptomatic patients. In our opinion, our results suggest that decreasing levels of PDGF-BB in asymptomatic patients should prompt us to tighten the follow-up of the patient, repeating muscle and respiratory function tests in order to consider starting ERT before muscle degeneration becomes irreversible.
Methods
Patients and study design.
This study is part of an ongoing prospective open-label study in which we are annually following up a group of symptomatic and asymptomatic AOPD patients at our center using muscle function tests, muscle MRI and blood analysis. The study has been registered at Clinicaltrials.gov (identifier NCT01914536). The present research was performed in accordance with Spanish regulations for clinical trials and studies and following the recommendations described in the Declaration of Helsinki. The study was approved by the Ethical Committee of Hospital de la Santa Creu i Sant Pau (HSCSP) in Barcelona. All participants signed an appropriate informed consent form.
The diagnosis of Pompe disease was based on the presence of two mutations in the GAA gene. In cases where a single mutation or no mutation was detected, diagnosis was based on reduced enzymatic activity in at least two tissues, lymphocytes and skeletal muscle being the most commonly studied tissues, as has recently been suggested by the European Pompe Consortium 10 . All patients were considered adult onset since none of them developed symptoms before the age of 18.
We defined a patient as symptomatic when we identified muscle weakness in the clinical examination using the Medical Research Council (MRC) score, or when seated Forced Vital Capacity (FVC) was lower than 85%. A total of 37 patients were included: 23 symptomatic patients treated with ERT, 6 symptomatic patients not treated with ERT and 8 asymptomatic patients. All treated patients received 20 mg/kg acid alpha-glucosidase intravenously every other week. Untreated symptomatic patients were seen before starting ERT. Clinical and genetic features of this group of patients have been previously published 15 . In summary, the mean age at baseline visit of the 23 symptomatic patients was 49.8 years. Nine of these patients used walking sticks or a wheelchair, two of them being fully wheelchair bound, and 12 patients used non-invasive ventilation at night. The mean age of the 6 untreated symptomatic patients was 37.4 years; only one patient in this group used a walking stick and one other patient required non-invasive ventilation at night. Eight presymptomatic AOPD patients were also included (mean age 21 years, 4 women). These patients were diagnosed with Pompe disease because they were relatives of patients with Pompe disease or because of the presence of high CK levels in blood samples. As controls we included 45 participants whose age and sex matched our Pompe cohort (mean age 48 years, 29 women), and 10 more controls whose age and sex matched the asymptomatic Pompe patients (mean age 23 years, 6 women). The controls were volunteers, most of them relatives or caregivers of our Pompe patients who kindly agreed to participate in the study. CK levels were normal in all controls (reference value for our laboratory: <174 U/L).
Growth factor identification.
Blood samples were collected at the baseline visit before motor function tests were performed. Blood was centrifuged at 1600 g for 9 minutes at 4 °C in order to separate the serum. The serum was aliquoted and stored at −80 °C until analysis.
Serum platelet-derived growth factor BB (PDGF-BB) and transforming growth factor β1 (TGF-β1) levels were measured using commercial enzyme-linked immunosorbent assay (ELISA) kits (R&D, Minneapolis, MN, USA), according to the manufacturer's instructions. A platelet-derived growth factor AA (PDGF-AA) human ELISA kit was provided by ThermoFisher (Thermo Fisher Scientific, Nepean, Canada) and a connective tissue growth factor (CTGF) kit by EIAAB Science Co (Wuhan, China). The minimum detectable cytokine concentrations for these assays were 1.7 pg/ml for TGF-β1, 15 pg/ml for PDGF-BB, 40 pg/ml for PDGF-AA and 0.18 ng/ml for CTGF. Samples were measured in duplicate and read on a Beckman Coulter AD 340 microplate reader (Beckman Coulter, Brea, CA, USA) with AD-LD software.
Muscle imaging. All patients were examined on a Philips Achieva XR 1.5 T scanner located at HSCSP. We used the same positioning protocol for all patients: supine position with legs stretched, the patella facing upward and the ankles in a neutral position. 3D 3-point Dixon images were acquired with the following acquisition parameters: TR = 5.78 ms, TE = 1.8/4 ms, flip angle = 15°, FOV = 520 × 340 × 300 mm, voxel size = 1 × 1 × 3 mm.
Muscle function tests.
Analysis of the 3-point Dixon MR images was performed using the PRIDE (Philips Research Image Development Environment) tool, as reported previously 15,42 . ROIs were manually drawn on five slices of the following muscles: rectus femoris, vastus intermedius, vastus lateralis, vastus medialis, adductor magnus, sartorius, gracilis, semitendinosus and semimembranosus, and on three slices of the biceps femoris long head, biceps femoris short head and adductor longus.
Data analysis. Non-parametric tests were used for the statistical analysis of the variables. The Mann-Whitney U test was used to investigate whether there were significant differences in variables between groups (symptomatic vs asymptomatic and control vs Pompe). We used Spearman's rank correlation (coefficient reported as ρ) to investigate any correlation between the serum concentration of growth factors and the results of the muscle function tests, spirometry, quality of life scales and the thigh fat fraction obtained using qMRI. As we ran multiple correlations, a Bonferroni correction was applied to avoid type 1 errors. Finally, a ROC curve was used to study whether PDGF-BB levels were able to differentiate between symptomatic and asymptomatic Pompe patients with high sensitivity and specificity. The results of all statistical tests were considered significant if p was lower than 0.05. Statistical analyses were performed using IBM SPSS Statistics software version 21.
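This non-parametric workflow can be sketched as follows; the values are synthetic stand-ins for the study measurements, and scipy's mannwhitneyu and spearmanr are used in place of the SPSS procedures:

```python
# Minimal sketch of the non-parametric analysis; all numbers are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(1)
symptomatic = rng.normal(1.6, 0.4, 29)       # PDGF-BB, ng/ml (synthetic)
asymptomatic = rng.normal(2.4, 0.7, 8)

# Group comparison (two-sided Mann-Whitney U test).
u, p_group = mannwhitneyu(symptomatic, asymptomatic, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_group:.3f}")

# Correlations against functional outcomes, Bonferroni-corrected.
outcomes = {"6MWT": rng.normal(400, 80, 29), "FVC": rng.normal(70, 15, 29)}
alpha = 0.05 / len(outcomes)                 # Bonferroni-adjusted threshold
for name, values in outcomes.items():
    rho, p = spearmanr(symptomatic, values)
    print(f"{name}: rho = {rho:.2f}, p = {p:.3f}, significant = {p < alpha}")
```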
Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
A survey on the development status and application prospects of knowledge graph in smart grids
With the advent of the electric power big data era, semantic interoperability and interconnection of power data have received extensive attention. Knowledge graph technology is a new method for describing the complex relationships between concepts and entities in the objective world, and it has attracted wide attention because of its robust knowledge inference ability. Especially with the proliferation of measurement devices and the exponential growth of electric power data, the electric power knowledge graph provides new opportunities to resolve the contradiction between massive power data resources and the continuously increasing demand for intelligent applications. In an attempt to fulfil the potential of knowledge graphs and deal with the various challenges faced, as well as to obtain insights for achieving business applications in smart grids, this work first presents a holistic study of knowledge-driven intelligent application integration. Specifically, a detailed overview of electric power knowledge mining is provided. Then, an overview of the knowledge graph in smart grids is introduced. Moreover, the architecture of the big knowledge graph platform for smart grids and its critical technologies are described. Furthermore, this paper comprehensively elaborates on the application prospects leveraged by knowledge graphs oriented to smart grids: power consumer service, decision-making in dispatching, and operation and maintenance of power equipment. Finally, issues and challenges are summarised.
INTRODUCTION
Conventional electric power systems can no longer meet the huge requirements of the information age, owing to the continuous improvement of the economy, increasing demand for electric power, public awareness of green energy, large-scale penetration of intermittent renewable energy, and the wide application of information and communication technologies [1]. To solve the energy crisis and environmental pollution, various renewable energy technologies have developed rapidly [2][3][4]. However, due to the randomness and intermittency of renewable energy sources, their large-scale application has placed great pressure on the safety and reliability of traditional power systems. Hence, smart grids (SGs) play a historic role in safely integrating renewable energy resources into a highly controllable grid to supplement the power supply, ensured by smart communications, sensors and measurement devices [5].
Section 6 extensively summarises the related issues and challenges. Finally, a conclusion is made in Section 7.
RESEARCH STATUS OF ELECTRIC POWER KNOWLEDGE MINING
Various previous research on knowledge mining related to electric power KG construction has been undertaken by both corporations and academia. These studies use ontology learning, text mining, semantic understanding, and social media analysis to excavate key information, express binary relations, achieve interoperability, and thereby support knowledge engineering for SGs and the intelligentisation of the electric power network.
In this section, a brief overview of existing electric power knowledge mining is provided. The objective of the review is to present the methods and developmental limitations of the existing studies of electric power knowledge mining and lay a foundation for research on KGs.
Ontology learning
Ontology refers to a formal, explicit specification of a shared conceptualisation of the real world. In other words, an ontology is a formal representation of a set of concepts and their relationships in a particular domain, which is suitable for semantic information representation and inference [29,30]. Meanwhile, a well-constructed ontology can facilitate machine-processable definitions and help develop knowledge-based information search and management systems more effectively and efficiently [31]. Ontology-based applications in SGs can be classified into two aspects, that is, power substations and energy systems.
Feng et al. investigated transformer temperature modelling based on ontology and agents to conduct thermal analysis [32]. Liao et al. [33] combined association rules with ontology to analyse substation alarm information, transforming structured and unstructured data into extensible markup language (XML) data that allowed logical reasoning and numerical calculation. For power system asset management, Yan et al. [34] introduced a kernel-based consensus clustering algorithm embedded with domain ontology to improve the document repository of power substations. Based on an ontology and semantic framework, Wang et al. compiled a power equipment ontology dictionary to extract defect components and attributes of power equipment [35].
In terms of ontology-based applications in energy systems, Gaha et al. [36] used the common information model (CIM) ontology to solve the problem of semantic conflict in electric power systems. A basic universal method of electric power system knowledge expression based on ontology and the semantic web was proposed in [37], which adapted to various knowledge expression requirements of electric power big data. With the advent of the electric power big data era, unstructured document extraction has become an important new issue for all energy utilities. Kumaravel et al. [38] utilised a multi-domain layered ontology model to excavate key information from unstructured documents and build a thermal power plant ontology, which could be extended to any industry by integrating appropriate domain ontologies. To obtain a general description of complex systems, Jirkovský et al. [39] used the semantic sensor network ontology to describe the cyber-physical system (CPS) from the perspective of components.
To detect security intrusions or attacks on the power Internet of Things and cloud (IoT-Cloud), Choi et al. [40] devised an ontology-based security context reasoning method to improve the security service. To orchestrate utility business processes, Ravikumar et al. [41] developed a CIM-based process ontology to model end-to-end process operations of power utilities. In renewable energy storage systems, Maffei et al. [42] added an ontology subsystem to achieve unified data formatting and provide semantics to the middleware.
To sum up, there are various studies of ontology applied in power substations as well as energy systems, and ontology learning is playing a prevalent and essential role in developing electric power knowledge models and expression. However, existing ontology learning methods are mainly based on single domains and keyword integration, thus lacking in-depth semantic analysis, which is insufficient and ineffective for the expanding knowledge base of SGs and the pressing demands of business services. In particular, the construction of each sub-domain ontology for grid scenarios is independent, and semantic heterogeneity exists throughout the electric power knowledge flow [38]. It is widely expected that many ontologies for different electric power domains and in different languages will be developed. Meanwhile, there are great challenges in interoperability and interconnection during the creation and maintenance of power ontology learning for SGs.
Power text data mining
Text mining, a part of data mining, can discover the underlying knowledge in textual data. In SGs, a large number of texts related to the operation and control of power grids are accumulated, including trouble and defect records, operating tickets, logs of operation and maintenance and so on. Text mining [43][44][45] plays a vital role in extracting critical information from electric power texts, such as equipment name, equipment type, location, and the logical connections of equipment. In particular, Chinese texts are typically obscure, ambiguous, and hard to segment [46].
Text data mining provides new and essential insights for asset management decision makers [47], condition assessment [48], deep analysis of power equipment defects [35,49], and analysis of power customer appeals [50]. Xie et al. used hidden Markov model (HMM)-based text preprocessing to extract key information from fault and defect elimination record texts to assess the operating condition of distribution transformers, combined with typical power-off tests and live-line detection results [48]. To identify the causes of transformer failure, Ravi et al. studied 393 terms and 103 documents and found that 'leak', 'lightning', 'animal', 'cable' and 'temperature' were the main causes [49]. However, these approaches led to poor results since they failed to consider semantic and contextual information and the characteristics of electric power knowledge. A few studies focusing on semantic analysis were presented in [35] and [50].
Wang et al. proposed a deep semantic scheme of defect mining based on 'Semantic Frame Slot Filling' (SF-SF) [35]. The unstructured information in historical defect texts was transformed into structured information, and defect components and attributes were extracted for further research. Sheng et al. [50] utilised a multi-algorithm model involving Adaboost, Support Vector Machine (SVM), and Random Forest to obtain power customer appeals from 95598 voice-transcription texts for improving service quality.
Overall, text mining has been preliminarily applied in power systems, assisting in the early diagnosis of power equipment faults, power consumer appeal analysis, and the capture of dispatching experience by extracting key information from textual records such as power equipment asset failure and defect texts, power consumer complaints, and dispatching tickets [48]. However, how to parse texts from a semantic perspective so that key information can be extracted for defect analysis is still an unresolved issue. Furthermore, there are significant differences in the algorithms applied to multilingual text: methods developed for English text are often incompatible with and untranslatable to Chinese text, since the structure of Chinese characters is more complex [51]. In addition, studies of the power system domain pertaining to text mining from a semantic perspective are still rather limited.
Social media analysis
Social networking is the most popular online activity, and 91% of netizens use social media regularly. Facebook, YouTube, and Twitter are the first, third, and tenth most-trafficked sites on the Internet [52], and they can also be considered social sensors. The heterogeneous information in various formats from social sensors has been used for real-time event extraction [53], covering disasters [54,55], power outages [56], traffic, and severe weather events. Sakaki et al. [54] developed a probabilistic spatio-temporal model leveraging Kalman filtering and particle filtering to detect events and estimate their locations; moreover, an earthquake reporting system was constructed and applied in Japan. Lee et al. [55] applied the Naive Bayes classifier to identify tweets related to blackouts; however, the error rate was relatively high.
Power system analysis is different from earthquakes and hurricanes, hence these methods cannot be applied directly to monitor power outages [57]. The integration of social media with power system analysis and operation is a relatively new topic, and the usefulness of social media in outage detection has been recognised by the power industry. Sun et al. [56] integrated textual, temporal, and spatial information and developed a method for detecting and locating power outages based on latent Dirichlet allocation. Bauman et al. [58] proposed a keyword-relevance detection method to identify power outages from tweets for discovering emergency events, particularly focusing on 'small events' related to the general public. However, it should be noted that the success of applying the above methods to social media analysis depends on how well domain experts can set the key parameters and system inputs for those applications.
In this section, the existing knowledge discovery domains of ontology, power text mining, and social media analysis have been comprehensively surveyed. Since SGs knowledge is characterised by diversity, correlation, synergy, and hiddenness, previous research on these three topics remains superficial and lacks deep semantic analysis, domain-specific knowledge models, and fact association for SGs scenarios.
Diversity. There are many types of power equipment and several sub-domains in SGs, which in turn produce massive and diverse data and knowledge.
Correlation. Associations or dependencies exist between events. Because associated power knowledge is widespread in SGs, it is of great significance to excavate correlation knowledge during knowledge reasoning.
The challenges brought by electric power big data lie not only in the sheer scale of the data, but also in improving big data analysis technologies to meet the increasingly diverse requirements for personalised services and knowledge navigation. The next thing to consider is how to extract and analyse valuable knowledge from massive data, which is among the top concerns of big data research. Therefore, it is essential to understand the concepts and terminology of power grids as well as the relationships between them, study semantic heterogeneity, establish a specific knowledge database for electric power big data for situational awareness analysis, and further provide personalised services for power users according to their experiences, roles, and tasks.
Synergy. Decision-making often calls for comprehensive knowledge analysis because no piece of knowledge exists in isolation.
Hiddenness. A great deal of raw data and information in power grids may be incomplete, noisy, fuzzy, and random. Such data and information have little practical significance by themselves; the truly valuable knowledge hidden behind them needs to be discovered through appropriate means of knowledge discovery.
What is the knowledge graph
A KG [59] is a network of all kinds of entities, relations, and attributes related to a specific domain or topic, which provides a programmatic way to model a specific real-world domain with the assistance of subject-matter experts, data interlinking, and machine learning algorithms [60]. A KG is typically built on top of existing databases to link all data together at web scale, combining both structured and unstructured data, including objects, abstract concepts, numbers, and documents. It is a directed edge-labelled graph, whose nodes represent entities and properties of interest and whose edges represent relations between these entities.
Each link between two nodes is usually represented as an SPO (subject-predicate-object) triple, which can be recognised as a piece of knowledge. Moreover, two endpoints allow multiple links because there can be more than one relationship between two entities, for example, (oil tank, is a part of, the transformer) and (the operating condition of oil tank, seriously influences, the transformer). Formally, a directed graph can be modelled as a tuple $G = (\mathcal{E}, \mathcal{R})$ [61][62][63], where $\mathcal{E} = \{e_1, \ldots, e_{N_e}\}$ is a set of entities (i.e. subjects or objects) and $\mathcal{R} = \{r_1, \ldots, r_{N_r}\}$ is a set of edges that map all the relation types. Each possible triple and its binary random variable can be modelled as $x_{ijk} = (e_i, r_k, e_j)$ and $y_{ijk} \in \{0, 1\}$, respectively. Then, all the triples can be represented as a three-way array $\mathbf{Y} \in \{0, 1\}^{N_e \times N_e \times N_r}$, whose entries are set such that
$$y_{ijk} = \begin{cases} 1, & \text{if the triple } (e_i, r_k, e_j) \text{ exists,} \\ 0, & \text{otherwise.} \end{cases}$$
Figure 1 illustrates the overall architecture of KG technology [64]. The part in the dotted box is the process of constructing and updating the KG. KG construction extracts knowledge elements (i.e. facts) from raw data by several automatic or semi-automatic techniques, and the extracted entities (concepts), attributes, and relationships between entities are then stored in a knowledge base. The whole process is dynamic and iterative, and the relevant construction/update procedures consist of four steps: (1) information extraction [65]: extracting entity (concept) attributes and their interrelations from various data sources and forming an ontology-based knowledge representation.
The key technologies involved: entity extraction [66] and relation extraction [67]; (2) knowledge representation; (3) knowledge fusion [69]. The key technologies involved: entity linking, entity alignment, entity disambiguation, and knowledge reasoning [70]; (4) knowledge processing: the integrated knowledge is merged into the knowledge base only after it has undergone quality evaluation or manual judgment. The key technologies involved: quality evaluation, knowledge graph refinement, knowledge graph completion, and knowledge graph correction [25].
FIGURE 1 The framework of knowledge graph
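As a minimal sketch of the adjacency-tensor view defined above, the following Python snippet encodes the two example triples from the text; the entity and relation inventories are illustrative only:

```python
# Minimal sketch of the adjacency-tensor view of a KG; the entities,
# relations, and triples are illustrative examples from the text.
import numpy as np

entities = ["oil tank", "transformer", "oil tank condition"]
relations = ["is a part of", "seriously influences"]
triples = [("oil tank", "is a part of", "transformer"),
           ("oil tank condition", "seriously influences", "transformer")]

e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: k for k, r in enumerate(relations)}

# y[i, j, k] = 1 iff the triple (entity_i, relation_k, entity_j) exists.
y = np.zeros((len(entities), len(entities), len(relations)), dtype=np.int8)
for s, p, o in triples:
    y[e_idx[s], e_idx[o], r_idx[p]] = 1

print(y[:, :, r_idx["is a part of"]])  # adjacency slice for one relation
```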
Why use the knowledge graph
As mentioned above, a KG is a way of storing, interlinking and organising datasets and knowledge, which allows people and machines to better tag content and properly relate it within connected datasets. KGs are becoming ubiquitous, powering everything from recommendation engines and enhanced query to NLP and conversational applications. There are many reasons for making use of KGs, as follows. Breaking down data silos. The KG makes machines understand real-world context with a flexible data layer, which integrates real-world facts, events and concepts from the perspective of a so-called 'unified view'.
Finding information faster. Fundamentally, a KG is a graph database that stores knowledge and information in a graphical format, which means the relationships between any data points can be computed far more quickly and with lower compute overhead.
The demand analysis of smart grids for knowledge graph
As smart grids thrive rapidly, massive advanced metering infrastructure and sensors are deployed in electric power systems. At the same time, an unprecedented amount of multi-source heterogeneous big data is accumulated. It is an urgent problem to construct a full-service unified data centre for intelligently analysing and managing large volumes of data [71]. KGs provide a flexible way to establish semantic connections and obtain a unified semantic-level data service based on an ontology. The KG4SG is a huge semantic network that merges the multi-source heterogeneous big data and leverages a new data integration paradigm applicable to the next generation of electric power artificial intelligence [11]. Moreover, KG4SG can be widely applied in equipment failure analysis, consumer service, fraud data analysis and other fields.
KGs give electric power AI applications intelligence. For example, they can provide relevant facts and contextualised answers to specific questions. In addition, electric power AI benefits from KGs through enhanced query and search, and KG4SG helps AI discover hidden facts and relationships through inference over the integrated content that operators would otherwise be unable to catch at scale.
Making better decisions. More enriched and in-depth search results can be captured with the help of networks of 'things' and facts.
Uncovering hidden insights. Many existing AI technologies are black-box models, leaving operators unaware of the internal knowledge flow and of how the black-box algorithms make decisions. KGs enriched with entities, relations and concepts help with AI explainability.
BIG KNOWLEDGE GRAPH PLATFORM FOR SGs
In order to provide a better understanding of the business applications and the development of SGs, an integrated enterprise-level information integration platform should be proposed to realise the smooth flow of electric power big data and information sharing between power consumers and electric power utilities. A well-integrated big knowledge graph (BKG) platform can organise, construct, manage and make use of large-scale KGs, which not only ensures efficient operation of power systems, but also benefits all the key stakeholders (e.g. power utilities, operators, consumers) [72]. In such a case, a framework of KG for SGs is devised as shown in Figure 2, based on [91][92][93]. The whole KG4SGs consists of four layers [94], namely, the data acquisition layer, knowledge graph layer, knowledge computation and management layer, and intelligent application layer, which follows a hierarchical pattern according to the electricity knowledge flow. This section focuses on the first three layers. The intelligent application layer will be elaborated on in Section 5.
FIGURE 2 The framework of knowledge graph towards smart grids
The high-level design of the BKG platform for SGs is shown in Figure 2; it provides a variety of functionalities aimed at SGs businesses and intelligent applications. That is, distributed heterogeneous power data generated from the operation and maintenance of complex power grids, power equipment, and power consumers can be transformed into electricity knowledge via knowledge extraction technologies, so that the entities (e.g. real objects, line failures, equipment failures, locations of line failures, and power equipment), attributes (e.g. failure characteristics, English names of failures and types of failures), relations (e.g. the causes of failure, treatment methods, selection and application methods), and events (e.g. power outage, thunder, severe weather) related to electricity knowledge are excavated from digital data, images, textual defect records and so on by knowledge processing. Then, the various types of relations between excavated entities can be integrated into a network (i.e. a KG), in which an entity corresponds to a vertex and a relation corresponds to a directed edge. Moreover, each link between two nodes related to electricity knowledge is represented in SPO form, which helps electric knowledge engineering facilitate fact association. To store the massive entities, attributes, relations and events of SGs and build more intelligent machine learning algorithms enabling knowledge flow capability and efficiency, the big data platform is designed for the implementation of graph data models (HBase, Neo4j), graph algorithms, database engines (Spark, Hadoop) and database interfaces. In particular, the big knowledge platform also supports intelligent services for diverse application scenarios, for example, semantic search, an intelligent power consumer service system, auxiliary decision-making, and intelligent operation and maintenance of power equipment.
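To make the vertex/edge mapping concrete, here is an illustrative sketch (not the platform's actual implementation) that builds a small SPO network with networkx; the entities and relations are hypothetical examples:

```python
# Illustrative sketch: entities become vertices, relations become
# directed, labelled edges; the facts below are hypothetical.
import networkx as nx

g = nx.MultiDiGraph()
spo_triples = [
    ("line failure #17", "located at", "substation A"),
    ("line failure #17", "caused by", "severe weather"),
    ("transformer T1", "installed in", "substation A"),
    ("transformer T1", "has attribute", "failure type: oil leakage"),
]
for s, p, o in spo_triples:
    g.add_edge(s, o, relation=p)

# Traversal associates facts, e.g. everything linked to transformer T1.
for _, obj, data in g.out_edges("transformer T1", data=True):
    print(f"transformer T1 --{data['relation']}--> {obj}")
```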
Data acquisition layer
Considering the fact that electric power big data is collected from disparate data platforms by different monitoring infrastructures, the objective of the data acquisition layer is to integrate all the data connected to the SGs into the BKG platform, including online open websites, third-party integrated service platforms, electric power knowledge databases and so on [95]. According to the data sources, the data in SGs can be divided into two categories. One is internal data of power systems, which is accumulated from the production management system (PMS), energy management system (EMS), outage management system (OMS), asset management system (AMS), condition-monitoring management system (CMS), etc. The other is external data, which consists of the meteorological information system (MIS), geographic information system (GIS), the public service sector and the internet. All these collected sub-domain hybrid corpora contain a large amount of information, which supports data and knowledge mining, the excavation of values, and novel knowledge-driven algorithms, and allows improvements to existing operation and planning practices. Therefore, the effective and efficient integration of big data analysis is crucial for all aspects of the whole electric power knowledge chain, such as suppliers, operators, consumers, and regulators [96].
A. Trouble and failure records
In the operation of power grids and power asset management, inspectors record trouble tickets in text and electronic forms. These unstructured data not only reflect the historical trend of the operating condition of power equipment and infrastructure, but also contain rich latent fault information [97]. However, existing methods cannot exploit the trouble and failure records efficiently because of the complexity of text semantics and structures. For example, the State Grid Corporation of China has stored large quantities of defect records in the PMS, as shown in Table 1.
TABLE 1 Defect records stored in the PMS (columns: record type; defect information)
B. Power consumer service tickets and operation data
There are large quantities of power consumer service tickets from long-term power transactions and services. Meanwhile, customer satisfaction continues to be among the top concerns of power utilities. Electricity consumption behaviour analysis and new hot spots of consumer attention extracted from service tickets are attracting the interest of many researchers. For power utilities, a power consumer service system can provide proactive perception and service; for example, it can intelligently remind users when they need to query content they often pay attention to and push relevant notifications. In addition, various wide-area monitoring systems such as AMI, PMS, and the electric power marketing system continuously generate massive amounts of operation data, which concerns the safe operation and reliability of power equipment assets, power grids, and utilities.
C. Authoritative standards and guides
Many official organisations and electric power companies have issued series of standards covering traditional electricity systems, new SGs, and other related disciplines.
Research institutions include IEC, IEEE, CIGRE, EPRI, W3C, ISO, the State Grid Corporation of China and so on, which hold abundant information and the authoritative knowledge of many experts.
D. Domain expert knowledge
Abundant valuable knowledge and experience have been generated during the operation, maintenance, and marketing of SGs in the long-term production of electric power. For example, an experienced dispatcher can accurately judge the safety margin of power system operation, and a senior maintenance engineer can determine whether a transformer is operating well by listening to its sound. Effective management of these intangible knowledge assets is of great practical significance and economic value.
A. Social media
With the advent of the digital information era, apps such as Facebook and Twitter enable power consumers to engage in the operation and economic dispatch of SGs. The discussions in Section 2 show that the data from social sensors can be used to identify the location and extent of an outage without other measurement and communication instruments [56].
B. Open professional literature in electric power field
On the Internet, there is much publicly available literature, and some academic literature search engines now provide professional keyword retrieval based on NLP technology. However, these search engines are mainly based on traditional regular matching techniques, as they lack professional thesauri and corpora. At present, the KGs led by Google are in full swing and have already been applied in the Internet and medical fields.
C. Power information websites
There are some open professional websites recording valuable information as shown in Table 2.
Although the documents on these restricted topics are relatively sparse and their quality varies, they can provide an auxiliary foundation for information retrieval and public opinion monitoring.
Knowledge graph construction
Similar to the construction of a conventional domain KG, the creation of the electric power KG adopts a bottom-up pattern, during which the key entities, relations, and attributes derived from large-scale heterogeneous power data are progressively processed, linked and added into a knowledge base in view of the actual demands of SGs scenarios [98]. The framework of electric power KG construction consists of three layers: the knowledge acquisition layer, the sub-domain knowledge graph layer, and the electric power knowledge graph layer (Figure 3).
FIGURE 3 Framework of electric power knowledge graph [98]
Knowledge acquisition layer
In the knowledge acquisition layer, numerous entities, entity attributes, and relations between entities from distributed heterogeneous power resources are obtained by named entity recognition (NER), attribute extraction, relation extraction, and event extraction [67], and an electric power knowledge base is established to store the electric power knowledge. Meanwhile, a third-party domain-specific expert knowledge base built by experts can also be integrated into the constructed electric power knowledge base. Figure 4 describes an example of extracting entities from a defect record of a transformer ('oil storage cabinet of #1 main transformer has oil leakage, and the speed is 5 drops per minute') [35].
FIGURE 4 An example of entity/attribute extraction [99]
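For illustration, a deliberately simplified rule-based extractor over the English rendering of this defect record is sketched below; the regular-expression pattern is hypothetical and far weaker than the NER and slot-filling models cited above:

```python
# Toy rule-based extractor for the example defect record; the pattern is
# illustrative only and not a substitute for trained NER models.
import re

record = ("oil storage cabinet of #1 main transformer has oil leakage, "
          "and the speed is 5 drops per minute")

pattern = re.compile(
    r"(?P<component>[\w ]+?) of (?P<equipment>#\d+ [\w ]+?) has "
    r"(?P<defect>[\w ]+?), and the (?P<attribute>[\w ]+?) is (?P<value>.+)"
)
m = pattern.match(record)
if m:
    # Structured fields ready to become KG entities and attributes:
    # {'component': 'oil storage cabinet', 'equipment': '#1 main transformer',
    #  'defect': 'oil leakage', 'attribute': 'speed',
    #  'value': '5 drops per minute'}
    print(m.groupdict())
```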
Sub-domain knowledge graph layer
In the sub-domain KG layer, KGs for different business scenarios or power domains in SGs are constructed, such as the power customer service system, decision-making in dispatching, and operation and maintenance of electric equipment. Each sub-domain KG corresponds to exactly one domain-specific demand. Specifically, entity matching, co-reference resolution, and relation analysis are adopted to eliminate ambiguity and errors and to reduce the redundancy of concepts and entities. To obtain a well-networked and structured electric power topic KG, the processes of knowledge fusion [69] (e.g. entity linking and entity resolution) and knowledge inference are subsequently carried out to adjust and modify the obtained results. Since new knowledge and the results of knowledge mining might be incomplete and error-prone, quality assessment is particularly essential to discard knowledge with low confidence before adding new or extracted knowledge into an existing sub-domain graph.
A reflection of the defect records in a KG is depicted in Figure 5.
Electric power knowledge graph layer
The entire electric power KG is constituted by merging a mass of sub-domain KGs for different business scenarios or power domains in SGs. In other words, disparate domain KGs related to SGs application scenarios are eventually integrated into one knowledge base by linking correlative nodes. The electric power knowledge is updated, revised, and enriched dynamically by quality evaluation, knowledge updating and knowledge reasoning technology [100], and the updated KG is then stored in the graph base; this is a dynamic and iterative process.
Taking the above two defect records as an example, 'oil conservator' has the same meaning as 'oil storage cabinet', and the oil conservator is a component of the transformer body. The two sub-reflections are integrated after knowledge fusion, knowledge reasoning, and knowledge updating, and the relation between the oil storage cabinet and the transformer body can be clearly depicted, as in Figure 5(c).
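A minimal sketch of this fusion step, assuming a hand-made synonym table: synonymous mentions are mapped to a canonical node before the two sub-graphs are merged.

```python
# Sketch of entity alignment during knowledge fusion; the synonym table
# and triples are illustrative, not the platform's data.
synonyms = {"oil storage cabinet": "oil conservator"}  # alias -> canonical

def canonical(entity: str) -> str:
    return synonyms.get(entity, entity)

sub_graph_a = [("oil storage cabinet", "has defect", "oil leakage")]
sub_graph_b = [("oil conservator", "is a component of", "transformer body"),
               ("transformer body", "is a part of", "transformer")]

# After alignment, both defect records attach to the same node.
merged = {(canonical(s), p, canonical(o)) for s, p, o in sub_graph_a + sub_graph_b}
for triple in sorted(merged):
    print(triple)
```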
From the perspective of the KG modelling in Section 3.1, a property graph like Figure 5(c) can be defined as a tuple $G = (V, E, L, P, U, e, l, p)$ [61], where $V$ is a set of node ids, $E$ is a set of edge ids, $L$ is a set of labels, $P$ is a set of properties, $U$ is a set of values, $e: E \rightarrow V \times V$ maps an edge id to a pair of node ids, $l: V \cup E \rightarrow 2^{L}$ maps a node or edge id to a set of labels, and $p: V \cup E \rightarrow 2^{P \times U}$ maps a node or edge id to a set of property-value pairs.
Returning to Figure 5, the reflection of the defect records in the knowledge graph can be depicted as follows: the set $V$ contains transformer, transformer body, oil storage cabinet, and oil leakage; the set $E$ contains the edges linking them, such as the component relation between the oil storage cabinet and the transformer body and the defect relation between the oil storage cabinet and the oil leakage.
FIGURE 6 The integration of knowledge graph and smart grids [101]
FIGURE 7 The knowledge graph towards transformer body defect
Knowledge computation
The knowledge computation module manages the computing frameworks and algorithms, including distributed big data processing frameworks (Hadoop, Spark), graph computation, NLP, machine learning, deep computation and so on. Relevant electric power data is often stored in several physical repositories, but it must be processed for dedicated sub-domain services before effective use. In particular, it is worth highlighting two kinds of data processing methods: graph computation and deep learning. In general, any knowledge structure (head entity, predicate, tail entity) of the electric power KG can be considered as a graph. Graph computation has a great advantage with regard to processing, analysing, and visualising massive data with graph structures and complicated relationships. Moreover, graph computing can explore the topologies and properties of KG4SGs via linked vertices and edges [102,103]. Deep learning has been widely applied in SGs in recent years [104][105][106][107]; in terms of KGs, it has been leveraged for relational reasoning [108], knowledge representation [22] and so on.
Hadoop MapReduce is deployed to process the massive electric power big data because it is good at batch processing for big knowledge computing and benefits from a distributed parallel framework. Apache Spark GraphX, a component of Spark for graphs and graph-parallel computation, is implemented in the BKG platform for graph computing. In addition to a highly flexible API (application program interface), GraphX provides users with primitives for elementary graph operations. Apache Spark and Spark Streaming are used as the in-memory and streaming computing frameworks [109]. Since electric power data and knowledge are mostly represented using the resource description framework (RDF), SPARQL, a specialised query language for RDF, is applied to perform semantic knowledge-based querying over massive information.
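As a small illustration of such semantic querying (a sketch with an invented namespace and facts, not the platform's data), rdflib can store triples and answer a SPARQL query:

```python
# Sketch of SPARQL querying over RDF triples; namespace and facts invented.
from rdflib import Graph, Namespace, Literal

PWR = Namespace("http://example.org/power#")
g = Graph()
g.add((PWR.oil_conservator, PWR.isComponentOf, PWR.transformer_body))
g.add((PWR.transformer_body, PWR.isPartOf, PWR.transformer))
g.add((PWR.oil_conservator, PWR.hasDefect, Literal("oil leakage")))

# Which components have a recorded defect, and of which assembly?
q = """
SELECT ?component ?assembly ?defect WHERE {
    ?component <http://example.org/power#isComponentOf> ?assembly .
    ?component <http://example.org/power#hasDefect> ?defect .
}
"""
for component, assembly, defect in g.query(q):
    print(component, assembly, defect)
```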
Above all, the BKG platform can compute massive amounts of data and knowledge, capture valuable information, and provide decision-making grounds for electric utilities, system operators, and power consumers.
Knowledge management
The most important module of knowledge management in the BKG platform is the knowledge storage module, which provides knowledge sources and allocates storage space for the electric power knowledge computation module. The BKG platform mainly manages four types of electricity knowledge: (1) RDF triples; (2) abstract textual information; (3) images; (4) digital data [92]. Meanwhile, the knowledge storage module supports knowledge computing, graph computing, machine learning, deep learning, knowledge-based querying, etc. The general electric power sub-domain KGs are stored in graph databases, for example, HBase and Neo4j. Neo4j [110] is a high-performance NoSQL (Not Only Structured Query Language) graph database that stores structured data as a graph rather than in tables.
The massive electric power knowledge is represented as SPO triples by mainstream representation technologies and ontology learning. The binary relationships between grid ontologies are integrated into the RDF graph model; hence, the knowledge base of each electric power sub-domain KG contains a huge quantity of RDF triples related to electricity knowledge. The BKG platform mainly uses Spark SQL (structured query language) [111] to support electric power knowledge operations because it is more efficient than the Hadoop distributed file system [112], Hive [113], and Shark [114]. SPARQL-to-Spark SQL query transformation is also deployed to achieve efficient RDF querying.
Another key part of knowledge management is information security. Three groups of security techniques are employed [92].
Visitor: each visitor is authorised to access a range and quantity of data and confidence levels according to his/her authorisation class.
Knowledge: distributed knowledge storage is applied so that knowledge with different confidence levels is stored in different areas of the BKG and protected at different security levels. In addition, new knowledge must be carefully checked before it is integrated into the knowledge database.
System: system security is ensured from two perspectives, namely, encrypting critical nodes and keeping back-ups.
In the future, the KG is expected to be combined with blockchain to provide more secure, synchronised, and guaranteed systems for SGs [69].
PROSPECTS OF INTELLIGENT APPLICATION
KG technology offers a vital methodology for expressing, organising, managing, and utilising massive, multi-source, heterogeneous, and dynamic data and information in an easy-to-understand, easy-to-use, and easy-to-maintain manner [108]. At present, KG technology is mainly used in recommendation systems [115], language modelling [116], question answering [117], and image classification [118], and it has penetrated the financial, medical and industrial sectors.
In the medical field, KGs have provided an emerging paradigm for medical information systems, including clinical decision support systems [119], medical intelligent semantic search engines [120], and medical question answering systems [121], making these systems more intelligent and user friendly. In SGs, we devise KG4SGs as a unified platform to represent and manage, at a semantic level, the massive heterogeneous power data acquired by smart measurement and metering during electric power production and operation. In this regard, the KG holds immense potential for smart power system business scenarios based on AI technologies such as deep learning, NLP, and machine learning. In this section, three typical application scenarios are taken as examples to show that the KG in SGs is a promising area with real application value: the power customer service system, decision-making in dispatching, and operation and maintenance of power equipment. In addition, applications of KG in other grid business scenarios are briefly surveyed.
Background
A featured initiative in future smart grids is the active participation of electricity consumers in ancillary services [122][123][124]. Customer value analysis can provide differentiated service for customers and maximise benefits for power utilities. Liu et al. [125] proposed a quality inspection sampling algorithm using a modified C4.5 algorithm to classify service calls and work sheets with or without defects. Sheng et al. [50] proposed a power customer appeal recognition model based on Adaboost, SVM and Random Forest. Lindén et al. [126] utilised historic consumption patterns to categorise electricity customers. Zhang et al. [127] studied clustering-based electricity customer classification.
However, there are some deficiencies in recent power customer services: (a) traditional power customer service channels such as the 95598 hotline, business halls, and manual service make communication costs, training fees, human resources, and other costs unaffordable for most companies; (b) there are also conditional constraints such as time (non-24-h service) and venue (centralised customer service offices), which hinder high-quality services; (c) given that customer service staff have disparate grasps of the business problems, the quality of service is uneven, which may make customers wait too long or even receive no satisfactory answers; (d) customer service staff are prone to losing their enthusiasm when responding to repetitive problems over the long term; (e) if power enterprises want to build a large and sophisticated knowledge base for customer service systems, a huge amount of manpower and material resources must be expended, and the knowledge base is difficult to maintain and update later because of the complex relationships within electricity knowledge. All these problems hamper the quality and efficiency of personalised customer services.
At present, the information retrieval-based method, the most popular approach to question answering, is mainly based on keyword matching. However, this approach considers only the shallow similarity of keywords related to the queried question and overlooks in-depth semantic information [108,128]. A KG-based SGs customer service question answering system can map consumer service knowledge expressed in natural language into a knowledge base, which enhances the performance of power consumer service. Hence, it is an important issue to automatically classify knowledge, problems, and experiences in power supply services, obtain optimal answers, and provide consumer-oriented interaction based on massive work ticket data and the power business knowledge system, in order to respond to the increasing number of SGs customers and the huge demand for consulting.
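The contrast can be sketched with a toy KG lookup: question keywords are linked to a triple's subject and relation, and the object is returned as the answer. The stored facts below are hypothetical, and a production system would rely on semantic parsing and reasoning instead:

```python
# Toy KG-backed question answering; the stored facts are hypothetical.
kg = [("oil leakage", "treatment method", "tighten flange and replace seal"),
      ("oil leakage", "possible cause", "ageing gasket")]

def answer(question: str) -> str:
    words = question.lower()
    for subject, relation, obj in kg:
        # Link question keywords to a triple's subject and relation.
        if subject in words and any(w in words for w in relation.split()):
            return obj
    return "No answer found; forwarding to a human agent."

print(answer("What is the treatment method for oil leakage?"))
```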
Intelligent customer service robot system
In order to deeply discover power customer service knowledge and improve the capability of electric power enterprises for intelligent knowledge management and application, KG technology can be applied to organise and manage information related to power enterprises through in-depth semantic analysis. In this regard, the architecture of a customer-centred customer service robot system based on KG, NLP, machine learning, and semantic web technology is proposed in Figure 7 to extract customer demands, achieve high-quality service, and improve the efficiency of power system operation through deep semantic understanding. The framework not only provides a 24-h online self-service interactive response service to reduce the pressure and repetitive work of online customer service staff, but also bridges the communication between power companies and customers.
The overall framework of the intelligent customer service robot system contains six key parts: the data layer, intelligent service engine, consumer service robot system, multi-channel access for social media, robot operation framework, and unified management platform. The core modules of each part are as follows. The data layer is used to extract key semantic knowledge from domain corpora (e.g. electricity laws and regulations, common questions, electricity common sense, a power thesaurus, WeChat history records, Twitter messages), and then construct an easy-to-understand semantic feature analysis model and an intelligent question answering knowledge base that can be understood by the robot. The knowledge base comprises the common knowledge base, business knowledge base, and rule base. The common knowledge base contains phrases, sensitive words, stop words, and common parts of speech involved in customer questions. The business knowledge base contains customer service business questions, the power thesaurus, and the customer service knowledge base. As for the rule base, special instructions are set in it for special scene tasks, enabling robots to make quick answers in different consulting scenarios through semantic understanding, KG matching, and knowledge reasoning.
The intelligent service engine supports various engines and function modules for the other parts of the intelligent customer service robot system, including sentence segmentation, semantic analysis, chat conversation, answer processing, knowledge search management, etc. In addition, an intelligent routing distribution mechanism is implemented in the intelligent customer service robot system, which can intelligently identify user priorities, reasonably allocate customer service, and improve the efficiency of manual customer service in terms of user membership levels, source channels, demand categories, key behaviours, business nodes, customer service division, response time, user satisfaction, conversion rate, workload, etc. The consumer service robot system can accurately and quickly identify and predict customer intentions, cover various scenarios, and promote non-blocking human-machine communication. Moreover, the intelligent customer service interaction system can record content that cannot be answered or classified by the current KG, which helps add new user concerns to the existing knowledge map through later manual intervention. Proactive perception and active services are developed to continuously improve service quality, such as reminding users whether they need to query content they often pay attention to, and pushing relevant information to the user.
Multi-channel accesses for social media can receive massive customer consultation data from various channels such as chat sessions, online messages, and customer evaluations from Twitter, WeChat, apps, and websites. Most common problems can be answered automatically by the consumer service robot with semantic understanding.
The robot operation framework supports communication with power consumers based on interactive business logic and provides a service interface for the consumer service robot system, the multi-channel accesses for social media, and the unified management platform. This working framework can be further improved in the future.

The unified management platform implements robot management, marketing management, knowledge management, authority management, and user management, and is the basis of the whole intelligent consumer service robot system.

The intelligent customer service robot system based on KG has the following advantages, which make it suitable for intelligent customer service in the era of AI: reducing customer service costs; achieving high-quality and more timely responses; fulfilling strong customer-centric expectations; and improving the explainability of the answers.

Until now, there have been several KGs and research efforts focusing on intelligent question answering engines [136-139]. However, there are few explorations of KG applied to intelligent customer service towards SGs. Tan et al. [140] proposed a hybrid domain-features KG smart question answering system to reduce the ambiguity of Chinese-language questions and the cost of online service operation and maintenance. The entities were identified by a long short-term memory model, and a semantic enhancement method based on topic comparison was proposed to find external knowledge. However, the answers were obtained by using heuristic rules.

At present, there is an increasing demand for high-quality and reliable electric power and services. However, the large-scale incorporation of distributed renewable energy into power systems affects the stability and quality of power production to a certain extent. The involvement of power consumers may promote the interaction and responsiveness of customers in different ways via emerging handsets and other connected devices, which will transform the enterprise-customer relationship and make power customers participants [141, 142]. In particular, with the advent of the mobile social media era, mobile applications such as Twitter and WeChat provide consumers with more chances to get involved in electricity services and resolve their requests more efficiently and conveniently. In the future, power utilities will earn the long-term trust of customers as power service advisors. In particular, an intelligent customer service robot system can fulfil strong customer-centric expectations, such as personalised services, proactive tips on saving electrical energy, improved customer engagement, and expanded power customer experience initiatives, in a user-friendly way.

Background

In the existing dispatching mechanism, operation rules are made offline by experts, and the 'empirical + analytical' model still dominates dispatching business processing. However, this cannot satisfy the requirements of online application as power systems enlarge, network interlinks tighten, and system complexity grows.

There are several challenges in SGs operation. The first concerns the formulation of operating rules. The operating rules are mainly based on offline analysis and manual induction, and the results depend on developed expert experience. This method is time-consuming and labour-intensive, and can only address typical operation modes, so it cannot adapt to all online situations. With the expansion of power grids and the growing complexity of network structure, the acquisition of operating rules becomes more difficult.

The second concerns the application of operating rules. The obtained rules usually remain unchanged for a long time because of the huge workload of formulating them, which makes matching the online operation modes very difficult. As a result, electric power dispatchers have to adopt conservative operating limit values, corresponding to rough rules and poor economics. On the other hand, the rules are optimistic in that they do not consider a few extreme operating modes and accidents in the offline analysis, which may threaten power system security.
With the large-scale incorporation of intermittent renewable energy, the frequent occurrence of natural disasters caused by climate change, and the increasing time variation and complexity of power grid operation modes, traditional scheduling decision-making mechanisms can no longer meet the requirements of online operation, which poses several challenges to the operation of complex power grids.
With the development of AI technology, considerable research on decision-making in dispatching has been carried out. Genc et al. [143] applied decision trees to develop preventive and corrective controls. Zhu et al. [144] utilised imbalance learning to assess power system dynamic stability. Zheng et al. [145] developed a multi-objective group search optimiser with adaptive covariance and Lévy flights to optimise the power dispatch of a large-scale integrated energy system. Liu et al. [146] proposed an energy network dispatch optimisation under an emergency of local energy shortage, with a web tool for automatic large group decision-making. Li et al. [147] studied a two-stage methodology combining multi-objective optimisation with integrated decision-making. However, the large-scale incorporation of intermittent renewable energy has posed great threats to the economic dispatch of smart grids [148-151]. To address the high levels of uncertainty associated with the intermittency of resources [152], one issue is to break through the computational complexity. This may be solved by deployed forecasting systems, such as load and renewable generation forecasting [153-155]. Sparse Bayesian classification and Dempster-Shafer theory-based wind generation [156], Markov chain-based stochastic optimisation of economic dispatch [157], spatio-temporal wind power forecasting [158], fuzzy prediction interval model-based forecasting of renewable resources and loads [152], and dispatch scheduling based on wind and locational marginal price forecasting [159] have been studied in detail. These approaches, however, lead to another problem: several studies are based on black-box models, which are less interpretable and difficult for field operators to accept.
On the other hand, there are a great number of texts related to scheduled routine operations, which make fault handling in power grid dispatching, and knowledge and experience learning from unstructured Chinese text, obstacles to human-machine collaboration. Therefore, it is imperative to establish an automatic operator (AO) for decision-making in dispatching by utilising AI and NLP to extract knowledge from long-term operating experience, dispatching rules, and tickets, so as to accommodate the SGs dispatching business.
Automatic operator for decision-making in dispatching
The AO for decision-making in dispatching equips the machine with cognitive capabilities, and dispatching knowledge engines are implemented by knowledge retrieval, knowledge fusion, and knowledge reasoning, thus improving the automation and intelligence of dispatching. The overall framework (Figure 8) comprises four key steps: establishing a corpus and semantic model of dispatching professional lexicons targeting the characteristics of text terms and complex scene distribution in SGs dispatching; extracting information from dispatching rules and tickets by NLP to form a machine-recognisable language, and deeply mining the critical interfaces of SGs, the key factors affecting the limit transmission capacity of interfaces, and their quantitative relationships [160]; learning textual fault disposal rules automatically; and finally building a large-scale knowledge base of dispatching rules and fault processing schemes to support AO decision-making. Instead of rough, offline dispatching rules, fine rules can be acquired online by the AO. As a result, the AO promotes intelligent decision-making in dispatching and automatic accident disposal. When a dispatcher interacts with the AO, the AO can identify the dispatcher's intention, capture the critical information by entity recognition, syntactic analysis, and semantic analysis, match and retrieve the processing results from the KG of dispatching rules and fault processing schemes, and then return the disposal strategy or accident warning for the request. In this way, it not only expands the recommended results and improves accuracy, but also enhances human-computer interaction services [108].
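To make the retrieval step concrete, the following toy sketch matches a dispatcher's question against a two-rule fault-disposal KG by naive entity spotting. The graph contents, the entity recognition, and the matching logic are illustrative assumptions only, not the AO implementation described in the cited works.

```python
# Toy sketch of matching a dispatcher's question against a KG of
# fault-disposal rules. Entities, rules, and matching are illustrative.
FAULT_KG = {
    # (equipment entity, fault entity) -> disposal strategy
    ("main transformer", "oil leakage"): "Isolate the transformer and notify maintenance.",
    ("busbar", "overload"):              "Shift load to the backup busbar.",
}

def extract_entities(question: str) -> list[str]:
    """Crude entity recognition: spot known entity strings in the text."""
    known = {e for pair in FAULT_KG for e in pair}
    return [e for e in known if e in question.lower()]

def answer(question: str) -> str:
    """Return the matched disposal strategy, or escalate."""
    entities = extract_entities(question)
    for (equipment, fault), strategy in FAULT_KG.items():
        if equipment in entities and fault in entities:
            return strategy
    return "No matching rule; escalating to a human dispatcher."

print(answer("What should I do about oil leakage on the main transformer?"))
```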
FIGURE 8: Automatic operator for decision-making in dispatching based on knowledge graph
The AO for decision-making based on the KG has the following advantages: extracting the relationships between operation modes and power flow interfaces from the textual tickets of SGs operation modes by NLP technology; updating the interface stability limits automatically in accordance with the real-time operation state of SGs, which avoids interface limits not being updated in time because of slow manual judgment; capturing the key parameters that affect the limit transmission capacity of the power flow interfaces and their quantitative relationships, with strong interpretability and high accuracy; and providing real-time decision-making information for interface control, improving the interface transfer capacity, and overcoming, to some extent, the poor interpretability of traditional black-box models.

Until now, there have been few pieces of research and few achievements in KG construction for power grid dispatching. Li et al. combined 'top-down' with 'bottom-up' approaches to construct a KG of the power dispatching automation system [161]. The constructed KG of remote measurement helped in understanding the business relationships of the whole system and facilitated fault analysis when a system fault occurred, which proved that KG can be well applied in intelligent auxiliary decision-making. Shan et al. introduced the key technologies and technical routes of intelligent assisted decision-making based on KG, and proved that the KG technique can be applied to the inference and analysis of dispatching rule knowledge, taking fault disposal technology as an example [162]. State Grid Corporation of Hangzhou designed and deployed an AI-based virtual dispatching assistant named 'Pach', which realised fault judgment, plan issuing, and repair commands via more than 5000 h of speech training and the learning of a large number of safety regulations, work cases, and professional papers based on KG [163].
As an industry knowledge map, the KG for SGs dispatching has its own professional characteristics, and its accuracy can only be improved by integrating comprehensive power knowledge. In the future, intelligent decision-making for SGs dispatching will tend to be based on sub-domain KGs (e.g. graphs for power equipment, concepts, operation rules, and fault cases) and professional lexicons of the electric power field. Quicker disposal strategies will be obtained from the AO, further helping dispatchers capture the current status and development trends of SGs actively, quickly, comprehensively, and accurately. The AO for decision-making will learn automatically, and the KG will be promoted iteratively in step with the continuous changes of the SGs. Moreover, the AO can reduce the risk of manual handling errors and expand intelligent applications in power grid dispatching.
Background
In SGs, electric power equipment plays an important role in the generation, transformation, transmission, and distribution of electric energy. The enormous investment and increasing demand for electric energy motivate utilities to accurately assess and diagnose the condition of power equipment assets [164]. As a result, condition-based maintenance (CBM) has come into being. CBM can replace scheduled maintenance, prolong the service life of power equipment, and save maintenance time and cost, thus making maintenance work more scientific. In general, CBM decision-making is gradually being applied in SGs [165].
At present, CBM for power transmission and transformation equipment analyses and judges the health condition mainly through one or a few state parameters and unified diagnostic criteria, which fails to take full advantage of defects, maintenance history, family quality history, and so on. This work procedure can hardly meet the demands of differentiated, meticulous, and personalised condition assessment and fault diagnosis. As a result, there is inevitably excessive maintenance and insufficient maintenance, which may lead to an enormous waste of manpower and resources. In short, existing research on the operation and maintenance of power transmission and transformation equipment is not holistic, systematic, or optimal, with the following defects: the informatization of operation and maintenance is quite poor, and although massive equipment status information scattered across various departments of the power systems is collected, the multiple heterogeneous data cannot be managed uniformly due to the lack of unified protocols and standards; most of the existing equipment evaluation and diagnosis technologies are mainly based on a single or a few monitoring parameters, which cannot comprehensively reflect the various operation information of different equipment, resulting in poor diagnosis and evaluation results; the current methods of condition assessment and alarming still rely to a large extent on experts' operating experience, and the theories and methods of equipment fault feature selection and condition assessment models cannot meet the requirements of SGs scenarios; and intelligent decision-making and management have not yet come into being.
The fast development and application of AI provide new opportunities for the operation and maintenance of power equipment, such as defect recognition, prediction, and fault diagnosis. For example, regression prediction [166], SVM [167], artificial neural networks [168, 169], grey models [170], and combinational models [171, 172] have been widely applied in predicting the operating condition of power transformers.
As for the operation and maintenance of power equipment, advanced measurement infrastructures have been deployed in smart grids, and equipment state data has gradually emerged as large-volume, multi-type, and fast-growing, thus paving the way for the application and development of big data, artificial intelligence, and other technologies in operation and maintenance. Hence, integrating big data and knowledge is valuable for differentiated and comprehensive CBM.
Since massive heterogeneous data and information exist inside and outside the plants, it is of great importance and necessity to exploit new techniques and maintenance engineering to build a unified understanding of health conditions and fault features, develop multi-dimensional information fusion technology, conduct personalised condition assessment, set up decision-making strategies for maintenance, and support intelligent operation and maintenance for electrical power asset managers.
Intelligent operation and maintenance based on multi-modal knowledge graph
Given that massive heterogeneous data are collected by the AMI and that existing research cannot integrate the comprehensive information of power equipment, a framework is proposed to establish a multi-modal KG, as shown in Figure 9, which integrates structured data, images, Wikimedia, defect records, and maintenance tickets. This model supports personalised condition assessment and diagnosis, and CBM decision-making strategies for the operation and maintenance of power equipment. In the text processing flow, the textual defect records and maintenance tickets are processed by knowledge extraction and semantic analysis using text context mining methods, and numerous entities, relations, and attributes related to transmission and transformation equipment are obtained. In the visual processing flow, the visual semantic information of each image of power equipment is acquired by a classic deep neural network (DNN) model and corrected by human intervention. In constructing the cross-modal KG, the key is to establish a unified knowledge description model of heterogeneous data, blend the visual semantic information, structured data, and contextual semantic information of the text content, discover comprehensive and effective extension concepts, and extract concept relationships and semantic hierarchies. Considering the emerging visualisation applications based on cross-modal retrieval over the image-text-structured-data knowledge, the linked multi-modal KG can be widely applied to the operation and maintenance of power equipment, including personalised condition assessment, diagnosis, and CBM decision-making strategies; a minimal sketch of such a unified description follows below.
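As a minimal illustration of such a unified description model, the sketch below links the three modalities to a single equipment entity. All field names and values are hypothetical, not taken from the proposed framework.

```python
# Sketch of a unified description for one equipment entity linking the
# three modalities mentioned above; field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MultiModalEntity:
    entity_id: str
    structured: dict = field(default_factory=dict)  # nameplate, ratings, ...
    text_facts: list = field(default_factory=list)  # triples from defect records
    image_tags: list = field(default_factory=list)  # DNN labels from inspection photos

t1 = MultiModalEntity(
    entity_id="transformer_T1",
    structured={"rated_voltage": "110 kV", "commissioning_year": 2009},
    text_facts=[("transformer_T1", "symptom", "oil leakage")],
    image_tags=["oil stain on tank surface"],
)

# Cross-modal queries then join on entity_id, e.g. pairing a textual defect
# fact with corroborating inspection-image tags.
print(t1.entity_id, "->", len(t1.text_facts), "text fact(s),",
      len(t1.image_tags), "image tag(s)")
```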
FIGURE 9: A framework of multi-modal knowledge graph for power equipment
In contrast with existing methods, intelligent operation and maintenance based on multi-modal KG for power equipment has four advantages: merging data of transformer equipment from the data-layer perspective, whereas existing information fusion of power equipment operating parameters is mainly concentrated on feature-layer and decision-layer fusion; integrating the expertise developed over long-term electric power production, transformation, transmission, and distribution for condition assessment, diagnosis, and maintenance; achieving differentiated, refined, and comprehensive assessment results by combining data from different sources such as power grid operation, equipment status, and the meteorological environment, and analysing the current and historical state changes of the power equipment; and providing a theoretical basis for equipment operation and maintenance decision-making.

Over the last few years, KG-relevant techniques have been extensively used to develop more accurate condition assessment and diagnostic tools. Liu et al. adapted KG to retrieve defect records of power equipment, which could significantly improve the retrieval effect of defect records [69]. Wang and Liu [173] utilised KG technology to construct an error recognition model for power equipment defect records. Moreover, Tang et al. [11] applied KG to merging heterogeneous data of all power equipment to enhance the management of power equipment, which could query basic information, classify relevant product information, maintain the integrity of information, demonstrate the relationships between products, and achieve real-time updates on the content of any product. Cui [174] introduced KG to integrate the data of the IoT system for detecting the abnormal state of electric devices. In addition, Su et al. utilised KG to integrate multi-source information from power terminal equipment in the Ubiquitous Power Internet of Things based on relational databases and ontologies [175]. In general, the KG technique has been applied preliminarily in the operation and maintenance of power equipment. Research on the multi-modal KG for intelligent operation and maintenance of power equipment should be strengthened in the future, covering topics such as multi-modal attribute expression, complex multi-modal relationship mining, unified representation, and incremental updating of multi-modal KGs.
Other applications
There is also research on KG applied to other power fields, such as low-voltage distribution network topology verification [176], a visual query method for large blackouts [177], and the generation of secondary security tickets [178]. In [176], data from multiple low-voltage distribution network information systems was integrated, and a KG for low-voltage distribution network topology was built; the household-transformer relationships could be well verified and identified by the constructed KG. In order to analyse the causes of large blackouts from a large volume of Chinese text, Zhang et al. [177] utilised web crawlers, deep learning, and knowledge graph technology to capture event entities, relations, and attributes; a large-blackout knowledge graph was built, and visual queries of nodes, relations, and paths for large blackouts were implemented. In terms of generating secondary security tickets, Wang et al. [178] adopted KG to construct search engines for intelligent substations, which provided unified data integration and enhanced operational efficiency.
ISSUES AND CHALLENGES
As mentioned in Section 5, KG is a promising and fast-developing research topic for customer service robot systems, power semantic search engines, question answering engines, decision-making, automatic operators, and other applications towards SGs. The combination of better KGs and SGs would offer new opportunities for AI service providers, which in turn presents many challenges. To take an important step towards KG4SGs, a better understanding is needed of the issues and challenges in the development of electric power KGs and their technologies, which are as follows.
Making use of heterogeneous electricity knowledge
With the rapid development of SGs technology and the wide deployment of measurement devices, an unprecedented amount of electric power big data has been obtained. Electric power big data is characterised by various sources (e.g. PMS, EMS, GIS), high volume (thousands of terabytes), wide variety (e.g. digital data, textual defect records, images, social media), varying velocity (e.g. online monitoring, daily inspections, quarterly/yearly maintenance), veracity (e.g. missing data, redundancies, malicious information), and value (e.g. operational, technical, economic) [176, 177].
How to efficiently and effectively utilise this multi-source heterogeneous information is becoming a critical and challenging problem [70]. However, there has so far been no unified data representation and logical structure for the fragmented electricity data and knowledge.
Moreover, the complex power network structure and the diversification of access to information lead to increasingly prominent data heterogeneity and 'information islands'. Furthermore, the relationships among multi-source heterogeneous data become more complex and evolve over time. Extracting valuable knowledge from massive heterogeneous power data depending only on the domain knowledge of traditional expert systems is neither efficient nor sufficient. Hence, it is essential and beneficial for all stakeholders in the power sector to develop more efficient models that make better use of multi-modal electricity knowledge and unlock underlying information and relations through the cross-fertilisation of multi-source power data [181].
Constructing dynamic professional lexicons in the electric power field
The construction of a general KG can be improved by the massive semantic information in the semantic web, where all data is formal, structured, and shareable. However, for domain-specific KGs, only a little formal, structured, and shareable information can be obtained on the Internet, especially in the relatively closed electric power field. All the needed data and information are closely linked to SGs, which poses great challenges to knowledge mining and the construction of electric power KGs. The construction of professional lexicons in the electric power field can alleviate this issue to some extent. The quality and quantity of professional lexicons not only determine the accuracy of word segmentation and sentence context comprehension in text preprocessing, but also affect the performance of disambiguation and KG construction.
Meanwhile, professional lexicons in the electric power field are not static; they change over time because electric power big data is massive, heterogeneous, and multi-source [62, 182].
On the other hand, there are many sub-domains in SGs, each of which has different requirements for professional lexicons of electric power knowledge. For example, GIS refers to gas-insulated switchgear in power systems, whereas GIS means geographic information system from the perspective of computer science and technology. In addition, the emergence of new concepts such as the Energy Internet, integrated energy services, and the ubiquitous power IoT has produced plenty of new vocabulary, including a great number of highly professional industry terms. With the development of SGs businesses, more and more new concepts and vocabularies will continue to emerge, and traditional knowledge mining methods can no longer adapt to these challenges.
Therefore, it is of great significance to construct dynamic high-quality lexicons to improve the accuracy of text mining, entity extraction, and knowledge processing in this underexplored domain.
Improving the quality of KGs
One of the main challenges in electric power KG applications is the quality of the large-scale KGs themselves. Existing research on measuring the integrity and legitimacy of generated KGs cannot meet the demands of SGs scenario applications due to the lack of an effective electric terminology validation and verification model. Thus, newly constructed KGs for SGs inevitably suffer from noise, conflicts, and incompleteness, and the incorrect and missing knowledge leads to error propagation in SG scenario applications. In this regard, the issues of knowledge fusion, knowledge reasoning, and quality evaluation call for future research. Specifically, electricity facts from different power data sources and sub-systems have to be carefully checked, and those with high levels of 'confidence' can be integrated into a unified knowledge base; a toy sketch of such confidence-based fusion follows below. The constructed KG should then be refined by adding missing knowledge and by identifying and removing errors [25].
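The following sketch illustrates one plausible form of such confidence-based fusion: candidate facts from several hypothetical source systems are aggregated, and only facts whose combined confidence clears a threshold enter the unified knowledge base. The aggregation rule and threshold are assumptions for illustration.

```python
# Sketch of confidence-based fusion of candidate facts from several
# power data sources; the threshold and aggregation are assumptions.
from collections import defaultdict

# (subject, relation, object) facts reported by different source systems,
# each with the source's confidence in [0, 1].
candidates = [
    (("transformer_T1", "rated_voltage", "110 kV"), "EMS", 0.95),
    (("transformer_T1", "rated_voltage", "110 kV"), "PMS", 0.90),
    (("transformer_T1", "rated_voltage", "220 kV"), "legacy_db", 0.40),
]

def fuse(cands, threshold=0.8):
    """Keep each fact whose aggregated confidence clears the threshold."""
    agg = defaultdict(float)
    for fact, _source, conf in cands:
        # Noisy-OR style aggregation over independent sources.
        agg[fact] = 1 - (1 - agg[fact]) * (1 - conf)
    return {fact: c for fact, c in agg.items() if c >= threshold}

for fact, conf in fuse(candidates).items():
    print(fact, round(conf, 3))
# Only the 110 kV fact survives; the conflicting 220 kV claim is dropped.
```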
In terms of knowledge quality assessment, building a sound technical standard or index system for quality assessment is the future research goal of this field. Hence, it is urgent to establish evaluation criteria that quantify the credibility of domain knowledge, so as to achieve the accurate construction of huge KGs for SGs, automatically detect conflicts and errors, and add missing knowledge [22]. Crowdsourcing techniques have already been applied in entity linking and entity resolution, and human-computer cooperative crowdsourcing algorithms can improve the quality of knowledge fusion. The design of a crowdsourcing algorithm requires a trade-off among the amount of data, the quality of knowledge base alignment, and manual annotations. Influential breakthroughs are expected in organically combining crowdsourcing platforms with knowledge base alignment models and in effectively judging the quality of other workers' annotations [183-185].
Multilingual KGs of different sub-domains
An entire electric power KG contains a vast amount of power facts, events, and entities excavated from different electric power scenarios. Each electric power domain-topic KG contains a huge amount of knowledge featuring complicated structure and data, which poses certain challenges to the accuracy and efficiency of knowledge fusion (e.g. entity linking, entity alignment, entity resolution) and knowledge reasoning. In this regard, future research should focus on merging knowledge from different electric power sub-domain KGs. Moreover, multilingual KGs corresponding to different SGs application scenarios will eventually be constructed [186], closely associated with the traits of natural language and electricity knowledge. The complementary capabilities of multilingual knowledge bases provide more possibilities for real applications of KGs, presumably power consumer service systems, automatic operators for decision-making in dispatching, intelligent power knowledge search, and smart question answering systems [187].
Knowledge updating of large-scale KGs for SGs
Logically, the updating of knowledge bases for SGs involves the concept modelling layer and the data layer. In the concept layer, novel and complex concepts related to electric power knowledge are abstracted and decomposed. This layer assists in understanding, managing, and constructing knowledge base technologies aimed at the semantic description of human-computer interaction.
Obviously, the updating of a concept might affect all its direct or indirect sub-concepts, entities, and properties. The main objective of the data layer is to add, modify, and delete entities, relationships, and attribute values related to electricity knowledge. In the data layer, multiple factors such as the reliability, uncertainty, and consistency (i.e. conflicts or redundancy) of the electric power data and knowledge should be considered comprehensively.
Existing knowledge updating technology relies heavily on manual intervention, which is time-consuming and labour-intensive [188]. This means that existing methods may be difficult to fully utilise and further develop in actual SGs scenarios owing to the limitations of their models and their computational complexity. Hence, it is crucial to design a novel knowledge updating framework that can carry out online learning and update new electricity knowledge incrementally and automatically for the various applications of KGs in SGs. Incremental updating technology [189, 190], which uses the existing knowledge to achieve rapid updating while consuming fewer resources, is a future research hot spot in the field of knowledge updating; a toy sketch follows below. Moreover, how to ensure the effectiveness and efficiency of automatic updating is another major challenge.
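A minimal sketch of what incremental updating at the data layer might look like is given below; the set-based triple store and the overwrite rule for single-valued properties are simplifying assumptions, not an account of the cited incremental techniques.

```python
# Sketch of incremental triple updating for a KG data layer: new facts
# are added, changed attribute values overwrite stale ones, and retracted
# facts are deleted, without rebuilding the whole graph.
class IncrementalKG:
    def __init__(self):
        self.triples = set()          # (subject, predicate, object)

    def apply_batch(self, additions, retractions):
        """Apply one incremental batch instead of a full rebuild."""
        self.triples -= set(retractions)
        for s, p, o in additions:
            # Simplification: treat every property as single-valued,
            # so a new value replaces any old one for (s, p).
            self.triples = {t for t in self.triples if (t[0], t[1]) != (s, p)}
            self.triples.add((s, p, o))

kg = IncrementalKG()
kg.apply_batch([("breaker_B3", "status", "closed")], [])
kg.apply_batch([("breaker_B3", "status", "open")], [])   # overwrite, not duplicate
print(kg.triples)   # {('breaker_B3', 'status', 'open')}
```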
CONCLUSIONS AND FUTURE DIRECTIONS
Over the past few years, SGs have grown from an idea into a world-wide recognised topic. The massive data gathered contain much knowledge, yet much electric power knowledge cannot be expressed, managed, and analysed sufficiently and effectively, and is rarely applied in power grid scenarios. KGs provide a feasible and practical means of combining electric power big data processing with robust semantic technologies, taking the first steps towards a new generation of artificial intelligence in SGs. Moreover, KGs offer a unified knowledge representation to reflect electric power concepts, facts, events, and entities of power systems via ontology learning, professional power vocabularies, and so on. Hence, operational activities such as querying, decision-making, appeals analysis, and high-quality personalised services can be performed by KG4SGs. It is increasingly important to build an integrated platform for electric power knowledge acquisition, mining, representation, fusion, and SGs business applications.
In this paper, we have reviewed the current techniques of knowledge mining, particularly the typical methods for leveraging electric power information with KGs. After that, the definition and advantages of KG and the motivation for applying KG in SGs are discussed. Then, a BKG platform towards SGs is proposed, which discovers, integrates, manages, and analyses massive electric power knowledge from a semantic perspective. Specifically, the overall framework, the specific module design, and some more advanced techniques of the BKG platform towards SGs are elaborated. As for electric power KG construction, three layers and their related technologies are presented in detail: electric power knowledge acquisition, sub-domain KGs of different business scenarios or power domains, and the entire electric power KG. Finally, this paper explores three typical applications of KG in SGs scenarios, namely the intelligent customer service robot system, the automatic operator for decision-making in dispatching, and intelligent operation and maintenance of equipment based on multi-modal KG.
The proposed BKG platform improves knowledge expression, understanding, and sharing, and supports SGs scenario applications for knowledge retrieval, decision recommendation, and data visualisation aimed at business services. Verifying the proposed KG platform is important future work. The combination of KG techniques, NLP, and electric power knowledge will promote the intellectualisation of power networks. Although many basic attempts have been made to obtain electric knowledge and manage operation and maintenance expertise, they have been neither perfect nor in-depth. With increasingly complicated power systems and the continuous penetration of intermittent renewable energy, the exploration, expansion, and automation of KG4SGs may lead to fruitful results in this field. In the future, the control and operation strategies of the power grids will be automatically obtained and recommended by KG4SGs according to the panoramic perception of environment and state information, supporting the autonomous and automatic operation of the power system network.
There are several ongoing or future research directions for utilising KG in SGs, which are as follows.
Combination of model, data, and knowledge. Operation and control of the power system depend on physical mechanisms and modelling analysis, whereas knowledge map technology is a knowledge analysis method that brings accessible, explainable, and structured information from the semantic perspective. In addition, a great number of data-driven machine learning algorithms have been well applied in SGs for their superiority in mining vital information from massive data, such as load and renewable resource forecasting and equipment fault diagnosis. The performance of machine learning may be improved by incorporating additional knowledge into the training process. Therefore, in actual fault handling or dispatch, physical mechanism-based modelling analysis should be combined with machine learning-based recognition, prediction, and fault diagnosis, KG technology, and expert knowledge and experience, so as to capture the underlying, ambiguous, and complex relations between entities hidden in electric power big data and realise the comprehensive analysis and treatment of grid faults.

New techniques for electricity knowledge. A majority of existing research on knowledge extraction, knowledge representation, and knowledge reasoning is based on web data. However, when it comes to electric power systems, there are relatively sparse web documents related to SGs scenarios, and pattern-based methods cannot be applied directly in SGs. Moreover, many rules and experiences are hidden in extensive professional background knowledge, which creates great difficulties in knowledge extraction and mining for the electric power domain. Hence, how to accurately extract entities and relations related to professional electricity knowledge is particularly crucial [186]. Furthermore, how to effectively fuse manually defined knowledge with knowledge obtained by automatic machine learning in SGs is of great research value and difficulty, especially regarding the unified representation of knowledge, the resolution of knowledge conflicts, and the updating of knowledge. In the future, it is indispensable to construct large-scale professional lexicons to provide more semantic correlation information and improve the accuracy of knowledge extraction, knowledge representation, and knowledge reasoning towards SGs. Meanwhile, an important step is to study new techniques for electricity knowledge based on existing KG technologies to tackle these issues.
KGs towards SG scenarios. Since big data in SGs is characterised by big volume, wide variety, varying velocity, veracity, and value [177, 191], this task will encounter several difficulties: the information and knowledge related to SGs scenarios are characterised by diversity, correlation, synergy, and hiddenness; owing to the unique characteristics and complexities of different SG scenarios, it is still difficult to apply KG technologies in SGs effectively and efficiently; and the existing KG technologies are too cumbersome to deploy in actual SGs scenario applications [22]. Therefore, in terms of the different business demands of power systems, future work should focus on effective and efficient methods to implement and establish large-scale sub-domain KGs towards SGs scenarios.
Interpretability of existing models. Given that most machine learning models applied in SGs (especially DNNs) are typical black-box models, their algorithms cannot give convincing explanations for their results. This issue may be explored with KG, an AI technology with rich semantic and logical expression ability. Once the electric power KG is established, all electricity knowledge is heavily interconnected through nodes and edges. Machine learning models on graphs allow, to some extent, improved interpretability when transforming computer-aided experiments into decision-making analysis [62].
Figure 5(a) illustrates the entities and attributes of 'main transformer', 'oil storage cabinet', and 'oil leakage', and the visual relations between them. It is obvious that 'oil storage cabinet is a component of main transformer', 'oil storage cabinet has oil leakage', and 'the speed of oil leakage is 5 drops per minute'. The information about the transformer can be propagated on this graph. Figure 5(b) displays another transformer defect record on the KG: the oil conservator of the #1 main transformer body has serious oil leakage.

FIGURE 5: In this example, the set E contains contains and symptom; the set L contains power equipment, transformer component, transformer body component, and transformer defect; the set P contains ids, transformer part, body part, speed, and severity; the set U contains 1, 1, 1, 5 drop/min, and serious. The mapping e gives, for example, e(contains) = (Transformer, Transformer body); the mapping l gives, for example, l(Transformer) = Power equipment and l(Transformer body) = Transformer component; the mapping p gives, for example, p(Transformer) = (ids, 1) and p(Oil leakage) = (speed, 5 drop/min).

Until now, a knowledge graph towards smart grids can be established through the three above-mentioned key steps, as shown in Figure 6, and consists of four vital knowledge graphs/bases: the entity knowledge graph of power equipment, the concept knowledge graph, the fault case knowledge base, and the business logic knowledge base. The entity KG consists of all kinds of primary and secondary power equipment in SGs; the voltage, power, frequency, other attributes, and connection relationships of the equipment can be updated according to the real-time operation data of the power grid. The concept KG is built by abstracting the entity KG and is more in line with human understanding; it can be used to normalise and refine fact expression. The business logic knowledge base integrates fault disposal/maintenance plans, scheduling rules, operation and maintenance principles, cause analyses, disposal points, and other information, in which the relevant information can be well queried and reasoned over by incorporating path algorithms. When a new accident occurs, the fault knowledge base is updated by recording and saving the related fault information. Moreover, the disposal history and operational suggestions of similar cases can be pushed by calculating case similarities at the conceptual layer. In addition, the decision-making strategies related to fault diagnosis and disposal are provided by the logical operation base and the rule base, which are constructed manually or semi-automatically by extraction from historical defect records and expert experience. An example of the knowledge graph for a transformer body defect is depicted in Figure 7.
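The formal description above maps naturally onto a small property graph. The sketch below encodes the transformer body defect example, as far as it can be reconstructed from the fragments here, with node labels realising the mapping l, node properties the mapping p, and labelled edges the relations in E; the encoding itself is a toy illustration, not the paper's storage format.

```python
# The transformer body defect example encoded as a tiny property graph.
# Node labels realise the mapping l, node properties the mapping p, and
# labelled edges the relation set E. Reconstruction is partly assumed.
nodes = {
    "Transformer":      {"label": "Power equipment",            "ids": 1},
    "Transformer body": {"label": "Transformer component",      "ids": 1},
    "Oil conservator":  {"label": "Transformer body component", "ids": 1},
    "Oil leakage":      {"label": "Transformer defect",
                         "speed": "5 drop/min", "severity": "serious"},
}
edges = [
    ("Transformer",      "contains", "Transformer body"),
    ("Transformer body", "contains", "Oil conservator"),
    ("Oil conservator",  "symptom",  "Oil leakage"),
]

print(nodes["Transformer"]["label"])                     # l(Transformer)
print([(h, t) for h, r, t in edges if r == "contains"])  # e(contains)
print(nodes["Oil leakage"]["speed"])                     # p(Oil leakage)
```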
4.3 Knowledge computation and management

This part of the BKG platform towards SGs undertakes several critical tasks [92]: (a) allocating storage space for different electric power information and intelligent function units, and updating these dynamically; (b) taking control of the running of all functions of the intelligent applications and the whole platform; (c) taking responsibility for platform security; and (d) in particular, monitoring and controlling the life cycle of the platform's evolution.
TABLE 1: Defect records information of power equipment
B. Service tickets and monitoring data
Equipment nameplate: equipment number, name, equipment type, commissioning time, commission age, manufacturer number, etc.
Manufacturer: manufacturer number, manufacturer name, production date, etc.
Equipment defect: equipment number, component/part, occurrence time, defect description, technical reason, responsibility for defect, defect treatment, etc.
Glycyrrhinic acid and probiotics alleviate deoxynivalenol-induced cytotoxicity in intestinal epithelial cells
Deoxynivalenol (DON) is one of the most prevalent mycotoxin contaminants and poses a serious health threat to animals and humans. Previous studies have found that supplementing probiotics or glycyrrhinic acid (GA) individually could degrade DON and alleviate DON-induced cytotoxicity. The present study used an orthogonal design to investigate the effect of combining GA with Saccharomyces cerevisiae (S. cerevisiae) and Enterococcus faecalis (E. faecalis) on alleviating IPEC-J2 cell damage induced by DON. The results showed that the optimal counts of S. cerevisiae and E. faecalis significantly promoted cell viability. The optimal combination for increasing cell viability was 400 µg/mL GA, 1 × 10⁶ CFU/mL S. cerevisiae, and 1 × 10⁶ CFU/mL E. faecalis (termed GAP), which not only significantly alleviated DON toxicity but also achieved the highest degradation rate of DON (34.7%). Moreover, DON exposure significantly increased IL-8, Caspase 3, and NF-κB contents and upregulated the mRNA expressions of Bax, Caspase 3, and NF-κB and the protein expressions of Bax, TNF-α, and COX-2, whereas GAP addition significantly reduced the aforementioned genes and proteins. Furthermore, GAP addition significantly increased the mRNA expressions of Claudin-1, Occludin, GLUT2, and ASCT2, and the protein expressions of ZO-1, Claudin-1, and PepT1. It was inferred that the combination of GA, S. cerevisiae, and E. faecalis had a synergistic effect on enhancing cell viability and DON degradation, and could protect cells from DON-induced damage by reducing DON cytotoxicity, alleviating cell apoptosis and inflammation via inhibition of the NF-κB signaling pathway, improving intestinal barrier function, and regulating nutrient absorption and transport. These findings suggest that GAP may have potential as a dietary supplement for livestock or humans exposed to DON-contaminated food or feed. Supplementary Information The online version contains supplementary material available at 10.1186/s13568-023-01564-5.
Introduction
Mycotoxins are a series of toxic secondary metabolites produced by fungi that frequently contaminate feed, cereal crops, and foods worldwide, causing cell damage, sickness, and even death in domestic animals, as well as cancer in humans (Richard 2007). Insufficient understanding of mycotoxin contamination, due to undeveloped mycotoxin detection technologies (Schelstraete et al. 2020), has led to a serious underestimation of the harm to human health and animal production (Pitt and Miller 2017). According to the DSM World Mycotoxin Survey in 2021, deoxynivalenol (DON), fumitremorgin, and zearalenone are the most prevalent mycotoxin contaminants in raw cereal grains in China, with DON being the most common, accounting for 87% (https://www.biomin.net/solutions/mycotoxin-survey/). DON is a trichothecene B mycotoxin, also known as "vomitoxin" because of its emetic effect on organisms, especially swine. DON has acute and chronic toxicity, including cytotoxicity (Wang et al. 2016; He et al. 2021), immunotoxicity (Pestka and Smolinski 2005; Faeste et al. 2022), and intestinal toxicity (Pinton and Oswald 2014; Huang et al. 2021). Animals exposed to DON usually suffer from nausea, vomiting, anorexia, abdominal pain, diarrhea, and other symptoms (Shen et al. 2021). Moreover, long-term exposure to DON can lead to immune suppression, malnutrition, and slow growth. Therefore, it is crucial to develop effective measures to reduce DON residues in animals and mitigate the harm it causes. Developing an effective substance to prevent these damages is an urgent issue that needs to be addressed.
At present, there are only a few methods available for the safe and efficient detoxification of DON, with nutritional regulation and probiotics being the most common. Glycyrrhinic acid (GA) is an extract from glycyrrhiza that has been proven to have anti-inflammatory, immunomodulatory, and anti-oxidative properties (Bentz et al. 2019; Afkhami-Poostchi et al. 2020; Akutagawa et al. 2019). Studies show that GA can improve the growth and meat quality of piglets (Alfajaro et al. 2012) and regulate autophagy to alleviate acute lung injury caused by lipopolysaccharides (Qu et al. 2019). In our previous study, we found that GA could alleviate DON-induced oxidative stress, inflammatory response, and apoptosis through the TNF and NF-κB signaling pathways in IPEC-J2 cells (Xu et al. 2020c). On the other hand, probiotics are considered a substitute for antibiotics in farming due to their benefits to the gut barrier and immune system (Garcia et al. 2018). Our preliminary study showed that the combination of compound probiotics with berberine could improve the health of piglets, enhance immunity, and reduce diarrhea rates (Xu et al. 2020d). Studies have reported that probiotics such as Lactobacillus, Enterococcus faecalis (E. faecalis), Bifidobacteria, and yeast, as well as some compound probiotics, can effectively degrade mycotoxins (de Souza et al. 2020; Alassane-Kpembi et al. 2018). Our preliminary research also confirmed the alleviative effects of Saccharomyces cerevisiae (S. cerevisiae) on DON-induced inflammation (Chang et al. 2017). However, the combined effect of GA and compound probiotics in alleviating DON-induced cytotoxicity has not yet been investigated.
Overall, this study aimed to find the best combination and ratio of GA, S. cerevisiae, and E. faecalis to effectively reduce the toxicity of DON in animal feed. By using an orthogonal design, we aimed to optimize the compatibility of GA and probiotics to create a safer feed for animals. This study provides useful information for the production of animal feed that is safe and healthy for consumption.
Probiotics preparation
Enterococcus faecalis (E. faecalis, CGMCC 1.2135) and Saccharomyces cerevisiae (S. cerevisiae, CGMCC 2.1542) used in the experiment were purchased from the China General Microbiological Culture Collection Center (CGMCC), Beijing, China. E. faecalis and S. cerevisiae were incubated in MRS and YPD liquid media, respectively, according to a previous report. The fermentation liquids of the above probiotics were harvested after 36 h of culture, quantified by plating serial dilutions and measuring colony-forming units (CFU), and then centrifuged at 8000 r/min for 5 min; the supernatant was collected, sterilized with a 0.22 μm Minisart high-flow filter, and stored at 4 °C for further use. The centrifuged cells were resuspended in an equal volume of high-glucose DMEM without serum and antibiotics. The fermentation liquid, supernatant, and cells were diluted to different concentrations (viable counts of 1 × 10², 1 × 10³, 1 × 10⁴, 1 × 10⁵, and 1 × 10⁶ CFU/mL) with high-glucose DMEM without serum and antibiotics.
Cell culture
The cells were cultured in complete medium, comprising high-glucose DMEM supplemented with 10% FBS and 1% penicillin-streptomycin, in a humidified incubator at 37 °C with 5% CO₂.
Cell viability
IPEC-J2 cells were seeded into 96-well plates at a density of 1 × 10⁴ cells/well (100 μL per well) and incubated for 24 h. Then the culture medium was removed, and the cells were washed twice with PBS. Next, the cells were incubated for 6 h with GA at concentrations of 50, 100, 200, 400, and 800 μg/mL, or with the supernatant, cells, or fermentation liquid of E. faecalis and S. cerevisiae at viable counts of 1 × 10², 1 × 10³, 1 × 10⁴, 1 × 10⁵, and 1 × 10⁶ CFU/mL, with or without 0.5 μg/mL DON. GA and DON were diluted with high-glucose DMEM without serum and antibiotics. After all treatments, the cells were washed and incubated in serum-free medium containing 0.5 mg/mL MTT at 37 °C with 5% CO₂ for 4 h. Subsequently, the supernatant was removed, 150 μL DMSO was added to each well, and the plates were gently shaken for 15 min. The absorbance was measured at 490 nm with an ELx 800 microplate reader (BIO-TEK Instruments Inc., Winooski, VT, USA).
Orthogonal experimental design and repeatability test validation
Based on the results of the single-factor experiments, the viable counts of E. faecalis and S. cerevisiae and the concentration of GA were selected as experimental factors. An L9(3⁴) orthogonal design was selected to optimize the combination of the three substances. Here, L denotes the orthogonal table; 9 is the total number of experimental groups; 3 is the number of levels for each factor; and 4 is the maximum number of factors the table can accommodate. The design of factors and levels is shown in Table 1; a sketch of the array appears below.
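For readers unfamiliar with the notation, the sketch below enumerates the standard L9(3⁴) array and maps its first three columns onto the three factors, using the factor levels reported later in the text. Which physical factor was assigned to which column in the actual study is an assumption here.

```python
# Standard Taguchi L9(3^4) orthogonal array (9 runs, 4 columns, 3 levels),
# with the first three columns assigned to the factors of this study.
# The column-to-factor assignment is an assumption for illustration.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

levels = {
    "GA":  {1: 200,   2: 400,   3: 600},    # ug/mL
    "Sc":  {1: 10**4, 2: 10**5, 3: 10**6},  # S. cerevisiae, CFU/mL
    "Ef":  {1: 10**4, 2: 10**5, 3: 10**6},  # E. faecalis, CFU/mL
}

for run, (a, b, c, _err) in enumerate(L9, start=1):
    # The fourth column is left unassigned (error column).
    print(f"run {run}: GA={levels['GA'][a]} ug/mL, "
          f"Sc={levels['Sc'][b]:.0e} CFU/mL, Ef={levels['Ef'][c]:.0e} CFU/mL")
```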
ELISA assay
IPEC-J2 cells were seeded at a density of 5 × 10⁵ cells/well in 6-well plates until the cell fusion rate reached 80%, and then incubated with the different treatments for 6 h. Thereafter, the cell supernatants of the different treatments were collected and centrifuged at 12,000 rpm for 5 min. The concentrations of IL-8, Caspase 3, and NF-κB were measured using enzyme-linked immunosorbent assays (ELISA) according to the manufacturer's instructions. The absorbance was determined at 450 nm using an ELx 800 microplate reader (BIO-TEK Instruments Inc., Winooski, VT, USA).
Quantitative real-time PCR and western blotting analysis
IPEC-J2 cells (5 × 10⁵ cells/well) were seeded in 6-well plates and allowed to culture for 24 h, and then incubated with the four treatments for 6 h. Total RNA or protein was extracted with Trizol reagent (Takara) or RIPA buffer (EpiZyme Biotechnology, Shanghai, China) according to the manufacturer's instructions, and then subjected to qRT-PCR or western blotting as previously described (Xu et al. 2020a). The primers are detailed in Additional file 1: Table S1.
Statistical analysis
All data are expressed as mean ± standard deviation (SD). Differences between groups were determined by one-way ANOVA using SPSS 20.0 software, and Duncan's multiple range test was used for multiple comparisons. P < 0.05 indicates a significant difference, while P > 0.05 indicates no significant difference.
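As a sketch of the ANOVA step (the paper used SPSS 20.0), the snippet below runs a one-way ANOVA on made-up viability values for three groups; the numbers are purely illustrative.

```python
# Sketch of the one-way ANOVA step with illustrative viability data
# (three groups, n = 3); the values are made up for demonstration.
from scipy.stats import f_oneway

control = [100.0, 98.5, 101.2]
don     = [62.3, 60.1, 64.8]
gap_don = [85.4, 83.9, 87.2]

f_stat, p_value = f_oneway(control, don, gap_don)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
# P < 0.05 would indicate a significant overall group effect; Duncan's
# multiple range test (run in SPSS in the paper) would then locate the
# pairwise differences.
```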
Effects of supernatant, cells and fermentation liquid of S. cerevisiae on cell viability in DON-induced IPEC-J2 cells
As shown in Fig. 1a-c, the supernatant, cells, and fermentation liquid of S. cerevisiae had no toxicity to IPEC-J2 cells. Compared with the control group, the cell viability was significantly increased when the cells of S. cerevisiae were 1 × 10⁴, 1 × 10⁵, and 1 × 10⁶ CFU/mL (P < 0.05), and the supernatant of S. cerevisiae had no significant effect on cell viability (P > 0.05). In addition, compared with the DON alone group, the addition of 1 × 10⁶ CFU/mL cells or fermentation liquid of S. cerevisiae significantly increased cell viability (P < 0.05), while the supernatant had no significant effect (P > 0.05). Therefore, S. cerevisiae cell counts of 1 × 10⁴, 1 × 10⁵, and 1 × 10⁶ CFU/mL were selected for the subsequent experiments. Similarly, cell viability was significantly increased (P < 0.05) when the supernatant, cells, and fermentation liquid of E. faecalis were 1 × 10⁶ CFU/mL, respectively, compared to the control group. Furthermore, compared with the DON alone group, the additions of 1 × 10⁶ CFU/mL cells and of 1 × 10⁵ and 1 × 10⁶ CFU/mL supernatant of E. faecalis prominently enhanced cell viability (P < 0.05). Hence, E. faecalis cell counts of 1 × 10⁴, 1 × 10⁵, and 1 × 10⁶ CFU/mL were selected for the subsequent orthogonal experiment. Figure 3 shows that different concentrations of GA significantly increased cell viability (P < 0.05) compared with the control group, with viability reaching its maximum at a GA concentration of 400 µg/mL. Compared with the DON alone group, the addition of 200 µg/mL and 400 µg/mL GA significantly increased cell viability. Therefore, GA concentrations of 200, 400, and 600 µg/mL were selected for the subsequent orthogonal experiment.
Optimization of S. cerevisiae, E. faecalis and GA on cell viability and DON degradation rate
According to the results in Additional file 1: Tables S2 and S3, the order of influence of the orthogonal factors on cell viability was determined. Tables 2 and 3 showed that the optimal level combination was A2B3C1, indicating 400 µg/mL GA, 1 × 10⁶ CFU/mL S. cerevisiae, and 1 × 10⁴ CFU/mL E. faecalis, while the best combination from the range analysis was A2B3C3, indicating 400 µg/mL GA, 1 × 10⁶ CFU/mL S. cerevisiae, and 1 × 10⁶ CFU/mL E. faecalis. By verifying the above two results and their interactions (Table 4), it was found that the combination of 400 µg/mL GA, 1 × 10⁶ CFU/mL S. cerevisiae, and 1 × 10⁶ CFU/mL E. faecalis significantly increased cell viability and alleviated the toxicity of DON (P < 0.05). In addition, the degradation rate of DON by this combination was 34.7%, significantly higher than that of the other combinations (P < 0.05). Therefore, 400 µg/mL GA, 1 × 10⁶ CFU/mL S. cerevisiae, and 1 × 10⁶ CFU/mL E. faecalis were selected as the optimal combination for the subsequent experiments.
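The text does not spell out how the degradation rate was computed; a plausible formula, assuming it is the fraction of the initial DON removed from the medium, is sketched below, with a hypothetical back-calculation of the reported 34.7%.

```python
# Plausible degradation-rate formula (an assumption, not stated in the text):
# percentage of DON removed relative to the initial concentration.
def degradation_rate(initial_don: float, residual_don: float) -> float:
    """Percentage of DON removed relative to the initial concentration."""
    return (initial_don - residual_don) / initial_don * 100

# A residual of ~0.3265 ug/mL from the 0.5 ug/mL challenge would give the
# reported 34.7% (hypothetical back-calculation).
print(f"{degradation_rate(0.5, 0.3265):.1f}%")   # 34.7%
```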
Effects of GA and compound probiotics (GAP) on IL-8, Caspase 3 and NF-κB contents in DON-induced IPEC-J2 cells
As shown in Fig. 4a-c, compared with the control group, the DON alone group significantly increased the contents of IL-8, Caspase 3, and NF-κB (P < 0.05). Compared with the DON alone group, GAP supplementation significantly decreased IL-8 and NF-κB contents (P < 0.05), and the GPD group significantly decreased the NF-κB content (P < 0.05), while there was no significant difference in the content of Caspase 3 (P > 0.05).
Effects of GAP on apoptosis, tight junction protein and nutrient transport-related gene expressions in DON-induced IPEC-J2 cells
The relative mRNA abundances of Bax, Caspase 3, and NF-κB in the DON group were significantly upregulated compared with the control group (P < 0.01), whereas they were significantly downregulated by GAP addition (P < 0.05). In addition, DON exposure remarkably downregulated the expressions of Bcl-2 and Claudin-1 compared with the control group (P < 0.01), while the GPD group significantly upregulated the expressions of Claudin-1 and Occludin compared with the DON alone group (P < 0.05) (Fig. 5a-f). Figure 5g-i shows that the DON alone group dramatically downregulated PepT1 expression compared with the control group (P < 0.05), while GAP addition significantly upregulated its expression (P < 0.05). Although there was no significant difference in the expressions of GLUT2 and ASCT2 between the DON alone group and the control group, the GPD group significantly increased their expressions (P < 0.05).
Effects of GAP on inflammation, apoptosis, tight junction protein and nutrient transport-related protein expressions in DON-induced IPEC-J2 cells
The results in Fig. 6a-d indicated that, compared with the control group, DON alone significantly increased the protein expressions of Bax, TNF-α, and COX-2 (P < 0.05) and significantly decreased the protein expressions of ZO-1 and Claudin-1 (P < 0.05), but there was no significant difference in PepT1 protein expression (P > 0.05). Compared with the DON alone group, the GAP group significantly decreased the protein expressions of Bax, TNF-α, and COX-2 (P < 0.05) and significantly increased the protein expressions of ZO-1 and Claudin-1 (P < 0.01) and of PepT1 (P < 0.05). The protein expressions of Bax and COX-2 were significantly decreased in the GPD group (P < 0.05), while the protein expressions of ZO-1, Claudin-1, and PepT1 were significantly increased (P < 0.05).
Discussion
The contamination of DON has caused extensive damage to animal health and production. In recent years, plant extracts and probiotics, including yeast and lactic acid bacteria, have played an increasingly important role in animal production. In the present study, an orthogonal design was adopted to optimize the ratio of GA, S. cerevisiae and E. faecalis to obtain the best combination of these three substances for degrading DON and alleviating its cytotoxicity.
Probiotics have been widely used in livestock and poultry diets as good alternatives to antibiotics owing to their prominent advantages of safety, lack of pollution, and absence of residues (Pandey et al. 2015). Probiotics act mainly by inhibiting the growth and reproduction of pathogenic bacteria in the gastrointestinal tract, strengthening the mucosal barrier, improving gastrointestinal function, regulating the microecological balance of the gastrointestinal tract, enhancing host immunity, purifying the farming environment, and degrading mycotoxins, thereby promoting animal production (Gaggia et al. 2010; Jha et al. 2020). Yeast and lactic acid bacteria are the most widely used probiotics as animal feed additives. Studies have shown that they can alleviate DON-induced porcine intestinal damage (Weaver et al. 2013; Ma et al. 2022; Maidana et al. 2021). At the same time, our previous research also found that S. cerevisiae has a certain repair effect on DON-induced IPEC-J2 cell damage, increasing cell viability and protecting cell integrity. Furthermore, S. cerevisiae was shown to protect against DON-induced inflammation by reducing the expression of downstream inflammatory cytokines and the activation of the p38 mitogen-activated protein kinase (p38 MAPK) pathway (Chang et al. 2017). E. faecalis is a facultative anaerobic gram-positive bacterium that can improve animal growth performance, intestinal microflora, nutrient absorption and immunity (Thacker 2013; Maake et al. 2021; Zhang et al. 2019). In addition, E. faecalis exerts anti-inflammatory effects by modulating the NF-κB, MAPK, and PPAR-γ1 pathways (Oc et al. 2018). Previously, our team found that E. faecalis had a certain capacity to degrade DON in vitro. In the present study, we investigated the effects of the supernatant, cells, and fermentation liquid of S. cerevisiae and E. faecalis on cell viability. The results showed that the microbes used in this study were non-toxic to cells, and a certain count of viable bacteria (1 × 10⁶ CFU/mL) could significantly promote cell proliferation and reduce the toxic effects of DON. However, the effect of the supernatant on cell viability was not significant. Research has shown that Lc. paracasei LHZ-1 isolated from yogurt achieved a 40.7% reduction of DON by the cell wall, whereas only 10.5% and 8.9% reductions were achieved by the culture supernatant and cellular lysate, respectively (Zhai et al. 2019), indicating that the effect of the lactic acid bacteria supernatant on cell viability under DON treatment was limited. This study demonstrated that S. cerevisiae and E. faecalis have protective effects on cells.
(Fig. 5 caption: effects of GAP on inflammation, apoptosis, tight junction protein and nutrient transport-related gene expressions in DON-induced IPEC-J2 cells; panels a-i show the expressions of Bax, Bcl-2, Caspase 3, Claudin-1, Occludin, NF-κB, PepT1, ZO-1, GLUT2 and ASCT2; values are mean ± SD, n = 3.)
As mentioned above, yeast and lactic acid bacteria play important roles in animal production, and their combination provides greater benefits than individual addition. The interaction between mycotoxins and the functional groups of the cell surface results in mycotoxin adsorption onto the cell wall structure. Yeast cell walls, which contain many different adsorption sites represented by polysaccharides, proteins, and lipids, play a crucial role in the detoxification process (Holanda et al. 2020; Faucet-Marquis et al. 2014). Since mycotoxin adsorption is physical (based on ion exchange and complexation) (Huwig et al. 2001), mycotoxin contamination has been shown to have little influence on yeast activity (Nathanail et al. 2016). Lactic acid bacteria, on the other hand, mainly rely on the peptidoglycan and extracellular polysaccharide of the cell wall to adsorb toxins, thereby reducing the toxicity of mycotoxins. Our previous studies indicated that GA can promote cell proliferation and reduce DON cytotoxicity (Xu et al. 2020c), and both S. cerevisiae and E. faecalis have certain effects on degrading DON. Therefore, the combination of compound probiotics and plant extracts could potentially have higher efficacy in DON degradation and animal production. In this study, we optimized the combination of S. cerevisiae, E. faecalis and GA using an orthogonal experiment and explored the effects of this combination on the degradation of DON and the alleviation of DON-induced cytotoxicity. The results showed that a certain amount of S. cerevisiae and E. faecalis could significantly promote IPEC-J2 cell proliferation, and there was a synergistic effect among different concentrations of S. cerevisiae, E. faecalis and GA. Specifically, optimal efficiency was obtained with the combination of 400 µg/mL GA, 1 × 10⁶ CFU/mL S. cerevisiae and 1 × 10⁶ CFU/mL E. faecalis. This combination significantly improved cell viability, reduced the toxicity of DON, and maximized the degradation rate of DON. These findings are consistent with other studies demonstrating a significant increase in mycotoxin detoxification with the combined use of compound probiotics compared to individual addition (Huang et al. 2018).
To further illuminate the alleviative mechanism of GAP against DON, we quantified changes in gene and protein expression related to inflammation, apoptosis, tight junctions, and nutrient transport. The results revealed that DON significantly increased the contents of IL-8, Caspase 3 and NF-κB, upregulated the mRNA expressions of Bax, Caspase 3 and NF-κB, and upregulated the protein expressions of Bax, TNF-α and COX-2. However, GAP addition significantly reduced the expression of the aforementioned genes and proteins, indicating that GAP might alleviate DON-induced inflammation and apoptosis by inhibition of the NF-κB signaling pathway. In addition, DON exposure impaired intestinal barrier function by downregulating the ZO-1 and Claudin-1 proteins, whereas GAP significantly upregulated their expressions, in accordance with a previous report. Thus, we assume that the combination of GA and compound probiotics can alleviate the cytotoxicity induced by DON. Probiotics can reduce the damage caused by pathogens, drugs and other factors and increase intestinal tightness (Petrova et al. 2022). Compound probiotics can prevent cell inflammation and apoptosis by maintaining the stable expression of Claudin-1. In the present study, the combination of GA and compound probiotics increased the expression of Claudin-1, indicating that GAP could protect intestinal epithelial cells from DON damage. PepT1, GLUT2 and ASCT2 are common and representative nutrient transporters. PepT1 is an oligopeptide transporter that mainly exists on the brush border membrane of small intestinal epithelial cells. It transports and absorbs the dipeptides and tripeptides produced by protein degradation, playing an important role in maintaining the stability of the internal environment of the organism and in the gastrointestinal absorption of drugs (Mertl et al. 2008). GLUT2 and ASCT2 are mainly responsible for intestinal glucose absorption and neutral amino acid transport, respectively (Xu et al. 2020b). We found that GAP significantly increased the expression of PepT1, GLUT2 and ASCT2, which is beneficial to the transport and absorption of nutrients in the intestine and alleviated the damage of DON to nutrient transport. The results revealed that GAP could enhance intestinal barrier function and improve nutrient transport and absorption to mitigate DON-induced cytotoxicity.
In conclusion, our study suggests that the combination of GA and compound probiotics can enhance the synergistic effect on cell viability and DON degradation and protect IPEC-J2 cells from DON damage by reducing DON cytotoxicity and alleviating inflammation and apoptosis via inhibition of the NF-κB signaling pathway, as well as by improving intestinal barrier function and regulating nutrient transport and absorption. This study provides a theoretical basis for the mechanism of action of GA and compound probiotics as potential protective agents against DON-induced cell damage, and also provides a reference for the future use of GA and compound probiotics to prevent intestinal injury in humans and animals.
Experimental Studies of Drying Pineapple with An Active Indirect Solar Tunnel Dryer in Malaysia
Abundant sunshine and the tropical climate of Malaysia make pineapple a suitable fruit to be grown in this country. However, to obtain a longer shelf life, lighter weight for transportation and less storage space, drying of pineapple has been a common preservation method. Open-sun drying used to be the most common method of preserving agricultural products; nevertheless, due to its disadvantages, solar drying technology has become an alternative method for drying vegetables, fruits, spices, herbs, etc. The main purpose of this paper is to evaluate the performance of an active solar tunnel dryer (ASTD) for drying sliced pineapples. The air circulation system in this dryer is based on forced convection. In the active solar tunnel dryer, the inlet airflow temperature was raised by a corrugated absorber plate. During the experiment, the minimum, maximum and average absorber thermal efficiencies were 13.1%, 24.4% and 19.8%, respectively. The inlet temperature ranged from 26°C to 38°C, and the elevated temperature at the absorber outlet ranged from 34°C to 75°C. Relative humidity (RH) changed with the irradiance intensity; the RH was reduced when the air passed through the absorber plate. The average inlet humidity was 54% while the average outlet humidity was 36%. During the 9-hour drying process, the pineapple moisture content was reduced from 89% to 12.5% and its weight decreased from 5 kg to 0.62 kg. The peak sun hours were 5.7 hours, and the loading density was 1.51 kg/m².
Introduction
About thirty percent of food products are lost or damaged globally, along the chain from producer to consumer. Reducing postharvest losses (PHL) of agricultural products is one strategic method to increase income [1]. Drying crops, defined as the process of moisture removal through simultaneous heat and mass transfer, is the most common preservation method and has been used for decades. Drying agricultural products has several advantages, such as longer shelf life, reduced mass and volume (convenience in transportation), and access to a wide range of products outside the country of origin. Nonetheless, the drying process is costly and energy intensive, requiring 10% to 15% of the overall energy used in industry [2]. Fossil fuel, natural gas, biomass, electricity, and solar thermal energy are common energy sources consumed for drying. Solar energy can reduce fossil fuel cost by 27%-80% for drying food products. Solar drying is known to be an inexpensive technique for drying and preserving food products. Using solar energy to dry agricultural products has become a potentially viable substitute for fossil fuel in countries where solar radiation is abundant [3-5]. Pineapple, considered one of the important local fruits of Malaysia, suffers wastage during harvest and shipping. Since the shelf life of fresh fruit is short, dried pineapple offers a much longer storage duration.
Solar Dryer Classification
Proper design of solar dryers plays a key role in meeting the particular drying requirements of agricultural products. Various types of solar dryers have been designed and developed all over the world to optimize the drying process and reduce the operating cost. Figure 1 shows the classification of solar dryers. Solar dryers are mainly divided into open sun dryers (OSD) and controlled solar dryers [6]. The open sun dryer, known as one of the oldest, cheapest, and simplest types of dryer, has several disadvantages [7]. According to Figure 1, controlled solar dryers are classified by airflow convection mode and by exposure to insolation. Natural convection and forced convection are the two main airflow convection modes [8]. In natural mode there is no motor or fan, while in a forced convection system a blower or fan is an essential component of the dryer. Exposure to insolation can be direct or indirect. In direct exposure, the load is directly exposed to solar radiation. For drying agricultural products that are sensitive to direct radiation, an indirect drying system is a suitable alternative [9,10].
Solar Tunnel Dryer (STD)
The solar tunnel drying system is used for drying a wide range of food-grade and non-food-grade agricultural products. The STD is convenient for transportation due to its small scale and is suitable for remote areas. The semi-cylindrical shape of the STD increases radiation absorption and reduces reflection. The structure of the STD is not complex and mainly consists of a thermal absorber, crop trays, inlet and outlet air vents, and a fan. The drying chamber is usually covered by plastic sheets, such as UV-resistant polycarbonate, or by glass [11]. In addition, the thermal collector element is generally matt black to maximize radiation absorption. Figures 2 and 3 show solar tunnel dryers of different capacities and the main components of this type of dryer. Radiation incident on solar tunnel dryers is typically indirect, and the air convection system operates in active mode. Numerous experimental studies have shown that the use of solar tunnel driers leads to a considerable decrease in drying time in comparison to open sun drying [12]. As illustrated in Figure 2, the collector can be placed before the drying chamber in small-scale dryers; for larger-scale dryers, the collector is placed along the length of the dryer [13]. Solar tunnel dryers have been used for drying agricultural products such as pineapple in numerous studies. An experimental study at Bangladesh Agricultural University used a solar tunnel dryer with a loading capacity of 120-150 kg for drying sliced pineapples. The drier used in the study consisted of a transparent plastic-covered flat plate collector and a drying tunnel connected in series to supply hot air directly into the drying tunnel using two DC fans operated by a solar module. The absorber outlet temperature was between 34.1°C and 64°C, and the maximum solar irradiance recorded was 580 W/m². The total fresh pineapple weight was 150 kg; the initial and final moisture contents were 87.32% and 13.13%, respectively. The total drying duration was 3 days, each day from 9 a.m. to 4 p.m. The 10 mm thick pineapple slices were treated with sulfur dioxide for 30 minutes. The researchers found that drying time was reduced considerably using the solar tunnel dryer compared to sun drying [14,15].
Objective of Study
Abundant sunshine and the tropical climate of Malaysia have made pineapple a suitable fruit to be grown in this country. Pineapple plantation areas in Malaysia have been expanded to meet the increasing demand for its products. The export value of pineapple increased by 109% by 2020, rising from RM155 million to RM320 million. Despite the large economic profit, the export of this high-moisture-content fruit poses a significant problem [16-18].
Malaysia, like other tropical countries, has access to plentiful solar radiation. Solar thermal energy, as a form of renewable energy, has countless applications in equatorial regions. Despite the abundance of solar energy, solar thermal energy is not fully harnessed and utilized in various sectors, including but not limited to agriculture, heating systems, and industry. Drying agricultural products for preservation has been one of the main applications of solar thermal energy for decades [19,20]. To dehydrate pineapple via solar thermal energy at the micro scale and extend its shelf life, the solar tunnel drying system is a suitable method [21].
Methodology
The active solar tunnel dryer used in this experimental study was designed, fabricated, and tested to dry sliced pineapples in the open-air solar laboratory of the Solar Energy Research Institute, Universiti Kebangsaan Malaysia. The latitude, longitude, and altitude of the experimental site are 2.5513°N, 101.4618°E, and 44 m above sea level, respectively.
Active Solar Tunnel Dryer (ASTD)
The active solar tunnel dryer (ASTD) used in this study is based on indirect exposure to insolation and a forced convection system. The dryer structural frame is made of aluminum extrusion profile. The ASTD dimensions are 446 cm long, 122 cm wide, and 80 cm high. The ASTD consists of two main sections: the head section, known as the Fluid Terminal section (FTs), and the chamber section, known as the Drying Chamber section (DCs). The thermal absorber plate and crop bed trays are placed in the drying chamber section; the solar exhaust fan, inlet vent, and transmission tube are placed in the Fluid Terminal section. Figure 4 displays the positions of the DCs, FTs and inlet air valves, whereas Figure 5 illustrates the inner components as well as the airflow direction. The solar tunnel dryer generates its own energy via a photovoltaic module to power the fan. As mentioned before, the two sections of the drying system consist of several internal and external parts; Table 1 describes the components and their functions in detail.
Airflow Through ASTD
The ASTD is designed to absorb solar radiation with a solar collector and use forced convection to pass the hot air mass from the solar collector into the drying chamber. The solar exhaust fan creates the airflow movement in the ASTD. Ambient air is pulled into the absorber inlet via the inlet valve and transmission tubes. The temperature of the ambient air increases as it passes through the absorber because the absorber plates are heated by solar radiation. As the airflow temperature rises, its relative humidity decreases. When the heated air passes through the perforated trays, it absorbs the moisture of the load, carries it to the exhaust fan outlet, and thus dehydrates the load.
Measurement Systems
Temperature probes, humidity meters, and an irradiance intensity meter were the main data acquisition sensors used in this experimental study to collect data. Table 2 lists the function, location, and specification of each sensor. Correct positioning of the sensors helped to obtain more accurate data. Figure 6 displays the locations of the thermocouple, hygrometer and pyranometer sensors and the variable symbols. The ambient air temperature and humidity are equal to the absorber inlet temperature and humidity. In addition, the absorber outlet temperature and humidity are equivalent to the tray area temperature and humidity.
Drying Pineapple
Pineapple has a high moisture content; thus, drying this tropical fruit requires a reliable and capable dryer. The active solar tunnel dryer is a suitable dryer type as it operates on solar power and, due to its curved structure, has very low reflection of incident solar radiation. The experiment was conducted from 9 a.m. to 6 p.m., during which 5 kg of fresh pineapple was dried. Pineapples were peeled, cored, trimmed, and cut uniformly into slices with a thickness of 2-3 mm (according to FAO recommendations). The sliced pineapples were placed on load trays with proper spacing for air circulation. No pretreatment was applied during the experiment. Figure 7(a) illustrates how the sliced pineapples were arranged on the load tray before the drying process, whereas Figure 7(b) shows the dried pineapples.
Drying Performance
Drying efficiency (η_d) is computed using Eq. (1). Its value is the ratio between the amount of energy used for drying, based on evaporation, and the energy available from the solar source:

η_d = (L_v · m_w) / (I_c · A_c),  (1)

where L_v is the latent heat of the water evaporated, m_w is the weight of moisture evaporated, I_c is the solar insolation on the collector surface, and A_c is the area of the collector.

The drying rate, or evaporative capacity, expresses the capability of the dryer to extract water from the drying sample within a specific period. It is determined by three parameters, as shown in Eq. (2), and its unit is kg/h:

DR = (W_i − W_f) / t_d,  (2)

where t_d is the drying time in hours and W_i and W_f are the initial and final load weights in kg.

Moisture content (MC) is one of the significant parameters of the dryer's load; the MC percentage is the percentage of water in the crop:

MC = 100 · (W_t − W_d) / W_t,  (3)

where W_t is the load mass at any time and W_d is the dried load mass.

Thermal efficiency expresses the efficiency of the thermal absorber; Eq. (4) was used to calculate it:

η_th = ṁ · c_p · (T_o − T_i) / (I_c · A_c),  (4)

where ṁ is the mass flow rate, c_p is the specific heat of air, T_o is the absorber outlet temperature, T_i is the absorber inlet temperature, I_c is the solar insolation on the collector surface, and A_c is the collector area. A short numerical sketch of these metrics is given below.
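To make the bookkeeping of Eqs. (1)-(4) concrete, the short Python sketch below evaluates the four performance metrics; the constant L_V and all function names are our own illustrative assumptions rather than code from the study.

```python
L_V = 2.26e6  # latent heat of evaporation of water (J/kg), standard value

def drying_efficiency(m_w, I_c, A_c):
    """Eq. (1): evaporation energy over solar energy on the collector."""
    return (L_V * m_w) / (I_c * A_c)

def drying_rate(w_initial, w_final, t_d):
    """Eq. (2): evaporative capacity in kg/h for a drying time t_d in hours."""
    return (w_initial - w_final) / t_d

def moisture_content_pct(w_t, w_d):
    """Eq. (3): percent of water in the load (wet basis)."""
    return 100.0 * (w_t - w_d) / w_t

def absorber_thermal_efficiency(m_dot, c_p, t_out, t_in, I_c, A_c):
    """Eq. (4): useful heat gain of the absorber over incident insolation."""
    return (m_dot * c_p * (t_out - t_in)) / (I_c * A_c)

# Values reported in this experiment: 5 kg dried to 0.62 kg over 9 h.
print(drying_rate(5.0, 0.62, 9.0))            # ~0.49 kg/h
print(moisture_content_pct(5.0, 5.0 * 0.11))  # 89% initial moisture content
```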
An enormous number of solar drying systems have been fabricated and utilized all around the globe. In terms of load capacity, solar dryers are divided into micro, medium, and industrial scale. Table 3 lists the specifications of the solar tunnel drying system, the assisted heat pump system, and the open sun drying system. Among these drying systems, the solar tunnel dryer is more reliable because the crops are dehydrated in an enclosed space with airstream uniformity, zero-cost energy, medium capital cost, and low maintenance. Since the solar tunnel dryer operates entirely on solar energy, it is suitable for remote and sunny regions [22,23].
Results and Discussion
The drying process of sliced pineapples in the tropical climate of Malaysia took 9 hours. The contributing factors, relative humidity and temperature, were recorded before and after the thermal absorber using a data logger. The ambient (absorber inlet) temperature and absorber outlet temperature trends are plotted in Figure 8: the absorber inlet temperature ranged between 26°C and 38°C, while the absorber outlet temperature ranged from 34°C to 75°C. Table 4 presents data on the load and the performance of the dryer. As Figure 8 shows for the inlet and outlet temperatures of the thermal absorber, the absorber outlet airflow quickly penetrated the drying tray area above the crops; accordingly, the absorber outlet temperature was taken to be equal to the drying chamber temperature. The average temperature of the drying chamber was 57.6°C. As illustrated in Figure 9, when the irradiance rose, the relative humidity decreased; at 10 a.m. the inlet humidity was reduced by 31%, the largest change observed. Sufficient airflow velocity is one of the main requirements of the drying process. In the ASTD, the airflow velocity depends on the solar irradiance intensity because the fan is powered by the solar module. Figure 10 demonstrates that the inlet airflow velocity and the irradiance follow similar patterns. Figure 12 shows the declining pineapple moisture content. Once the pineapple was exposed to the hot air stream, the product began to lose moisture and weight. There is a direct relationship between the moisture content percentage and the drying load weight. The initial moisture content of the pineapple was 89%, meaning that 11% of the initial pineapple weight was solid content and 89% was water. The product weight curve was obtained from the moisture content curve.
Conclusions
An active solar tunnel dryer was used in this study to dry 5 kg of pineapples. The quality of the dried pineapples was very high, and the product was fully protected from rain, dirt, dust and other pollutants. The active solar tunnel dryer was able to reduce the moisture content of the sliced pineapple from 89% to 12.5% within 9 hours. The solar irradiance peaked at 830 W/m² at 12 p.m., with an average of 578 W/m², and the PSH was 5.7. The maximum and minimum temperature differences between the absorber outlet and inlet were 37°C and 36°C, occurring at 12 p.m. and 6 p.m., respectively. The ambient (absorber inlet) relative humidity ranged between 46% and 63%, and the absorber outlet relative humidity ranged between 20% and 50%. The results showed that the drying rate was high and that the active solar tunnel dryer is a suitable type of dryer for drying pineapples in tropical regions.
Aleukemic Extramedullary Blast Crisis as an Initial Presentation of Chronic Myeloid Leukemia with E1A3 BCR-ABL1 Fusion Transcript
Right neck swelling and pain occurred in a 49-year-old man. A blood count showed a slight increase in platelet count without leukemoid reaction. After a biopsy of the cervical mass and bone marrow aspiration, a diagnosis of extramedullary blast crisis (EBC) of chronic myeloid leukemia (CML) was made. Fluorescence in situ hybridization (FISH) analysis showed a BCR-ABL1 fusion signal, but the results of real-time polymerase chain reaction (RT-PCR) for the major and minor BCR-ABL1 transcripts were negative. We identified a rare e1a3 BCR-ABL1 fusion transcript. Administration of dasatinib resulted in disappearance of the extramedullary tumor. This is the first reported case of CML-EBC with the e1a3 transcript. An aleukemic extramedullary tumor can be the initial presentation of CML.
Introduction
The Philadelphia (Ph) chromosome, which results from a reciprocal translocation between chromosomes 9 and 22, is the hallmark of chronic myeloid leukemia (CML) (1). As a result of the translocation, the BCR-ABL1 fusion gene is formed, and the BCR-ABL1 fusion protein causes constitutive tyrosine kinase activation and drives cell proliferation. There are variations in the breakpoints of the BCR and ABL1 transcripts. Major BCR-ABL1 (a fusion of BCR exon 13 or 14 and ABL1 exon 2) is positive in most cases of CML, while minor BCR-ABL1 (a fusion of BCR exon 1 and ABL1 exon 2) is positive in some cases of CML and in some cases of acute B-cell lymphoblastic leukemia (B-ALL) (2). We experienced a case with cervical lymphadenopathy, which turned out to be an extramedullary blast crisis (EBC) of CML. A fusion of BCR exon 1 and ABL1 exon 3 (e1a3) was detected in this case. So far, 26 cases of e1a3 BCR-ABL1-positive leukemia have been reported in the literature, with ALL in 17 cases, CML in 8 cases, and acute myeloid leukemia (AML) in 1 case. An aleukemic extramedullary mass has not been reported so far, and ours seems to be a very rare case. Here we report a case of CML with an e1a3 fusion variant and EBC as the initial presentation. Blood tests at the first visit showed a slightly elevated platelet count but no other obvious abnormal findings (Table 1). A computed tomography (CT) scan revealed several swollen lymph nodes in the neck and supraclavicular fossa. Enhanced scanning showed that the lymph nodes had low-density areas and appeared necrotic (Fig. 1A). 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) showed FDG uptake (standardized uptake value: 4-6) at the sites of lymphadenopathy (Fig. 1B). A biopsy of the cervical mass was performed. Histological examination revealed dense aggregates of atypical mononuclear cells surrounded by massive coagulation necrosis, suggestive of a hematolymphoid malignancy (Fig. 1C). Immunohistochemical examination showed that the mononuclear cells had the following phenotypes: CD43+, CD56+, CD68+, CD123+, MPO+, and lysozyme+. A histopathological diagnosis of myeloid neoplasm was made. Bone marrow (BM) aspiration was then performed, and examination of the BM revealed a hypercellular marrow with a serial increase of maturing granulocytes and without evidence of lymphoma (Fig. 2A, B). Fluorescence in situ hybridization (FISH) testing was positive for a BCR-ABL1 fusion signal (82.4%, 412/500) (Fig. 2C). Real-time polymerase chain reaction (RT-PCR) of the BM was negative for major and minor BCR-ABL1, but the primer set for minor BCR-ABL1 amplified a band smaller than the expected 320-bp band (Table 2 and Fig. 2D), and Sanger sequencing of the PCR product revealed an e1a3 BCR-ABL1 fusion transcript (Fig. 2E, F). G-banding of the cervical mass could not be obtained due to poor proliferation, but the FISH test showed 70% BCR-ABL1-positive cells. Based on these findings, a definitive diagnosis of CML was made. The percentage of blast cells in the peripheral blood increased to 10% about a month after the first visit, meeting the blast-count criteria for the accelerated phase. On the other hand, the extramedullary leukemic mass formation was categorized as a blast crisis under the European LeukemiaNet criteria (3). The patient was administered dasatinib at a dose of 140 mg/day.
After the start of treatment, shrinkage of the cervical mass and disappearance of blast cells in the peripheral blood were observed, and the percentage of BCR-ABL1 transcripts was followed carefully. Six months after the start of treatment, the BCR-ABL1-positive fraction determined by FISH had decreased to 0.8%. According to the European LeukemiaNet criteria, the response was classified as a complete cytogenetic response (CCyR) and as optimal at that time.
The e1a3 BCR-ABL1 transcript lacks ABL1 exon 2 and therefore lacks the SRC homology 3 (SH3) domain encoded by ABL1 exon 2 (30). Since the SH3 domain negatively regulates the SH1 domain, which is the kinase region, a deficiency of the SH3 domain is thought to promote tumorigenesis (31). However, only a few cases of CML lacking exon 2 have been reported so far, and the characteristics and prognosis of the clinical course have not been clarified.
A further concern with these rare BCR-ABL1 transcripts is that they can be overlooked in routine tests. A primer corresponding to the ABL1 exon 2 sequence may fail to detect a BCR-ABL1 transcript with a breakpoint in ABL1 exon 3, as in this case. If major BCR-ABL1 or minor BCR-ABL1 is negative despite the existence of the Ph chromosome, it is necessary to consider the presence of a rare BCR-ABL1 transcript variant. In this case, it was possible to identify the e1a3 BCR-ABL1 transcript by Sanger sequencing. Clarifying the fusion transcript is important for finding a minimal residual disease marker, although a method for its quantitative evaluation has not yet been established.
In conclusion, Ph-positive leukemia with an e1a3 fusion transcript is a very rare disease, but there may be more potential cases, and an accumulation of cases will deepen the understanding of the characteristics of the disease. EBC of CML should be considered as a differential diagnosis even in a case that shows an almost normal CBC.
The authors state that they have no Conflict of Interest (COI).
The Radon and Hilbert transforms and their applications to atmospheric waves
The Radon and Hilbert transforms and their applications to convectively coupled waves (CCWs) are reviewed. The Hilbert transform is used to compute the wave envelope, whereas the Radon transform is used to estimate the phase and group velocities of CCWs. Together, they provide an objective method to understand CCW propagation. Results reveal phase speeds and group velocities for fast waves (mixed Rossby-gravity, westward and eastward inertio-gravity, and Kelvin) that are consistent with previous studies and with Matsuno's equatorial wave dispersion curves. However, slowly propagating tropical depression-like systems and equatorial Rossby waves exhibit wave envelopes that propagate faster than the individual wave crests, which is not predicted by dry shallow water theory.
| INTRODUCTION
The phase velocity of any wave indicates its phase propagation, whereas the group velocity indicates the energy dispersion of the wave. Physically, the group velocity represents the propagation speed of wave envelopes. As convectively coupled waves (CCWs) play a crucial role in bridging weather and climate (e.g., Kiladis et al., 2009), it is important to understand both their phase propagation and their energy dispersion features.
The Radon transform (RT) is a mathematical technique commonly used for image reconstruction and analysis. It was first introduced by Radon (1917) and has since found many applications in various fields, including the atmospheric sciences. In this field, the RT has been used to calculate the phase speed of oceanic and atmospheric waves. One of the earliest applications of the RT in atmospheric science was in the study of gravity waves by Lindzen and Holton (1968). They used the RT to analyze the phase propagation of atmospheric waves by projecting the wave field onto a series of lines at different angles. By measuring the slope of the resulting projections, they were able to estimate the phase speed of the waves. Since then, the RT has been used in several studies to estimate the phase speed of CCWs (Mayta & Adames, 2021; Mayta et al., 2021; Yang et al., 2007b). In a study by Yang et al. (2007b), the RT was used to investigate the horizontal phase speed of convectively coupled waves. They found that the RT was able to provide accurate estimates of the phase speed even in complex wave fields.
On the other hand, few studies have sought to estimate the group velocity of tropical disturbances. Early studies by Wheeler et al. (2000) estimated the group velocity of CCWs by examining time-longitude diagrams. In Adames and Kim (2016), the authors calculated the group velocity of the Madden-Julian Oscillation (MJO) in terms of the propagation of the local extrema of the individual wave crests. Later on, Chen and Wang (2018) calculated the wave envelope of the observed MJO-related precipitation anomalies by obtaining the absolute value of the Fourier transform coefficients, as documented by Hayashi (1982).
An alternative way to calculate the wave envelope is through the application of the Hilbert transform (Liu, 2012), a mathematical tool used to calculate the analytic signal associated with a given signal. In atmospheric science, this method has been used extensively to extract key characteristics of mid-latitude transients. For instance, Zimin et al. (2003) pioneered the utilization of the Hilbert transform in the context of mid-latitude Rossby waves. By isolating the analytic signal, the authors were able to identify the distinct properties of Rossby wave packets, such as their amplitude, phase, and spatio-temporal evolution. Subsequent studies (e.g., Souders et al. (2014); Wolf and Wirth (2017); Schoon and Zülicke (2018); among others) further improved the accuracy of Rossby wave packet calculations. In Mayta and Adames (2021), the authors used the Hilbert transform to calculate the wave envelope of two-day waves and understand their coupling to Amazon squall lines (see their fig. 1).
Although the Radon and Hilbert transforms each have distinct advantages in extracting relevant features and analyzing wave propagation patterns, they have not been utilized together to calculate the group velocity. The Hilbert transform is a computationally simple tool for wave envelope estimation, while the Radon transform objectively calculates wave propagation. Together, they can provide an objective estimate of the phase and group velocities. Thus, the goal of this study is to provide a succinct discussion of the two transforms and showcase their usefulness in studying atmospheric waves. The structure of this paper is as follows: in Section 2, we discuss the data and the theoretical basis of the Radon and Hilbert transforms. In Section 3, we examine the methods applied to all convectively coupled waves over the Indo-Pacific warm pool that are well documented in Wheeler et al. (2000) and Kiladis et al. (2009). A concluding discussion is offered in Section 4.
| CLAUS brightness temperature
Satellite-observed brightness temperature (T_b) data are used as a proxy for tropical convection. The data are obtained from the Cloud Archive User System (CLAUS) satellite dataset (Hodges et al., 2000), which has eight-times-daily global fields of T_b from July 1983 to June 2009, extended through 2018 using the Merged IR dataset from NOAA (see Sakaeda et al. (2020) for more detail). In this study, we use CLAUS data at four-times-daily temporal resolution and 0.5° × 0.5° horizontal resolution. The CLAUS T_b data extend from January 1984 through December 2018.
| Wave-type filtering of CLAUS T_b
To isolate the signal of CCWs, we used filters based on Fourier space-time decomposition, following the method proposed by Wheeler and Kiladis (1999) and using the same frequency-wavenumber boxes documented in Kiladis et al. (2009). This is accomplished in the wavenumber-frequency domain by retaining only those spectral coefficients within a specific range corresponding to the spectral peaks associated with a given mode. We also incorporated filters for a specified range of equivalent depth, as indicated in Table 1, for the mixed Rossby-gravity (MRG), eastward inertio-gravity (EIG), and westward inertio-gravity (WIG) waves. This ensures a clear separation of the MRG and EIG signals within the antisymmetric component, as previously outlined by Kiladis et al. (2016), and prevents the mixing of MRG and TD-type signals within the westward-propagating domain; MRG waves typically have lower frequencies and longer zonal wavelengths than TD-type disturbances. For lower-frequency disturbances, such as Kelvin and equatorial Rossby waves, we do not constrain the equivalent depth, as these waves encompass a broader range of equivalent depths (Kiladis et al., 2009; Takayabu, 1994; Yang et al., 2007a). The filter settings for the period, the wavenumber k, and the equivalent depth h_e are detailed in Table 1. A minimal sketch of this space-time filtering is given below.
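As an illustration of the masking step described above, the Python sketch below filters a (time, longitude) field in the wavenumber-frequency domain. It is a simplified stand-in for the full Wheeler-Kiladis procedure (no equivalent-depth cut and no symmetric/antisymmetric decomposition), and the function name, its arguments, and the example band are our assumptions, not the paper's code.

```python
import numpy as np

def wk_filter(field, dt_days, dx_deg, period_days, wavenumbers, eastward=True):
    """Keep only the FFT coefficients of a (time, longitude) field inside a
    period / zonal-wavenumber box, then invert back to physical space."""
    nt, nx = field.shape
    spec = np.fft.fft2(field)
    freq = np.fft.fftfreq(nt, d=dt_days)          # cycles per day
    wnum = np.fft.fftfreq(nx, d=dx_deg / 360.0)   # global zonal wavenumber
    F, K = np.meshgrid(freq, wnum, indexing="ij")
    fmin, fmax = 1.0 / period_days[1], 1.0 / period_days[0]
    kmin, kmax = wavenumbers
    band = ((np.abs(F) >= fmin) & (np.abs(F) <= fmax) &
            (np.abs(K) >= kmin) & (np.abs(K) <= kmax))
    # With numpy's FFT sign convention, a wave cos(kx - wt) moving eastward
    # puts its energy where the frequency and wavenumber indices have
    # opposite signs; westward waves put it where the signs agree.
    direction = (F * K < 0) if eastward else (F * K > 0)
    return np.real(np.fft.ifft2(np.where(band & direction, spec, 0.0)))

# Example: eastward zonal wavenumbers 1-14 with periods of 2.5-20 days
# (a commonly used Kelvin band) on a synthetic 4x-daily, 0.5-degree grid.
tb = np.random.default_rng(1).standard_normal((400, 720))
kelvin_like = wk_filter(tb, dt_days=0.25, dx_deg=0.5,
                        period_days=(2.5, 20.0), wavenumbers=(1, 14))
```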
| Hilbert transform: Calculation of the wave envelope
To study the energy dispersion of the CCWs, the wave envelope E(λ, t) is calculated using the Hilbert transform. This method produces results similar to those obtained following Hayashi's (1982) method, but is arguably more computationally simple. Given a longitude- and time-dependent field f(λ, t), the wave envelope is obtained via the following formula:

E(λ, t) = | f(λ, t) + i ℋ{f}(λ, t) |,  (1)

where

ℋ{f}(λ, t) = (1/π) P ∫ f(λ′, t) / (λ − λ′) dλ′  (2)

is the Hilbert transform of f in space (P denotes the Cauchy principal value). While the real part of the analytic signal inside the modulus in Equation (1) is the original time series of the wave, the imaginary part is a copy of the original input time series with each of its Fourier components shifted in phase by 90°. This is why the Hilbert transform is often referred to as a 90° phase shifter. Considering that longitude varies between 0 and π (Figure 1), we compute the fast Fourier transform (FFT) of the series and keep the complex coefficients, which is much more efficient numerically. Once the wave envelope E(λ, t) is computed (represented by the contours in the schematic depiction of Figure 1), its time series is used to calculate the group velocity by applying the Radon transform detailed in the next section.
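In practice the envelope of Eq. (1) amounts to one line of array code. The sketch below is a minimal illustration, assuming scipy's FFT-based `hilbert` applied along the longitude axis; the wave-packet check at the end is our own synthetic example.

```python
import numpy as np
from scipy.signal import hilbert

def wave_envelope(f_lt):
    """Envelope E(lambda, t) of Eq. (1): modulus of the analytic signal,
    with the Hilbert transform taken along the longitude axis (axis 1).
    scipy builds the analytic signal via the FFT, the same "keep the
    complex coefficients" shortcut described in the text."""
    return np.abs(hilbert(f_lt, axis=1))

# Check: for a modulated plane wave, the envelope recovers the modulation.
lon = np.linspace(0, 2 * np.pi, 512, endpoint=False)
packet = np.exp(-0.5 * ((lon - np.pi) / 0.5) ** 2)   # Gaussian envelope
f = (packet * np.cos(20 * lon))[None, :]             # one time slice
print(np.allclose(wave_envelope(f)[0], packet, atol=0.05))  # ~True
```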
| Radon transform: Calculation of the phase speed (c_p) and group velocity (c_g)
The Radon transform is employed in this study to calculate the phase speed and group velocity from a time-longitude diagram. To better understand this technique, a schematic depiction of the Radon transform is shown in Figure 1. The Radon transform is applied to f for the phase speed and to E for the group velocity. For the sake of simplicity, the derivation is shown only for the phase speed. The Radon transform R(s, θ) of f(λ, t) is the integral of f along the line L oriented at angle θ, with a range of angles from 0° to 180°. It is thus defined as a projection of f(λ, t) on L as follows:

R(s, θ) = ∫ f(λ, t) du,  (3)

where u is the direction orthogonal to L and s is the coordinate on L. Therefore, for a given θ, the Radon transform is a function of the line coordinate s. Rewriting Equation (3) in terms of the coordinates λ and t gives

R(s, θ) = ∫ f(s cos θ − u sin θ, s sin θ + u cos θ) du.  (4)

When the lines are perpendicular to the alignment of the crests and troughs of the wave (top of Figure 1), the projection gives the number of image pixels along the projection lines. The zeros of the original and rotated coordinates are where E and f have maximum amplitude. Therefore, the angles (θ_p and θ_g) perpendicular to that projection give the direction of propagation of the wave and of its energy dispersion in the time-longitude diagram, and thus the phase speed and group velocity, respectively. It is worth noting that the magnitude and direction of wave propagation are associated with the maximum variance, which is represented by the dashed lines at the bottom of Figure 1. Furthermore, uncertainty can be incorporated by taking into account a 95% probability of the occurrence of maximum variance, which is represented by the shading in the schematic figure.
Finally, the phase speed and the group velocity are computed using the value of θ for which ∫ R²(s, θ) ds is a maximum (θ_max), as follows:

c = (2πa cos ϕ / 360°) · (Δx / Δt) · cot(θ_max − 90°),  (5)

where a is the earth's radius and 2πa cos ϕ / 360° is the length of one degree at latitude ϕ, with θ_max > 90° corresponding to westward and θ_max < 90° to eastward propagation. For instance, when the latitude average ranges from 2.5°S to 2.5°N (WIG), ϕ = 0°. Δx and Δt are the spatial (°) and temporal resolutions of the data grid, respectively.
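The procedure can be sketched compactly with scikit-image's `radon`, as below. The angle-to-speed conversion mirrors Eq. (5), but the slope and sign conventions depend on how the Hovmöller axes are laid out, so the snippet should be read as a hedged template rather than the authors' exact implementation; all names are ours.

```python
import numpy as np
from skimage.transform import radon

def dominant_angle(hovmoller, dtheta=0.5):
    """theta_max (degrees): the projection angle maximizing the
    sum-of-squares of the Radon transform of a time-longitude diagram
    (cf. Figure 1). Rows are time steps and columns are longitudes."""
    thetas = np.arange(0.0, 180.0, dtheta)
    sinogram = radon(hovmoller, theta=thetas, circle=False)
    return thetas[np.argmax(np.sum(sinogram ** 2, axis=0))]

def angle_to_speed(theta_max, dx_deg, dt_sec, lat_deg=0.0, a=6.371e6):
    """Eq. (5): convert theta_max into a speed in m/s, reading
    theta_max > 90 deg as westward propagation. Assumes time increases
    along rows and longitude along columns; theta_max = 90 deg
    (a stationary signal) is not guarded against."""
    m_per_deg = 2.0 * np.pi * a * np.cos(np.radians(lat_deg)) / 360.0
    return m_per_deg * (dx_deg / dt_sec) / np.tan(np.radians(theta_max - 90.0))
```

Applying `dominant_angle` to the filtered field gives θ_p^max for the phase speed, and applying it to the Hilbert envelope of the same field gives θ_g^max for the group velocity.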
| PHASE SPEED AND THE GROUP VELOCITY OF CCWs
Figure 2 gives an example of the application of the combined Hilbert and Radon transforms for MRG and EIG waves. For both waves, we used the base point (7.5°N, 177.5°E), and T_b is averaged over the same latitudes (2.5°-12.5°N) as in Wheeler et al. (2000) to allow comparison with what was found in previous studies (e.g., Wheeler et al. (2000); Kiladis et al. (2009)).
Figure 2a shows the longitude-time diagram of T_b for the westward-propagating wave signals corresponding to the MRG wave. The contours depict the corresponding longitude-time evolution of the wave envelope. Regressions were computed over the 2.5°-12.5°N range and for lag days −10 to +10. To depict the propagation speed of the wave envelope, we applied the RT to the longitude-time diagram of the envelope, as shown in the schematic depiction in Figure 1. The MRG wave moves westward over the Western Pacific at a phase speed of c_p ≈ 29.2 m s⁻¹. This phase speed is obtained from θ_p^max = 95.0° using Equation (5). An examination of the propagation of the MRG wave envelope yields a value of θ_g^max = 73.5°, which results in an eastward group velocity of c_g ≈ 8.6 m s⁻¹. These results are somewhat different from those obtained when a line from a longitude-time diagram is used to calculate c_p and c_g (see fig. 12a in Wheeler et al. (2000)). In addition, the phase speed of the MRG waves is slightly different from that in other regions (e.g., the Western Hemisphere), where the wave propagates more slowly (Mayta & Adames, 2023). The propagation features of the MRG wave vary depending on the basic state, the shape and intensity of the vertical heating profile, and the spatial location of the convective heating with respect to the wave dynamical fields (e.g., Yang et al. (2007b)).
The propagation features of the EIG wave are shown in Figure 2b. Compared to MRG waves, EIG waves have a slightly faster phase speed. The RT applied to this disturbance over the western Pacific yields a value of θ_p^max = 82.2°, which results in an eastward phase speed of c_p ≈ 30.5 m s⁻¹. A similar phase speed was also documented in Wheeler et al. (2000) over the same domain (see their fig. 16) and by Mayta and Adames (2023) over the Western Hemisphere. However, as illustrated in Figure 2b, the EIG wave envelope propagates in the same direction as the wave itself. Applying the RT to the envelope yields θ_g^max = 77.3°, which results in an eastward group velocity of c_g ≈ 11.3 m s⁻¹. It is worth noting that this c_g is twice as fast as that found by Wheeler et al. (2000).
The remaining CCWs (Kelvin waves, westward inertio-gravity waves, tropical depression-type disturbances, i.e., easterly waves, and equatorial Rossby waves) were widely documented in previous studies (e.g., Kiladis et al., 2006; Kiladis et al., 2009; Mayta & Adames, 2023; Mayta et al., 2021; Mayta et al., 2022; Wheeler et al., 2000; Yang et al., 2007b; and references therein). Table 2 summarizes the propagation properties of these remaining CCWs. To explore WIG waves, we considered the base point at 0°, 155°E, with T_b averaged over the latitudes 2.5°S-2.5°N. Over this region, the wave and its corresponding envelope propagate westward over time with θ_p^max = 95.0° and θ_g^max = 109.2°, giving c_p ≈ 29.3 m s⁻¹ and c_g ≈ 7.4 m s⁻¹, respectively. Previous studies also documented similar phase speeds for WIG waves over the Indo-Pacific region (e.g., Yu et al., 2018). For Kelvin waves, we considered the base point at 0°, 90°E, with T_b averaged over the latitudes 5°S-5°N. The wave, as in other regions, has an eastward phase speed ranging from 15 to 20 m s⁻¹. Using the referred base point and latitudinal average, we found θ_p^max = 79.7° and c_p ≈ 14.2 m s⁻¹ (Table 2). This is in agreement with previous studies showing that Kelvin waves propagate more slowly over the Indian Ocean than in other regions (Mayta et al., 2021; Roundy, 2008; Yang et al., 2007b). As theory suggests, one characteristic feature of the Kelvin wave is that it is non-dispersive; that is, the phase velocity of the wave equals the group velocity of the wave energy (Matsuno, 1966). We can see that the wave envelope propagates in the same direction and with almost the same magnitude: θ_g^max = 80.3° and c_g = 15.1 m s⁻¹. Slowly propagating disturbances, such as TD-type waves and equatorial Rossby waves, have complex propagation features (Adames, 2022; Ahmed, 2021; Sobel et al., 2001; Yang et al., 2007b). These features depend, for instance, on whether the wave is propagating over land or over the ocean, and even on whether it moves over warmer or cooler SST conditions (Kiladis et al., 2006; Mayta & Adames, 2023; Vargas Martes et al., 2023). The propagation features of TD-type disturbances over the entire oceanic tropics have recently been documented by Mayta and Adames Corraliza (2023). They found that TD-type waves move westward at phase speeds ranging from 6 to 8 m s⁻¹. African easterly waves, which exist over land, have a phase speed of approximately 10 m s⁻¹ (Kiladis et al., 2006; Vargas Martes et al., 2023). Here, we used the same base point as in Mayta and Adames Corraliza (2024) over the eastern Pacific (10°N, 100°W) to calculate c_p and c_g of the TD-type wave. For the referred domain, we found θ_p^max = 115°, which yields c_p ≈ 7.2 m s⁻¹ (Table 2). The wave envelope of TD-like waves propagates in the same direction as the crests but slightly faster (θ_g^max = 106.3° and c_g ≈ 10.5 m s⁻¹; Figure 3a). The Rossby wave propagation features are shown in Figure 3b. The RT applied to this wave results in θ_p^max = 121.3°, which yields c_p ≈ 4.2 m s⁻¹ (Table 2). Similar phase speeds were also found in Wheeler et al. (2000), even outside the Indo-Pacific warm pool region (Mayta & Adames Corraliza, 2024; Mayta et al., 2022). The wave envelope, shown as contours in Figure 3b, propagates westward at about c_g ≈ 7.1 m s⁻¹. As with TD-like waves, the results show that c_g > c_p. This implies that the wave envelope propagates almost twice as fast as the wave signal, a feature that has not been previously documented.
We also included uncertainty in the calculation of c_p and c_g (Table 2). It is important to note that the group velocity can vary significantly (e.g., ±6.3° for MRG waves). As depicted in the schematic in Figure 1, the RT approach uses a single envelope to estimate the right θ_g^max for the group velocity, explaining the wide variations. The opposite occurs with the phase speed: at a 95% probability, there is a narrow variation in θ_p^max (gray shaded region in Figure 1), indicating a closely clustered distribution around the maximum variance (e.g., ±0.7° for ER waves). While the calculation of c_g can be seen as a limitation of this approach, it remains the most accurate method for calculating propagation features.

(Table 2 caption: Characteristics of convectively coupled waves; for each wave, the base point and latitude average are listed. For most CCWs, the base point corresponds to the same as in Wheeler et al. (2000) to allow comparison. The phase speed (c_p) and group velocity (c_g) are calculated for the longitude where the wave exhibits significant values, using the maximum of the distribution of the sum-of-squares of the RT, θ_p^max and θ_g^max, respectively.)
| SUMMARY AND CONCLUSIONS
This study reviews the calculation of the phase speed and group velocity of convectively coupled waves (CCWs). We applied the combined Hilbert and Radon transforms to analyze the wave propagation characteristics. To allow comparison with previous methods, which have been widely documented in previous studies, we used the same base points and domains as in Wheeler et al. (2000).

For mixed Rossby-gravity (MRG) waves, we found a westward phase speed of ~29.2 m s⁻¹ and an eastward group velocity of ~8.6 m s⁻¹ (Figure 2a). The phase speed of MRG waves, however, can vary depending on the region (e.g., Mayta & Adames, 2023; Yang et al., 2007b). The propagation features of eastward inertio-gravity (EIG) waves are also examined, revealing a slightly faster phase speed of ~30.5 m s⁻¹ and an eastward group velocity of ~11.3 m s⁻¹ (Figure 2b).

WIG waves exhibit a westward phase speed of ~29.3 m s⁻¹ and a westward group velocity of ~7.4 m s⁻¹. Similar propagation features were also found in Wheeler et al. (2000) and in other regions such as the Amazon Basin (Mayta & Adames, 2021). Kelvin waves have an eastward phase speed ranging from 15 to 20 m s⁻¹, and the wave envelope propagates in the same direction and with the same magnitude as the phase speed (i.e., as a non-dispersive wave). Overall, Kelvin, MRG, and WIG waves exhibit phase speeds and group velocities that are consistent with dry shallow-water theory with "reduced" equivalent depths that account for convective coupling (Kiladis et al., 2009).

It is well known that midlatitude Rossby waves exhibit |c_g| > |c_p|, a result that can be readily obtained from the Rossby wave dispersion relation in the presence of mean westerly winds (Chang (1993); Grimm and Dias (1995); Fragkoulidis and Wirth (2020); among others). In the tropics, the mean winds are usually easterly. Using dry theory, one would obtain |c_g| < |c_p| for equatorial Rossby waves (see the Appendix for a detailed derivation of c_g). Equatorial Rossby waves exhibit a westward phase speed of ~4.2 m s⁻¹ and a group velocity of 7.1 m s⁻¹. Thus, |c_g| > |c_p| for equatorial Rossby waves (see Figure 3), which therefore do not follow dry theory.

Similarly, TD-type waves showed a westward phase speed of ~7.2 m s⁻¹, in agreement with recent work that found phase speeds ranging from 6 to 8 m s⁻¹ (Mayta & Adames Corraliza, 2024). Their group velocity is 10.5 m s⁻¹; thus, |c_g| > |c_p| for these waves as well. TD-type waves do not exist in Matsuno's theory but are sometimes thought of as a type of Rossby wave (see chapter 9 in Riehl (1954)), in which case the same arguments as for the equatorial Rossby wave would apply. Hence, dry theory is unable to explain the propagation characteristics of either equatorial Rossby or TD-type waves. These results highlight the limitations of dry theory when applied to these waves. Moisture mode theory (Adames, 2022; Sobel et al., 2001) may be a more reasonable starting point to understand the propagation features of these waves.
APPENDIX DRY ROSSBY WAVE DISPERSION
We begin with Matsuno's dispersion relation for equatorial Rossby waves, including a constant zonal flow ū. In the low-frequency limit it can be written as

ω = ūk − βk / [k² + (2n + 1)β/c_e],  (A.1)

where k is the zonal wavenumber, β is the meridional gradient of the Coriolis parameter, n is the meridional mode number, and c_e is the gravity-wave speed associated with the equivalent depth. From the dispersion relationship shown in Equation (A.1), the phase speed of the equatorial Rossby waves can be written as

c_p = ω/k = ū − β / [k² + (2n + 1)β/c_e].  (A.2)

If ū is westward, it follows that c_p will always be negative and the wave propagates at a speed that is slightly faster than the zonal mean flow. The group velocity (c_g ≡ ∂ω/∂k) describes the movement of the wave envelope shown in Figures 1 and 3:

c_g = ū + β [k² − (2n + 1)β/c_e] / [k² + (2n + 1)β/c_e]².  (A.3)

Using Equation (A.2), we can write the group velocity in terms of the phase speed:

c_g = c_p + β / [k² + (2n + 1)β/c_e] + β [k² − (2n + 1)β/c_e] / [k² + (2n + 1)β/c_e]².  (A.4)

If we combine the last two terms on the right-hand side, we arrive at the following relation:

c_g = c_p (westward) + 2βk² / [k² + (2n + 1)β/c_e]² (eastward).  (A.5)

Hence, since the two terms in Equation (A.5) are of opposite signs, it follows that |c_g| should be smaller than |c_p|. This result is inconsistent with the observations shown in Figure 3, which show that c_g is faster than c_p.
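The sign argument in Eq. (A.5) is easy to verify numerically. In the sketch below, the parameter values (β, c_e, the mean easterly flow, and the wavelength) are illustrative choices of ours for the n = 1 mode.

```python
import numpy as np

beta = 2.3e-11          # m^-1 s^-1, meridional gradient of f at the equator
c_e = 20.0              # m/s, assumed "reduced" gravity-wave speed
u_bar = -2.0            # m/s, mean easterly zonal flow
k = 2 * np.pi / 6.0e6   # zonal wavenumber of a ~6000 km wave

K = k**2 + 3 * beta / c_e            # k^2 + (2n + 1) beta / c_e with n = 1
c_p = u_bar - beta / K               # Eq. (A.2), westward (negative)
c_g = c_p + 2 * beta * k**2 / K**2   # Eq. (A.5): westward plus eastward part
print(c_p, c_g, abs(c_g) < abs(c_p))  # ~ -7.1, ~ -4.6, True under dry theory
```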
TABLE 1 The filter settings for the period, wavenumber k, and equivalent depth h_e (in meters) used to extract the wave signals (see Kiladis et al. (2009) and Kiladis et al. (2016) for more details).

FIGURE 1 Schematic depicting how the Radon transform is applied to estimate the MRG phase speed and group velocity in CLAUS T_b. (top) The Hovmöller diagram projects onto a plane that makes an angle θ with the x-axis. The units for time and longitude are days and degrees, respectively. (bottom) Density distribution of the sum-of-squares of the Radon transform as a function of projection angle (θ). The dominant direction, where the maximum value equals 1, is marked by a gray dashed line (phase speed) and a red dashed line (group velocity). The light red and gray shading represents the uncertainty at a 95% probability of the occurrence of maximum variance.

FIGURE 2 Longitude-time diagram of T_b anomalies (shading) associated with the T_b variation of (a) mixed Rossby-gravity (MRG) waves and (b) eastward inertio-gravity (EIG) waves at the base point 7.5°N, 177.5°E. Black contours show the longitude-time evolution of the wave envelope. T_b is averaged over the latitudes 2.5°-12.5°N. The phase speed (c_p, dashed) and group velocity (c_g, solid) lines are also shown.

FIGURE 3 As in Figure 2, but associated with the T_b variation of (a) tropical depression (TD) and (b) equatorial Rossby (ER) waves. The base point for TD is 10°N, 100°W and for ER 10°S, 150°E. T_b is averaged over the latitudes 5°-12.5°N for TD and 12.5°-2.5°S for ER.

AUTHOR CONTRIBUTIONS Víctor C. Mayta: study conception and design; analysis and interpretation of results; manuscript preparation. Ángel Adames-Corraliza: conceptualization; investigation; supervision; writing, review and editing. Qiao-Jun Lin: formal analysis; methodology.

How to cite this article: Mayta, V. C., Adames Corraliza, Á. F., & Lin, Q.-J. (2024). The Radon and Hilbert transforms and their applications to atmospheric waves. Atmospheric Science Letters, 25(5), e1215. https://doi.org/10.1002/asl.1215
Estimation of Parameters of Generalized Inverted Exponential Distribution for Progressive Type-II Censored Sample with Binomial Removals
We obtained the maximum likelihood and Bayes estimators of the parameters of the generalized inverted exponential distribution under the progressive type-II censoring scheme with binomial removals. The Bayesian estimation procedure is discussed under the squared error and general entropy loss functions, with gamma prior distributions for the model parameters. The performances of the maximum likelihood and Bayes estimators are compared in terms of their risks through a simulation study. Further, we derive an expression for the expected experiment time needed to obtain a progressively censored sample with binomial removals consisting of a specified number of observations from the generalized inverted exponential distribution. An illustrative example based on a real data set is also given.
Introduction
The one parameter exponential distribution is the simplest and the most widely discussed distribution in the context of life testing. This distribution plays an important role in the development of the theory, in that any new theory can be easily illustrated by the exponential distribution due to its mathematical tractability; see Barlow and Proschan [1] and Leemis [2]. But its applicability is restricted to a constant hazard rate, because hardly any item/system has a time-independent hazard rate. Therefore, a number of generalizations of the exponential distribution have been proposed in the earlier literature for situations where the exponential distribution is not suitable for the real problem. For example, the gamma (sum of independent exponential variates) and Weibull (power-transformed) distributions are the most popular generalizations of the exponential distribution. Most of the generalizations of the exponential distribution possess constant, nonincreasing, nondecreasing, or bathtub hazard rates.
But in practical problems, there may be situations where the data show an inverted bathtub hazard rate (initially increasing and then decreasing, i.e., unimodal). For example, in the study of breast cancer data, it is observed that mortality increases initially, reaches a peak after some time, and then declines slowly; that is, the associated hazard rate is inverted bathtub, or particularly unimodal. For such types of data, another extension of the exponential distribution has been proposed in the statistical literature, known as the one parameter inverse exponential, or inverted exponential, distribution (IED), which possesses an inverted bathtub hazard rate. Many authors have proposed the use of the IED in survival analysis; see Lin et al. [3] and Singh et al. [4]. Abouammoh and Alshingiti [5] proposed a two parameter generalization of the IED, called the generalized inverted exponential distribution (GIED), and showed that the GIED fits a real data set better than the IED on the basis of the likelihood ratio test and K-S statistics. They also discussed the maximum likelihood and least squares methods for the estimation of the unknown parameters of the GIED. Krishna and Kumar [6] studied reliability estimation based on a progressive type-II censored sample under the classical setup.
In life testing experiments, situations do arise in which the units under study are lost or removed from the experiment while they are still alive; that is, we generally get censored data from life testing experiments. The loss of units may occur due to time constraints, giving type-I censored data, in which the number of observations is random because the experiment is terminated at a prespecified time. Sometimes the experiment has to be terminated after a prefixed number of observations, and the data thus obtained are referred to as type-II censored data. Besides these, there are many uncontrolled causes resulting in the loss of intermediate observations; see Balakrishnan [7]. One such censoring procedure, the progressive type-II censoring scheme, can be described notationally as follows. Let the lifetimes of n identical units/items be studied. At the first failure X_1, called the first stage, R_1 units are removed from the remaining (n − 1) surviving units. At the second failure X_2, called the second stage, R_2 units are removed from the remaining n − R_1 − 2 surviving units, and so on, till the m-th failure is observed; that is, at the m-th stage all the remaining R_m = n − m − R_1 − ⋯ − R_{m−1} units are removed. It may be mentioned here that the number of units dropped from the test at the i-th stage must not exceed n − m − ∑_{j=0}^{i−1} R_j (with R_0 = 0), in order to ensure the availability of m observations. In many practical situations, the R_i may be random and cannot be predetermined, for example, in clinical trials. Considering the R_i to be random, Yuen and Tse [8] discussed a progressive censoring scheme with binomial removals, in which the number of random removals at each stage follows a binomial distribution with probability p. It may be noted that in clinical trials the assumption that the R_i are bounded by n − m − ∑_{j=0}^{i−1} R_j looks unrealistic, but in life testing experiments it poses no problem, as it is used only to decide the values of the R_i. Thus, R_1 (at the first stage) is considered to follow the binomial distribution with parameters n − m and p, that is, binomial(n − m, p); in the same way, R_2 (at the second stage) follows binomial(n − m − R_1, p). In general, the number of units removed at the i-th stage follows the binomial distribution with parameters n − m − ∑_{j=0}^{i−1} R_j and p, for i = 1, 2, 3, ..., m − 1.
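As a small illustration of the removal mechanism just described, the following Python sketch draws the removal counts R_1, ..., R_m for given n, m, and p; the function name and default generator are our own choices, not taken from the paper.

import numpy as np

def draw_removals(n, m, p, rng=np.random.default_rng()):
    """Draw binomial removal counts R_1..R_m for PT-II CBR:
    R_i ~ Binomial(n - m - sum(R_1..R_{i-1}), p), with R_m set so
    that all remaining units are removed at the m-th failure."""
    R = np.zeros(m, dtype=int)
    for i in range(m - 1):
        remaining = n - m - R[:i].sum()   # units still available for removal
        R[i] = rng.binomial(remaining, p) if remaining > 0 else 0
    R[m - 1] = n - m - R[:-1].sum()       # exhaust the remaining units
    return R

print(draw_removals(n=20, m=10, p=0.5))   # the counts always sum to n - m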
For further details on progressive censoring and its further developments, readers may refer to Balakrishnan [7]. The estimation of parameters of several lifetime distributions based on progressively censored samples has been studied by many authors; see Childs and Balakrishnan [9], Balakrishnan and Kannan [10], Mousa and Jaheen [11], and Ng et al. [12]. Progressive type-II censoring with binomial removals has been considered by Tse et al. [13] for the Weibull distribution and Wu and Chang [14] for the exponential distribution. Under progressive type-II censoring with random removals, Wu and Chang [15], Yuen and Tse [8], and Singh et al. [16] developed the estimation problem for the Pareto, Weibull, and exponentiated Pareto distributions, respectively.
The objective of this paper is to obtain the MLEs and Bayes estimators of the unknown parameters of the GIED under symmetric and asymmetric loss functions and to compare the performances of the competing estimators. Further, we also investigate the total experiment time on the basis of a numerical study. The rest of the paper is organized as follows. Section 2 provides a brief discussion of the progressive type-II censoring scheme with binomial removals. In the next section, we obtain the MLEs and Bayes estimators of the model parameters. The expression for the expected experiment time for progressive type-II censored data with binomial removals is derived in Section 4. The algorithm for simulating progressive type-II censored data with binomial removals is described in Section 5. The comparison study of the MLEs and Bayes estimators is given in Section 6. In Section 7, the methodology is illustrated through a real data set. Finally, conclusions are provided in the last section.
Classical and Bayesian Estimation of Parameters
Thus, the MLEs of α and λ can be obtained by simultaneously solving the nonlinear normal equations (14) and (15). From (15), we obtain the MLE of α as a function of λ, say α(λ), given in (17). Putting α(λ) in (14), we obtain the profile log-likelihood ln L_1(α(λ); λ), given in (18). Therefore, the MLE of λ can be obtained by maximizing (18) with respect to λ. Once the MLE of λ is obtained, the MLE of α follows from (17) as α(λ). This reduces the two-dimensional problem to a one-dimensional one, which is relatively easier to solve with the fixed point iteration method. For details about the fixed point iteration method, readers may refer to Rao [17].
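A minimal numerical sketch of this profiling idea is given below. It assumes the GIED density f(x; α, λ) = (αλ/x²) e^(−λ/x) (1 − e^(−λ/x))^(α−1) and cdf F(x) = 1 − (1 − e^(−λ/x))^α of Abouammoh and Alshingiti [5], under which α(λ) has a closed form; we use SciPy's bounded scalar optimizer in place of the paper's fixed point iteration, and the search bounds are our own assumption.

import numpy as np
from scipy.optimize import minimize_scalar

def neg_profile_loglik(lam, x, R):
    """Negative profile log-likelihood of GIED under progressive type-II
    censoring; alpha is profiled out in closed form for each lambda."""
    u = -np.expm1(-lam / x)                     # u_i = 1 - exp(-lambda/x_i)
    S = np.sum((R + 1) * np.log(u))             # d l / d alpha = m/alpha + S = 0
    alpha = -len(x) / S                         # alpha(lambda), always positive
    ll = (len(x) * np.log(alpha * lam) - 2 * np.sum(np.log(x))
          - lam * np.sum(1.0 / x) + np.sum((alpha * (R + 1) - 1) * np.log(u)))
    return -ll

def gied_mle(x, R):
    """MLEs (alpha, lambda) from failure times x and removal counts R."""
    res = minimize_scalar(neg_profile_loglik, bounds=(1e-4, 50.0),
                          args=(x, R), method="bounded")
    lam = res.x
    alpha = -len(x) / np.sum((R + 1) * np.log(-np.expm1(-lam / x)))
    return alpha, lam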
Taking the logarithm of L_2(p), computing the first-order derivative of ln L_2(p) with respect to p, setting ∂ ln L_2(p)/∂p = 0, and solving, we get the MLE of p in closed form. 3.2. Bayes Estimators. In order to obtain the Bayes estimators of the parameters α and λ based on progressively type-II censored data with binomial removals, we must assume that α and λ are random variables. Following Nassar and Eissa [18] and Kim et al. [19], we assume that they are independently distributed, with gamma prior pdfs π_1(α) and π_2(λ), respectively. It may be noted that the gamma priors π_1(α) and π_2(λ) are flexible enough to cover a wide variety of the experimenter's prior beliefs. Based on the assumptions stated above, the joint prior pdf of α and λ is given by (22). Combining the priors given by (22) with the likelihood given by (8), we can easily obtain the joint posterior pdf of (α, λ), and hence the respective marginal posterior pdfs of α and λ, given by (26). Usually the Bayes estimators are obtained under the square error loss function (SELF), L(φ̂, φ) = (φ̂ − φ)², where φ̂ is the estimate of the parameter φ; the Bayes estimator of φ then comes out to be E_π[φ], where E_π denotes the posterior expectation. However, this loss function is symmetric and can only be justified if over-estimation and under-estimation of equal magnitude are equally serious, which may not be true in practical situations. A number of asymmetric loss functions are available in the statistical literature. Let us consider the general entropy loss function (GELF) proposed by Calabria and Pulcini [20], defined as L(φ̂, φ) ∝ (φ̂/φ)^δ − δ ln(φ̂/φ) − 1. The constant δ, involved in (28), is its shape parameter and reflects the departure from symmetry. When δ > 0, over-estimation (i.e., positive error) causes more serious consequences than under-estimation (i.e., negative error), and conversely for δ < 0. The Bayes estimator φ̂_G of φ under GELF is given by φ̂_G = [E_π(φ^(−δ))]^(−1/δ), provided that the posterior expectation exists. It may be noted here that, for δ = −1, the Bayes estimator under the loss (27) coincides with the Bayes estimator under SELF.
Expressions for the Bayes estimators α_G and λ_G of α and λ, respectively, under GELF can be given as in (30) and (31). Substituting the posterior pdfs from (26) into (30) and (31), and then simplifying, we get the Bayes estimators α_G and λ_G of α and λ. It may be noted here that the integrals involved in the expressions for the Bayes estimators α_G and λ_G cannot be obtained analytically, and one needs numerical techniques for their computation. We therefore propose to use Markov Chain Monte Carlo (MCMC) methods. Among the MCMC techniques, we considered the Metropolis-Hastings algorithm to generate samples from the posterior distributions; these samples are then used to compute the Bayes estimates. The Gibbs sampler is an algorithm for simulating from the full conditional posterior distributions, while the Metropolis-Hastings algorithm generates samples from an arbitrary proposal distribution. The conditional posterior distributions of the parameters α and λ can be written down from the joint posterior, respectively. For the Bayes estimators, the following MCMC procedure is followed.
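The enumerated steps of this procedure did not survive extraction, so the following is a minimal random-walk Metropolis-Hastings sketch of the idea, not the paper's exact sampler. It reuses the GIED log-likelihood from the MLE sketch above and assumes gamma priors parameterized so that the prior mean is b/a, which matches the hyperparameter values quoted in Sections 6 and 7; the starting values, step size, and chain length are arbitrary choices of ours.

import numpy as np

def log_post(alpha, lam, x, R, a1, b1, a2, b2):
    """Unnormalized log posterior: GIED progressive-censoring likelihood
    times independent gamma priors (shape b, rate a; prior mean b/a)."""
    if alpha <= 0 or lam <= 0:
        return -np.inf
    u = -np.expm1(-lam / x)
    ll = (len(x) * np.log(alpha * lam) - 2 * np.sum(np.log(x))
          - lam * np.sum(1.0 / x) + np.sum((alpha * (R + 1) - 1) * np.log(u)))
    lp = (b1 - 1) * np.log(alpha) - a1 * alpha + (b2 - 1) * np.log(lam) - a2 * lam
    return ll + lp

def mh_sampler(x, R, a1, b1, a2, b2, n_iter=20000, burn=5000, step=0.2,
               rng=np.random.default_rng()):
    """Random-walk Metropolis-Hastings on (alpha, lambda)."""
    cur = np.array([1.0, 1.0])
    cur_lp = log_post(*cur, x, R, a1, b1, a2, b2)
    draws = []
    for _ in range(n_iter):
        prop = cur + rng.normal(0, step, size=2)
        prop_lp = log_post(*prop, x, R, a1, b1, a2, b2)
        if np.log(rng.uniform()) < prop_lp - cur_lp:   # accept/reject
            cur, cur_lp = prop, prop_lp
        draws.append(cur.copy())
    draws = np.array(draws[burn:])
    delta = 0.5                                  # GELF shape parameter
    gelf_est = np.mean(draws ** (-delta), axis=0) ** (-1.0 / delta)
    return draws.mean(axis=0), gelf_est          # SELF and GELF estimates

Setting delta = -1 in the GELF line reproduces the posterior-mean (SELF) estimates, mirroring the remark below.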
where N_0 is the burn-in period of the Markov chain. Substituting δ equal to −1 in step (V), we get the Bayes estimates of α and λ under SELF.
Here, [x] denotes the largest integer less than or equal to x. Then, the HPD intervals of α and λ are those intervals which have the shortest length.
Expected Experiment Times
In practical situations, an experimenter may be interested to know whether the test can be completed within a specified time. This information is important for an experimenter in choosing an appropriate sampling plan, because the time required to complete a test is directly related to cost. Under progressive censoring with a fixed number of removals, the duration of the test is given by X_m, the m-th failure time. Following Balakrishnan and Aggarwala [21], the expected value of X_m is given by (36), where R_i is the number of live units removed from the experiment at the i-th failure. Using the pdf and cdf of the GIED, the inner expectation can be evaluated explicitly and, after simplification, substituted in (36) to give the expected test time. The expected test time for progressively type-II censored data with binomial removals is evaluated by taking the expectation on both sides of (36) with respect to the R_i; that is, the average is taken over r_m = n − m − r_1 − ⋯ − r_{m−1} with P(R = r; p) given in (10). For complete sampling with n test units, the expected time is obtained by taking m = n and R_i = 0 for all i = 1, 2, ..., n in (39), and the expected time of type-II censoring is defined by the expected value of the m-th failure time. The ratio of the expected experiment times (REET) is computed between progressive type-II censored data with binomial removals (PT-II CBR) and complete sampling. The REET is at most 1; for larger values of p, the ratio approaches 1 quickly, so p ≤ 0.5 is the more relevant range for reduction of the expected test time. Hence, the expected termination time for binomial removals with p = 0.5 is taken for further calculation.
Algorithm for Sample Simulation under PT-II CBR
We need to simulate PT-II CBR samples from a specified GIED, and we propose the use of the following algorithm (a code sketch following these steps is given after the list).
(I) Specify the value of n.
(II) Specify the value of m.
(III) Specify the values of the parameters α, λ, and p.
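The remaining steps of the algorithm are lost in the extracted text. A common way to complete them, sketched below under our own assumptions, is to draw the removal counts as in Section 2 and then apply the Balakrishnan-Sandhu uniform transformation to obtain the progressively censored failure times, inverting the GIED cdf at the end; draw_removals is the helper sketched in Section 2.

import numpy as np

def gied_ppf(u, alpha, lam):
    """Inverse cdf of GIED: solves u = 1 - (1 - exp(-lam/x))**alpha for x."""
    return -lam / np.log1p(-(1.0 - u) ** (1.0 / alpha))

def pt2cbr_sample(n, m, p, alpha, lam, rng=np.random.default_rng()):
    """One PT-II CBR sample of size m from GIED(alpha, lam)."""
    R = draw_removals(n, m, p, rng)          # step (IV): binomial removals
    # Step (V): transform iid uniforms into a progressively type-II
    # censored uniform sample (Balakrishnan-Sandhu), then invert the cdf.
    W = rng.uniform(size=m)
    e = np.arange(1, m + 1) + np.cumsum(R[::-1])   # e_i = i + R_m + ... + R_{m-i+1}
    V = W ** (1.0 / e)
    U = 1.0 - np.cumprod(V[::-1])            # ordered censored uniforms
    return gied_ppf(U, alpha, lam), R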
Simulation Studies
The estimators α_M and λ_M denote the MLEs of the parameters α and λ, respectively, while (α_S, λ_S) and (α_G, λ_G) are the corresponding Bayes estimators under SELF and GELF, respectively. We compare the estimators obtained under GELF with the corresponding Bayes estimators under SELF and the MLEs. The comparisons are based on the simulated risks (average loss over the sample space) under GELF. The confidence interval (CI) and HPD interval of α and λ, and their average lengths, are also reported. It may be mentioned here that the exact expressions for the risks cannot be obtained, as the estimators are not available in nice closed forms; therefore, the risks of the estimators are estimated on the basis of a Monte Carlo simulation study of 5000 samples. It may be noted that the risks of the estimators depend on the values of n, m, p, α, λ, and δ. The hyperparameters can be chosen as follows: if we take the prior means and prior variances of α and λ, say (μ_1, v_1) and (μ_2, v_2), as two independent pieces of prior information, with the prior means set to the true values of the parameters and the prior variances reflecting different degrees of confidence (smaller, moderate, and larger variances), then the hyperparameters (a_1, b_1) and (a_2, b_2) of α and λ can be easily evaluated from the relations (a_1 = μ_1/v_1, b_1 = μ_1²/v_1) and (a_2 = μ_2/v_2, b_2 = μ_2²/v_2), respectively. In order to consider variation in the values of n, m, and v, we have obtained the simulated risks for m = 9, 12, and 15. It is noted from Table 2 that, for almost all the considered values of m, the risks of the estimators α_G and λ_G are minimum as compared with the considered competing estimators under GELF. To know the effect of variation in the values of the other parameters on the risks of the estimators of α and λ, we arbitrarily fixed δ = 0.5 (for the case when over-estimation is more serious than under-estimation) and δ = −0.25 in the reverse situation. Tables 3 and 4 present the risks of the estimators of α and λ under both losses, in the under- and over-estimation situations, when the prior mean equals the true values of the parameters, α = 2 and λ = 2, for smaller, moderate, and larger values of the prior variance; the corresponding hyperparameters are (a_1 = a_2 = 5, b_1 = b_2 = 10), (a_1 = a_2 = 2, b_1 = b_2 = 4), and (a_1 = a_2 = 1/5, b_1 = b_2 = 2/5), respectively. When the effective sample size m increases, the risks of all the estimators of α and λ under both losses decrease for δ = (−0.25, 0.5), and the simulated risks of α_G and λ_G are smaller than those of (α_S, λ_S) and (α_M, λ_M) for all the considered cases, including those where under-estimation is considered more serious than over-estimation or vice versa. Under different prior variances, along with variation of the effective sample size m, the 95% HPD and CI intervals are obtained. From Table 5, it is observed that the average lengths of the CI and HPD intervals decrease when the effective sample size increases, and the average length of the HPD interval is always less than that of the CI, as also represented in Figure 2.
Real Data Analysis
Here, we consider the real data set presented in Lawless [22], which represents the number of revolutions to failure for each of 23 ball bearings in a life test. Table 7 shows the MLEs and Bayes estimators of α and λ under SELF and GELF, together with the CI/HPD intervals, based on the complete real data set. For this real data set, Abouammoh and Alshingiti [5] indicated that the GIED provides a satisfactory fit. The data were originally discussed by Lieblein and Zelen [23]. Following the MCMC algorithm and mathematical treatment given in Section 3, and for a long run, we take a noninformative prior; the values of the hyperparameters are taken as (a_1 = 0.00001, b_1 = 0.0001) and (a_2 = 0.00001, b_2 = 0.0001), respectively. Hence, on the basis of Table 6, we use the noninformative prior under different degrees of censoring; the MLEs, the Bayes estimators, and the CI/HPD intervals of α and λ under SELF and GELF for δ = ±0.5 are presented in Tables 8 and 9, respectively. Finally, the study of Tables 8 and 9 shows that the MLEs, the Bayes estimators, and the lengths of the CI/HPD intervals of α and λ decrease as the degree of censoring decreases.
Conclusion
(1) On the basis of the simulation study, we observed that the maximum likelihood and Bayes methods can be used for estimating the parameters of the GIED under GELF and SELF based on PT-II CBR. The Bayes estimators have been obtained by the MCMC method. These methods were applied to a real data set based on the number of revolutions to failure for each of 23 ball bearings in a life test.
(2) It has been noticed from the tables, under consideration of different prior beliefs, that the estimated risks of the estimators decrease as the effective sample size increases, and that the Bayes estimates have the smallest estimated risks as compared with their corresponding MLEs. Hence, the proposed estimators (α_G, λ_G) perform better than (α_S, λ_S) and (α_M, λ_M) for different degrees of censoring, whether under-estimation is more serious than over-estimation or vice versa. The CI/HPD intervals have also been obtained, and we found the Bayes estimates to be superior to the corresponding MLEs.
(3) We have obtained and compared the expected test times under PT-II CBR and complete sampling. In Table 1, the numerical results indicate that the expected test time depends very much on the value of the removal probability. When the probability of removal is large, a slight reduction in the expected test time can be achieved only by increasing the total number of test units n.
Figure 1: Ratio of the expected test time under PT-II CBR to the expected test time under the complete sample.
Figure 2: Average HPD and CI lengths for different prior variances at a fixed confidence coefficient.
Here, we have considered n = 6, 10, and 15, and the choices of m are listed in Table 1. The various values of the removal probability considered are p = 0.05, 0.1, 0.3, 0.5, 0.7, and 0.9. The results thus obtained are summarized in Table 1. It is noted from the results that, for a fixed value of the effective sample size m, the values of REET decrease as p increases. For fixed p, the values of REET and the expected termination times under PT-II CBR and the complete sampling test increase as m increases. Moreover, from Table 1, the expected test time is influenced by the value of the removal probability p for a given effective sample size m. Thus p is an important factor in the expected test time: when p is large, the (n − m) units are removed at an early stage of the life test, which brings the observed lifetimes much closer to the tail of the failure time distribution, so that the expected test time of PT-II CBR is close to that of the complete sample. Figure 1 represents the ratio of the expected test time under PT-II CBR to the expected test time under the complete sample versus m for n = 8 and different values of the removal probability p. Finally, we numerically calculated the expected experiment times under PT-II CBR and complete sampling, derived in equations (40) and (41), respectively; the results are presented in Table 1. As mentioned earlier, an analytical comparison of the expected test times under PT-II CBR and complete sampling is very difficult; hence it is computed for different values of n, m, and p.
Table 3: Risks of the estimators of α and λ under SELF for fixed n = 15, α = 2, λ = 2, and p = 0.5, for different prior variances v of the parameters and GELF loss parameter δ = (−0.25, 0.5). Generating the samples of PT-II CBR as mentioned in Section 5, the simulated risks under SELF and GELF have been obtained for selected values of n, m, p, α, λ, and δ. The results are summarized in Tables 2-4.
Table 7: Bayes and ML estimates based on the real data set for n = 23, p = 0.05.
Table 8: Bayes and ML estimates, CI and HPD intervals for α, with fixed n = 23 and p = 0.5, under PT-II CBR.
Table 9: Bayes and ML estimates, CI and HPD intervals for λ, with fixed n = 23 and p = 0.5, under PT-II CBR.
|
2019-01-02T21:15:35.850Z
|
2013-12-29T00:00:00.000
|
{
"year": 2013,
"sha1": "933732a92e8a42a0ee4ea60a71bc030a40184696",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jps/2013/183652.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "933732a92e8a42a0ee4ea60a71bc030a40184696",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
134653257
|
pes2o/s2orc
|
v3-fos-license
|
Grain Size Information Extraction from Sand Conglomerate Reservoirs Based on FMI Data: A Case Study of the Sha4 Formation in Dongying Sag
The evaluation of lithofacies and reservoir properties is the key problem to be solved in the exploration of deep sand conglomerate. Based on FMI logging data, the study of lithofacies information has remained at the qualitative stage, with results that are mainly image-lithologic models, so it is urgent to extract gravel information through image analysis. An FMI image can be transformed into a vertical core gravel image through grayscaling, blind-area filling, image filtering, image segmentation, and gravel extraction. The maximum gravel size is selected as the lithofacies marker, and a statistical relationship with the conventional logging curves (resistivity, GR, and density) is established. This can provide a useful supplement for petrographic classification.
Introduction
As a complex formed by the rapid accumulation of multi-stage fan bodies, glutenite reservoirs often show characteristics such as a disordered internal structure, strong heterogeneity, and complicated reservoir-forming rules (Yan Jianping, 2011). Exploration practice on glutenite bodies in the north of Dongying Sag, Jiyang Depression, shows that the source rocks adjacent to the fourth member of the Shahejie Formation are abundant and that the lateral sealing of glutenite bodies provides the conditions for effective trapping. Exploration is therefore constrained, and the main challenge is effective reservoir identification. For a given area, apart from structural, fluid, and diagenetic factors, the main factors influencing the effectiveness of deep glutenite reservoirs are the lithofacies types and the reservoir physical properties they control (Li Junliang, 2008). Exploration practice shows that conglomerate reservoirs are generally poorly developed, while pebbled sandstone and gravelly sandstone reservoirs are well developed. FMI imaging logging data, as a sharp tool for studying the facies and sedimentary features of glutenite, have become indispensable in the study of glutenite facies belts. In the past, research on lithofacies information based on electrical imaging logging data remained at the qualitative stage (Ganghua, 2001; Zhang Longhai, 2006). Taking the exploration of the Yanjia glutenite body in the northern belt of Dongying as an example, and taking the vertical image processing of a single well as the case, the vertical gravel particle diameter curve of the glutenite body was extracted and conventional logging curves were calibrated against the imaging-derived scale. The resistivity, density, and combination gamma ray neutron laterolog curves were used to build a function between grain size and the sensitive logging curves in the study area, which can assist the lithofacies division of the glutenite body and provide a basis for detailed interpretation and evaluation.
Regional Geological Survey
The steep slope belt in the northern part of Dongying Depression is a near east-west tectonic belt dominated by the Chennan shovel fault. Owing to long-term tectonic movement and weathering, the steep slope belt is the main development zone of deep glutenite bodies in Dongying Sag. The Yanjia oilfield is structurally located in the eastern section of the northern steep slope, on the southern side of the Chenjiazhuang uplift, with the paleogeomorphology of gullies and ridges controlling the development and distribution of the various fan bodies (Lu Guoming, 2010). Provenance, geomorphology, sedimentary environment, and other factors give the steep slope complex lithology changes, low structural and compositional maturity, complex logging responses, and few obvious rock-electric laws. The fourth member of the Shahejie Formation in the Yanjia oilfield mainly develops deeply buried nearshore subaqueous fans, with burial depths ranging from 3,000 to 4,000 meters. The main lithologies are conglomerate, gravelly sandstone, pebbled sandstone, and medium-fine sandstone. Thin-section observation shows that the main mineral components of the gravel are quartz and feldspar, influenced by the gneiss parent rock; the potassium feldspar content in the Yanjia glutenite is relatively high. According to physical property analysis, porosity is mainly distributed between 3% and 15%, and permeability between 0.1 mD and 10 mD, so these belong to reservoirs with low porosity and low permeability [8]. Figure 2 shows statistics of oil shows under different lithologies; it can be seen that the oiliness of gravelly sandstone and pebbled sandstone is the best, followed by conglomerate. Figure 3 shows the relationship between porosity and permeability under different lithologies. From the statistics, it can be seen that the porosity of the gravel reservoir gradually decreases as the lithology becomes coarser; physical properties are related to rock structure. Together with Figure 2, this shows that the physical properties of the reservoirs have an important impact on oiliness: with better physical properties, oil shows gradually increase. Since the identification of effective reservoirs is the constraining problem for deep glutenite bodies, and the development of effective reservoirs is closely related to the lithofacies types and the properties of specific facies belts, the crux of the problem lies in the division of lithofacies, and the key to dividing lithofacies is to obtain the gravel particle size curve.
Gravel particle size extraction principles and methods
The FMI is a downhole conductivity image measured by Schlumberger's electrical imaging instrument. It measures the borehole circumference, either in array mode or by rotational sweep, through 192 button electrodes on the downhole tool to acquire longitudinal, circumferential, and radial stratigraphic information, and then obtains a two-dimensional image of the borehole wall, or a three-dimensional image within a certain depth of investigation around the borehole, by means of image processing techniques (depth correction, velocity correction, balancing) (Zhou Lunxian, 2008). The palette commonly used for FMI runs from black through brown and yellow to white; subtle changes in color represent lithologic and physical changes, with colors from dark to light representing conductivity from high to low.
In general, gravel diameter gradually decreases from the fan root to the fan middle and then to the fan end, and the heterogeneity gradually decreases. Therefore, the change of gravel size can be used as a reference index to distinguish different lithofacies. Owing to the poor conductivity of gravel, it appears as bright patches on the electrical image, obviously different from the image background. Therefore, the gravel information can be extracted by digital image processing. The specific steps are as follows:
Step 1 - Grayscale
Grayscale processing refers to the process of converting a color image into a grayscale image. Since the color of each pixel in a color image is determined by the three components R, G, and B, each with 256 possible values, processing a color image involves an enormous workload; converting to a grayscale image is therefore the usual choice. A grayscale image is a special image whose R, G, and B components are equal. Converting the image to grayscale greatly reduces the amount of calculation while still retaining the global and local chromatic and brightness-level features of the image. In this study, 8-bit grayscale images were selected, with a grayscale range of [0, 255]. Based on the RGB values of the color image, the gray value of the corresponding pixel is calculated by formula (1). After grayscaling, the image changes from Figure 4(a) to Figure 4(b).
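Formula (1) is not legible in the extracted text; a common choice, assumed here, is the standard luminance weighting. A minimal NumPy sketch:

import numpy as np

def to_gray(rgb):
    """Convert an HxWx3 uint8 RGB image to 8-bit grayscale using the
    standard luminance weights (our assumption for formula (1))."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.clip(rgb @ weights, 0, 255).astype(np.uint8)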
Step 2 - Blind Area Filling
For electrical imaging logging, there are blind zones in the image owing to the coverage limits of the tool pads, appearing as white bands, as shown in Figure 4(a). Image repair methods mainly include nonlinear filtering methods, Bayesian methods, wavelet and spectral analysis methods, texture-based repair methods, and multi-point statistical methods (Nie Tuxian, 2011). In this study, multi-point geostatistics was used to fill the blind zones. This method was originally used in the field of geological modeling to treat continuous geological entities at reservoir scale. The basis of the multi-point geostatistical method is to replace the variogram of two-point geostatistics with a training image; the basic idea is to extract characteristic patterns from the training image and then restore those patterns to the final model. Compared with the traditional two-point correlation function, this method can restore the long-range correlation in the electrical imaging log image, so the repaired image more accurately reflects the actual geological environment. The Filtersim simulation algorithm is a filter-based multi-point geostatistical approach that uses a set of filters to classify the various patterns of a training image. Training image variables can be either discrete or continuous. Based on the template classification, Filtersim populates the area to be simulated. A filter is a data template with a weight at each pixel location; when a filter is placed on a training image, a filter score is obtained over the data template area, which can be considered the weighted sum over that region of the training image. The filters transform each pattern in the training image into a filter score space, so that the dimensionality of the training image is greatly reduced. In the simulation process, the filters are used to obtain the filter scores of the data event in the area to be simulated, to find the pattern in the training image closest to that data event, and then to "paste" that pattern into the area to be simulated. Because filters are used and the dimensionality of the training image decreases, the simulation becomes faster. After filling the blind zones, the image changes from Figure 4(b) to Figure 4(c).
Step 3 - Image Filtering
Image filtering aims to suppress noise in the target image while preserving its detail. FMI images are often contaminated with noise during downhole electrode measurement, data transmission, and conductivity imaging, affecting overall image quality. These noises often appear as isolated pixels or pixel blocks in the image, taking the maximum or minimum value in the digital information and forming bright- and dark-spot interference that affects the extraction and analysis of digital information. In 1971, J. W. Tukey first applied the median filter in one-dimensional signal processing (time series analysis), and it was later used in two-dimensional image signal processing. Under certain conditions, the median filter can overcome the blurring of image detail produced by linear filters, and it is most effective in filtering out impulse interference and image scanning noise. In practice, the filter does not require the statistical characteristics of the image, making it computationally efficient. The median filter builds a sliding window containing an odd number of pixels, and the gray value at the center of the window is replaced by the median of the values in the window. Suppose there are 5 points in the window with gray values 30, 50, 150, 100, and 110; the median of the window is 100, so the gray value of the center point of the filtered window becomes 100. Expressed mathematically, y(i) = med{x(i − k), ..., x(i), ..., x(i + k)}, where the window length m = 2k + 1 is an odd number. For images with more detail, different median filters can be applied several times and the results combined as output, so as to obtain better smoothing and edge preservation. After median filtering, target and background noise are effectively removed while the geometric and topological features specific to the image are preserved, as shown in Figure 4(d).
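A direct two-dimensional version of this sliding-window idea is sketched below (scipy.ndimage.median_filter would be the library route; the k x k window and edge padding are our choices):

import numpy as np

def median_filter(img, k=3):
    """Median filter with a k x k sliding window (k odd); border pixels
    are handled by edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out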
Step 4 - Image Segmentation
Image segmentation refers to extracting a specific, distinctive region of an image for the purpose of identifying and analyzing a target. As a key technology of image processing, image segmentation has seen nearly a thousand segmentation algorithms proposed since the 1970s, but no general segmentation theory has yet been put forward. Grayscale threshold segmentation, a parallel regional technique, is the most common approach in image segmentation. Assuming that the grayscale of the original image is a function f(x, y) of pixel position, an appropriate grayscale value t is determined as a threshold according to certain criteria, and the segmented image g(x, y) can be expressed as g(x, y) = 1 if f(x, y) ≥ t and g(x, y) = 0 otherwise, as in formula (3).
The key to this method is determining the threshold. This study uses an adaptive threshold technique to segment the gravel areas, shown as the red regions in Figure 4(e).
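The paper's exact adaptive criterion is not specified, so as one common data-driven stand-in, the sketch below selects a global threshold by Otsu's method (maximizing between-class variance) and then applies the binary rule of formula (3):

import numpy as np

def otsu_threshold(gray):
    """Data-driven global threshold (Otsu's method); stands in for the
    unspecified 'certain criteria' used to pick t."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# g(x, y) = 1 where f(x, y) >= t (bright, resistive gravel), else 0:
# mask = gray >= otsu_threshold(gray)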
Step 5 - Gravel Extraction
The gravel in the segmented image is stored as pixels. To obtain gravel content and size distribution, it is necessary to aggregate isolated pixels into gravels. The Hoshen-Kopelman algorithm is a grid-based tagging algorithm in which each grid cell has two states, "occupied" and "free"; the grid cells in the algorithm correspond to the pixels of a two-dimensional image. In the gravel cluster labeling algorithm, gravel pixels are set to "occupied" and the remaining pixels to "free". If no pixel in the neighborhood of a gravel pixel is "occupied", the gravel pixel is treated as a new gravel cluster and given a new gravel cluster mark. If there is an "occupied" pixel in the neighborhood, the gravel pixel and the "occupied" pixel are considered the same gravel cluster and labeled with the same mark. If there are multiple "occupied" pixels around the gravel pixel, the lowest cluster mark among them is selected as the gravel cluster mark. Pixels with the same gravel cluster mark are considered one gravel. According to the resolution of the digital core image, the area (two-dimensional) or volume (three-dimensional) of a gravel can be obtained by counting its pixels or voxels, from which an equivalent circle or sphere radius can be derived. In Figure 5(b), the gravel with cluster mark 3 contains 8 pixels. The gravel distribution after H-K labeling is shown in Figure 4(f); different colors only distinguish different gravel bodies and have nothing to do with size. Parameters such as gravel number, shape, and particle size can be obtained from the gravel-labeled image.
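A compact union-find sketch of the labeling rule just described (4-connectivity; the helper names are ours, and scipy.ndimage.label would be the library alternative):

import numpy as np

def hoshen_kopelman(mask):
    """Label 4-connected clusters of True pixels, following the
    Hoshen-Kopelman idea with union-find and path compression."""
    labels = np.zeros(mask.shape, dtype=int)
    parent = [0]                      # parent[k] == k means k is a root

    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k

    next_label = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if not mask[i, j]:
                continue              # "free" pixel
            up = labels[i - 1, j] if i > 0 else 0
            left = labels[i, j - 1] if j > 0 else 0
            if up == 0 and left == 0:
                next_label += 1       # new gravel cluster
                parent.append(next_label)
                labels[i, j] = next_label
            elif up and left:
                ru, rl = find(up), find(left)
                r = min(ru, rl)       # keep the lowest cluster mark
                parent[ru] = parent[rl] = r
                labels[i, j] = r
            else:
                labels[i, j] = find(up or left)
    # second pass: resolve every label to its root mark
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels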
Parameter Optimization
After image processing and gravel extraction were performed on the 3500-3730 m vertical section of well Y22-22, parameters such as the gravel number, average gravel size, maximum gravel size, and minimum gravel size were obtained. Several details in the process need special explanation. The first is the depth interval for data statistics: taking into account the sampling interval of electrical imaging and the characteristics of glutenite, a depth interval of 1 meter is used as the statistical unit; that is, the maximum gravel size refers to the maximum gravel size within a 1 meter depth window, and so on. The second is the conversion between grayscale image size and actual size: the length of the well circumference can be calculated from the borehole diameter of 6.5 in, and matching this actual length to the width of the electrical image gives the actual size corresponding to each image pixel, which can then be converted to gravel size parameters. Whether the electrical imaging completely captures the gravel information has not been considered; since the main purpose of this study is to extract relative changes in gravel particle size, this factor can be ignored. The third is the optimization of the grain size parameter: considering that the gravel number and the average gravel size in the processed interval weaken the heterogeneity of the glutenite to a certain extent, the maximum gravel radius is adopted as the grain size parameter.
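The pixel-to-size conversion just described amounts to a few lines of arithmetic; in the sketch below the image width in pixels is a hypothetical value (the real value would come from the FMI data itself), while the 6.5 in borehole diameter is taken from the text:

import math

BOREHOLE_DIAMETER_IN = 6.5
IMAGE_WIDTH_PX = 360                  # hypothetical FMI image width

circumference_cm = math.pi * BOREHOLE_DIAMETER_IN * 2.54   # well circumference
cm_per_px = circumference_cm / IMAGE_WIDTH_PX              # size of one pixel

def equivalent_radius_cm(pixel_count):
    """Equivalent-circle radius of a gravel cluster from its pixel count."""
    area_cm2 = pixel_count * cm_per_px ** 2
    return math.sqrt(area_cm2 / math.pi)

# Example: a cluster of 8 pixels, as in Figure 5(b)
print(round(equivalent_radius_cm(8), 3), "cm")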
General Logging Model Building Based on Gravel Information
The logging curve is one of the most intuitive means of reflecting lithofacies changes in reservoirs. Owing to acquisition costs, not all wells have electrical imaging logging. Therefore, using gravel information extracted from electrical imaging data as calibration to establish a quantitative relationship between the maximum gravel diameter and conventional logging is particularly important.
Owing to heterogeneity in reservoir properties, radioactivity, lithology, and oiliness, the log response of glutenite in the Yanjia area is extremely complicated. In order to characterize the relationship between conventional logging and gravel information, it is necessary to optimize the gravel-sensitive logging characterization parameters. Single-correlation analyses of the maximum gravel size against the conventional well logging responses were carried out one by one; finally, resistivity, the combination gamma ray neutron laterolog, and density were selected as the modeling parameters. The larger the gravel diameter, the coarser the lithofacies, the larger the rock density, and the higher the resistivity, so grain size is positively correlated with resistivity and density. The smaller the grain size, the finer the lithology and the higher the combination gamma ray neutron laterolog, so grain size and the combination gamma ray neutron laterolog are inversely related. Taking the maximum gravel size extracted from electrical imaging as the dependent variable, statistical regression was conducted with resistivity, the combination gamma ray neutron laterolog, and density as the independent variables. The following statistical relationship was obtained by layer-by-layer correspondence modeling: Dmax = 2.509 × DEN − 0.01 × GR + 0.118 × RD − 3.376, R² = 0.8615 (4), where Dmax is the maximum gravel particle size (cm), DEN the measured density log of the reservoir (g/cm³), GR the measured natural gamma value (API), and RD the measured deep resistivity (Ω·m). The accuracy of the model was tested by plotting the maximum gravel diameter extracted from electrical imaging on the horizontal axis against the maximum gravel diameter calculated from equation (4) on the vertical axis (see Figure 6); the points fall on both sides of the 45° line, meeting the accuracy requirements of the model.
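Applying equation (4) to conventional log readings is straightforward; the example input values below are hypothetical, chosen only to show the arithmetic:

import numpy as np

def dmax_from_logs(den, gr, rd):
    """Maximum gravel size (cm) from conventional logs via equation (4)."""
    return 2.509 * den - 0.01 * gr + 0.118 * rd - 3.376

# Hypothetical readings for one depth sample: density 2.45 g/cm3,
# gamma 60 API, deep resistivity 20 ohm.m
print(dmax_from_logs(np.array([2.45]), np.array([60.0]), np.array([20.0])))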
Application examples and analysis
Based on the statistical relationship of formula (4), the logging data of well Y22-22 were processed as a single-well application, as shown in Figure 7. The second track is the density curve, the third the combination gamma ray neutron laterolog, and the fourth the resistivity curve. The "maximum particle size of rock" in the sixth track is the result extracted from electrical imaging, while the "calculated maximum particle size" is the result computed from formula (4). The data trends and numerical values match well, so the relationship can serve as a reference for subsequent grain size calculation. The logging data of 3500-3700 m in well Y22-22 of the study area were processed and the gravel information of the different lithofacies extracted (see Table 2). Through correlation with core, electrical imaging, logging lithology calibration, and conventional core-analysis physical property data, the following lithology identification criteria were formulated (see Table 3). It should be noted that the resolution of the electrical imaging (theoretical maximum about 0.5 cm) limits the definition of the gravel size ranges; this standard is therefore only used to differentiate the grain size curve levels calculated from electrical imaging or conventional logging data. After the grain size curve is obtained from conventional logging data, the different types of conglomerate lithologies can be identified according to the classification in Table 2. From conglomerate through gravelly sandstone and pebbled sandstone to medium-fine sandstone, the grain size gradually decreases, and the physical properties of gravelly sandstone are, to a certain extent, better than those of pebbled sandstone.
Conclusion
FMI images can be converted into vertical gravel information profiles, including changes in gravel particle size, through grayscaling, blind-area filling, image filtering, image segmentation, and gravel extraction. By choosing the maximum particle size as the lithofacies indicator, a quantitative statistical relationship between the maximum rock particle size and the conventional well logging curves was established, and a lithofacies division criterion was proposed using the electrical imaging calibration.
|
2019-04-27T13:10:17.854Z
|
2018-07-01T00:00:00.000
|
{
"year": 2018,
"sha1": "1e32bb5d13ea52609d5d2eb9a93fe12d3960da47",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/170/2/022006",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3e5d3eca89301a58adb15be76c81c307c2cdd4a5",
"s2fieldsofstudy": [
"Geology",
"Engineering"
],
"extfieldsofstudy": [
"Geology",
"Physics"
]
}
|
16351409
|
pes2o/s2orc
|
v3-fos-license
|
Early diagnosis of diabetic retinopathy in primary care.
OBJECTIVE
To evaluate the impact of a strategy for early detection of diabetic retinopathy in patients with type 2 diabetes mellitus (DMT2) in Quintana Roo, México.
METHODS
A cross-sectional, observational, prospective, analytical study was conducted; eight primary care units of the Mexican Social Security Institute in the northern delegation of the State of Quintana Roo, Mexico, were included. A program for early detection of diabetic retinopathy (DR) in an adult population of 376,169 was designed. A total of 683 cases of type 2 diabetes were diagnosed; 105 randomly selected patients underwent direct ophthalmoscopy at a secondary-level hospital, where the degree of diabetic retinopathy and macular edema was determined.
RESULTS
Of the study population, 55.2% were female, and the mean age was 48±11.1 years. Some degree of DR was found in 23.8%: 28.0% with mild non-proliferative diabetic retinopathy, 48.0% moderate, 16.0% severe, and 8.0% with proliferative diabetic retinopathy. Those over age 30 had 2.8 times the risk of developing DR (OR = 2.8; 95%CI: 0.42-18.0), as did women (OR = 1.7; 95%CI: 1.02-2.95).
CONCLUSIONS
The implementation of programs aimed at the early detection of debilitating conditions such as diabetic retinopathy benefits the health of those insured; effective links between primary care and second-level services provide positive health outcomes for patients.
Introduction
Diabetes mellitus (DM) is a chronic degenerative disease with a worldwide prevalence of 2-6% 1 . According to the World Health Organization, it is estimated that there are currently 150 million people with diabetes and by 2025 this number will double. More than 90% of new cases will be patients with diabetes mellitus type 2 (DM2) [2][3][4] , and in Mexico more than half of these cases are not diagnosed 5 . 10% of the general population suffers from DM, which is 3-4% higher than that reported for other countries 6,7 .
The main problem with DM is the occurrence of metabolic, vascular and neurological complications 8,9 . Diabetic Retinopathy (DR) is one of the most serious DM complications; in most industrialized countries it has become the leading cause of vision loss and blindness among adults 10,11. This condition is of vascular origin, and is characterized by signs of retinal ischemia (microaneurysms, hemorrhages, exudates, intraretinal microvascular abnormalities, abnormalities in the venous caliber and neovascularization) as well as signs of increased vascular permeability 12 . This progresses from mild nonproliferative disease, to moderate or severe nonproliferative retinopathy, and finally proliferative disease 10 .
Significant loss of vision results from retinal hemorrhages from the fragile new vessels, with two types of bleeding: that derived from the superficial capillary plexus, which is flame or splinter shaped, and that coming from deeper layers, with a mottled or stained appearance; vitreous hemorrhage, macular edema, or retinal capillary hypoperfusion leads to scarring and secondary retinal damage 10 . Activation of the local renin-angiotensin system in the eyes of patients with diabetes can directly or indirectly increase growth factor concentrations in the vascular endothelium, contributing to angiogenesis and vasopermeability 13 . Macular edema is caused by abnormal permeability of the microcirculation at this level and is accompanied by hard exudates, which are formed from lipoproteins, soft exudates (microinfarcts of the nerve fiber layer), as well as microaneurysms and microhemorrhages; macular edema can occur with any level of retinopathy and is the leading cause of decreased vision 3 .
Diabetic Retinopathy development depends on a variety of factors, including time affected by diabetes, effective glucose control, blood pressure and, blood lipid levels. It is more common among Mexico-Americans than non-Hispanic whites; considered as an unexplained risk factor, whereas other sociodemographic factors such as age, medical treatment, education, and gender of the patient are not considered to be risk factors 14 .
According to current public health sector policy, patients diagnosed with DM2 require an annual assessment by an ophthalmological specialist, in order to promptly diagnose DR, because otherwise the chance to establish early treatment is delayed; thus the purpose of this study was to demonstrate the impact of a strategy that promotes prompt treatment for patients diagnosed with DM2, by providing immediate ophthalmological attention, in order to determine the degree of DR that exists among patients with newly diagnosed DM2, in units of primary care.
Materials and Methods
A cross-sectional study was conducted at the Mexican Social Security Institute in Quintana Roo, comprising eight primary care units (UAP); four corresponding to the northern region of the state were randomly included, covering a total adult population of 376,169, among whom 683 cases of first-time DM2 were diagnosed over one year, as reported by the UAP in Cancun, Quintana Roo.
A sample size for estimating a proportion in a finite population was determined, with a 95% confidence level and 80% precision, and an expected proportion of 6.7% for the condition being evaluated, giving 97 subjects in total; after a 15% adjustment for expected losses, the final sample comprised 114 individuals.
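As a quick check of this calculation, the usual normal-approximation formula with finite-population correction reproduces the reported figures; the 5% margin of error used below is our assumption, since the paper does not state it explicitly:

import math

def sample_size_proportion(p, d, N, z=1.96):
    """Sample size for estimating a proportion with margin d in a
    finite population of size N (normal approximation)."""
    n0 = z ** 2 * p * (1 - p) / d ** 2      # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)             # finite-population correction
    return math.ceil(n)

n = sample_size_proportion(p=0.067, d=0.05, N=376_169)
n_adjusted = round(n / (1 - 0.15))          # inflate for 15% expected loss
print(n, n_adjusted)                        # 97 and 114 with these inputs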
First, patients with a first-time DM2 diagnosis were identified in each of the assigned units (Nos. 13, 14, 15, and 16), as reported by SUAVE (Single Automated System for Epidemiological Surveillance), including adults of both genders from any section of the unit. Patients who refused to participate in the study or had some mental impairment were excluded. Once patients were identified, the sample was selected at random using a random number table; affiliates were located personally via the phone number registered in the AccDer (Access to Affiliates) system of the IMSS. A personal interview was requested, with authorization by informed consent, along with attendance at an appointment with the ophthalmologist for evaluation of the fundus under pupillary dilation, recording the number of cases with diabetic retinopathy and the severity level of diabetic retinopathy at diagnosis, using the diabetic retinopathy classification of the American Academy of Ophthalmology (AAO) (Fig. 1).
All patients had their visual acuity measured and a pinhole test applied; the anterior segment was assessed in search of cataract and rubeosis, and an examination of the fundus of the eye was undertaken under pharmacological mydriasis using an indirect ophthalmoscope and a three-mirror lens in patients in whom thickening of the macula was identified. All evaluations were carried out by the same retina-specialist ophthalmologist in order to increase the accuracy of diagnosing proliferative diabetic retinopathy (PDR). The diabetic retinopathy level was determined according to the AAO 3 . This project was registered with the Local Committee for Research and Ethical Research 2301, in strict accordance with the General Health Law on Research Matters, and was considered of minimal risk; signed informed consent was requested from participants.
Data were analyzed using the statistical program SPSS® (version 20.0) for Windows 7. Proportions found in the sample were calculated with 95% confidence intervals. For the two resulting groups (with and without DR), odds ratios were calculated and a logistic regression model applied.
Results
The diagnosis of clinically significant macular edema (CSME) was made if any of the following characteristics were present: 1) thickening within 500 μm of the center of the macula, or exudates within 500 μm of the center of the macula associated with adjacent thickening; 2) thickening of one disc area or greater, located one disc diameter or less from the center of the macula.
Diabetic retinopathy classification was carried out for each eye of every patient, and the most advanced retinopathy identified was taken into account. Those whose fundus exploration showed a normal retina were given the necessary recommendations for annual monitoring of the eye fundus, and patients in whom some degree of retinopathy was identified were given appropriate treatment. No existing cases of glaucoma were identified.
The total number of registered patients diagnosed with DM2 for the first time was 683, from whom 114 patients were selected at random; nine withdrew from the study (six emigrated and three did not attend the ophthalmology appointment), leaving a total of 105 patients. 55.2% (58) were women, and the mean age was 43±11.1 years. Diabetic retinopathy was found in 25 patients (24.0%) (Table 1), mostly at a moderate level of severity (Table 2).
Risk analysis for presenting retinopathy was conducted according to age (younger or older than 30 years); increased risk was found among patients newly diagnosed with type 2 diabetes who were over 30. Those over the age of 30 were 2.8 times more likely to develop DR (OR = 2.8; 95%CI: 0.42-18.0), as were women (OR = 1.7; 95%CI: 1.02-2.95).
Discussion
Diabetic retinopathy is a leading cause of blindness among adults; studies in Europe and America suggest the presence of non-proliferative diabetic retinopathy (NPDR) and PDR among patients with DM2 with less than 5 years of evolution 9,13 . The American Diabetes Association confirmed that 25% of patients detected as diabetics may have some degree of DR at the time of diagnosis. The Wisconsin study demonstrated NPDR of 21% and PDR of 2%. Similar evidence is available for the Mexican Republic 14 ; in Durango, Mexico, a frequency of 14.5% for NPDR and 1.6% for PDR was obtained among patients diagnosed with DM2 2 . In Chiapas, a prevalence of 38.9% for DR was obtained, higher than that found in our study 14 . In the present study, a frequency of 21.9% with NPDR and 1.9% with PDR was found among patients at DM2 diagnosis, results similar to those of the reported studies.
Overall, diabetic retinopathy affects 63% of diabetics and increases the risk of blindness 25 times, compared to non-diabetics. 90% of patients with type 1 diabetes and 65% of patients with DM2 develop retinopathy, 10 years after the initiation of the disease 9,14 . However, up to 25% of patients newly diagnosed with type 2 diabetes may have retinopathy, at the time of diagnosis 13,14 .
The United States reports 40 to 45% of patients diagnosed with DM as having some degree of DR, the most common cause of blindness among patients with DM 10,14 . That country is where most studies on the prevalence of retinopathy and its complications have been conducted, perhaps owing to its high prevalence. Among these is the Wisconsin Epidemiologic Study of Diabetic Retinopathy, with a prevalence of blindness of 3.6% among patients with DM1 and 1.6% among those with DM2; in 86% of those with DM1 and a third of those with DM2, blindness was secondary to DR 1,9,14 .
The American Diabetes Association (ADA) has found epidemiological evidence indicating that the development of DR initiates at least seven years before type 2 diabetes is diagnosed clinically, so in a patient, who has been newly detected as a diabetic, there may be some degree of DR, but it is essential to identify patients with retinopathy at the time DM diagnosis is made and before their vision is affected, because DR may be present, even if the patient shows no ophthalmologic symptoms; as control needs to be preventive 5 . This is the reason why strategies for making prompt assessments of patients diagnosed with DM2 as proposed in this study, are effective and provide better quality of life for patients who are affected, tending to improve control of DM2, a situation which is directly dependent on the patient's habits and the rigor with which they apply their pharmacological and dietary management.
According to various studies, direct ophthalmoscopy, undertaken by ophthalmologists and/or trained technicians, reaches a sensitivity of 80% and a specificity that exceeds 90% and is considered the method of choice as well as being low cost, for diagnosis of diabetic retinopathy and its classification 5,10 .
This intervention, targeting primary care processes, provides successful results for improving the management of health services; 100% of patients were located and assessed by ophthalmology.
Direct ophthalmoscopy should be performed by the family physician at the moment DM2 is diagnosed; however, in this study none of the patients had undergone a fundus examination by their family doctor before being reviewed by the ophthalmology service. Likewise, none were initially referred by their doctor, and all patients denied any symptoms that would have led them to request an evaluation by an ophthalmologist.
Clinical practice guidelines for the diagnosis and treatment of diabetes mellitus have been implemented in primary care and are systematically reviewed by the assigned doctors. However, there are hindrances to the implementation of some recommendations, such as ophthalmological evaluation, perhaps because of a lack of appointments; sometimes this examination is performed only among patients with longstanding diabetes or visual impairment, giving more priority to symptoms than to signs. Nevertheless, when targeted actions are carried out to fulfill these recommendations, benefits are observed for early diagnosis of DR.
This study shows that health service management systems can be substantially improved, and makes clear the need for routine fundus examination of all newly diagnosed diabetic patients, which limits retinopathy damage when accompanied by adequate glycemic control.
According to the results obtained after logistic regression analysis, it has been proposed that all women over 30 years with an initial diagnosis of DM2 should receive priority ophthalmological evaluation 14 . Age at first diagnosis of DM2 is associated with greater risk of developing diabetic retinopathy (OR = 2.8), more so among women, where the risk increases 3.9 times. In this way, and considering that this occurs in developing countries with health systems similar to those in Mexico, the care of these patients can be prioritized if the epidemiological behavior of this disease is understood.
Weaknesses lie in the cross-sectional nature of the data; follow-up is warranted, as well as comparisons with other types of complications (renal, sensory), in order to enrich the data and combine monitoring strategies for the control of patients.
It is necessary to demonstrate the need for direct ophthalmoscopy in all diabetic patients, together with evaluation by an ophthalmologist, providing follow-up and complying with the indications of the ADA and the official standard for the control of diabetes mellitus issued by the Ministry of Health.
EFFECTIVENESS OF NESTING ON POSTURE AND MOTOR PERFORMANCE AMONG HIGH RISK NEWBORN
Dr. K. Jeyabarathi and Mrs. Niranjana Shalini.
Manuscript History: Received: 12 September 2018; Final Accepted: 14 October 2018; Published: November 2018.
Background and objectives: Lixisenatide, a selective short-acting glucagon-like peptide 1-receptor agonist (GLP-1RA), is approved in many countries worldwide for use with oral glucose-lowering agents, with or without basal insulin, for the treatment of adults with uncontrolled type 2 diabetes mellitus (T2DM) as an adjunct to diet and exercise. The aim of this study was to assess the effectiveness of basal insulin treatment regimen intensification with Lixisenatide compared with another injectable drug in patients with T2DM. We also aimed to identify the respective predictive factors for glycemic control.
Introduction:-
A high-risk newborn is defined as any neonate who is in danger of serious illness or death as a result of prenatal, perinatal, or neonatal conditions, regardless of birth weight or gestational age. High-risk newborns are most often classified according to birth weight (LBW, VLBW, ELBW), gestational age (SGA, IUGR, preterm < 37 wks), and pathophysiologic problems.
Preterm or sick babies require support to facilitate and maintain postures that enhance motor control and physiological functioning and reduce stress. Nesting, as a component of developmental care, improves the neonate's flexed limb position and reduces sudden movements as well as immobility of the arms and legs. Good positioning practices promote neuromotor development and can have a positive effect on both short- and long-term outcomes for babies.
Materials And Methods:-
A quantitative approach with a quasi-experimental pre-test post-test control group design was adopted to assess the effectiveness of nesting on posture and motor performance among high-risk newborns at Vimal Jyothi Hospital, Coimbatore. Samples were selected by convenience sampling. The total sample size was 60: 30 samples were allotted to the experimental group and 30 to the control group. Newborns who satisfied the inclusion criteria were selected for the study, with one sample assigned to the experimental group and the next to the control group, alternately, until the sample size was reached. Before nesting, the posture of each newborn was assessed using the Infant Position Assessment Tool (IPAT) and motor performance was assessed using the Modified Ferrari Tool; this took 10 minutes. Nesting was then provided for one day for each newborn in the experimental group, while the control group received routine care only. The next day, posture and motor performance were reassessed in both groups with the same IPAT and Modified Ferrari Tool to evaluate the effectiveness of nesting. The mean pretest posture scores in the experimental and control groups were 6.8 and 6.7, and the mean pretest motor performance scores were 7.3 and 7.1. This indicates no significant baseline difference in posture between the experimental and control groups before nesting was provided (a sketch of such a baseline comparison follows).
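As a minimal sketch of how such a baseline comparison could be computed, the Python fragment below runs an independent-samples t-test on hypothetical IPAT pretest scores; the individual scores are invented for illustration, since the study reports only the group means (6.8 vs. 6.7).

```python
from scipy import stats

# Hypothetical IPAT pretest scores; only the group means (6.8 and 6.7)
# are reported in the study, so these individual values are invented.
experimental = [7, 6, 7, 8, 6, 7, 7, 6, 7, 7]   # mean 6.8
control      = [7, 6, 6, 7, 7, 6, 7, 7, 7, 7]   # mean 6.7

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > 0.05 -> groups comparable at baseline
```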
Results:-
The Second Objective of the Study was to Provide Nesting among High-Risk Newborns in the Experimental Group.
The nest was made with 4 baby sheets. The sheets were rolled into tubes, which were then placed around the baby. Nesting was provided for one day, and the next day posture and motor performance were reassessed using the IPAT and Modified Ferrari Tool.
Hadroproduction of electroweak gauge boson plus jets and TMD parton density functions
If studies of electroweak gauge boson final states at the Large Hadron Collider, for Standard Model physics and beyond, are sensitive to effects of the initial state's transverse momentum distribution, appropriate generalizations of QCD shower evolution are required. We propose a method to do this based on QCD transverse momentum dependent (TMD) factorization at high energy. The method incorporates experimental information from the high-precision deep inelastic scattering (DIS) measurements, and includes experimental and theoretical uncertainties on TMD parton density functions. We illustrate the approach by presenting results for the production of W-boson + n jets at the LHC, including azimuthal correlations and subleading jet distributions.
small-x region, in a manner which can be controlled using the estimation of theoretical and experimental uncertainties on TMD distributions proposed in [14] within the herafitter framework [16,17]. Given the complexity of the final states considered, this is a challenging problem. The results are however encouraging. Moreover, they are sufficiently general to be of interest to any approach that employs TMD formalisms in QCD to go beyond fixed-order perturbation theory and appropriately take account of nonperturbative effects. This will be relevant both to precision studies of Standard Model physics and to new physics searches for which gauge boson plus jets production is an important background.
Using the parton branching Monte Carlo implementation of TMD evolution developed in [14] we make predictions, including uncertainties, for final-state observables associated with W -boson production. We study jet transverse momentum spectra and azimuthal correlations. In particular, we examine subleading jet distributions, measuring the transverse momentum imbalance between the vector boson and the leading jet.
The starting point of our approach is to apply QCD high-energy factorization [7] at fixed transverse momentum to electroweak gauge boson + jet production, q + g* → V + q, where V denotes a gauge boson and g* an off-shell gluon. The basic observation is that this factorization allows one to sum high-energy logarithmic corrections for √s → ∞ to all orders in the QCD coupling, provided the spacelike evolution of the off-shell gluon includes the full BFKL anomalous dimension for longitudinal momentum fraction x → 0 [18]. The CCFM evolution equation [8] is an exclusive branching equation which satisfies this property. In addition, it includes finite-x contributions to parton splitting, incorporating soft-gluon coherence for any value of x. The evolution equation reads schematically [8,9]

A(x, k_t, p) = A_0(x, k_t, q_0) Δ_s(p, q_0) + ∫ (dz/z) ∫ (dq/q) Θ(p − zq) Δ_s(p, zq) P(z, q, k_t) A(x/z, k_t + (1 − z)q, q) ,   (1)

where A(x, k_t, p) is the TMD gluon density function, depending on longitudinal momentum fraction x, transverse momentum k_t and evolution variable p. The first term on the right hand side of Eq. (1) is the contribution of the non-resolvable branchings between the starting scale q_0 and the evolution scale p, while the integral term on the right hand side of Eq. (1) gives the k_t-dependent branchings in terms of the Sudakov form factor Δ and the unintegrated splitting function P. Unlike ordinary, integrated splitting functions, the latter encodes soft-virtual contributions into the non-Sudakov form factor [8,9]. In this framework the vector boson production cross section has the schematic form

σ = A ⊗ H ⊗ B ,   (2)

where the symbol ⊗ denotes convolution in both longitudinal and transverse momenta, A is the gluon density function obeying Eq. (1), H is the off-shell (but gauge-invariant) continuation of the qg hard-scattering function specified by high-energy factorization [7], and B is the valence quark density function introduced at unintegrated level according to the method of [19], such that it obeys a modified CCFM branching equation. Explicit calculations for H are carried out in [20][21][22][23] with off-shell partons [24,25].

Footnote 1: Ref. [26] provides an approach to vector boson plus jets also inspired by QCD high-energy factorization [7]. This approach differs from that of the present paper as it is based on matching tree-level n-parton amplitudes with BFKL amplitudes in the multi-Regge kinematics, treating initial-state partons as collinear. TMD parton density functions and k_t-dependent branching evolution do not enter in the approach of [26].
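As an illustration of the parton-branching picture behind Eq. (1), the toy Python sketch below samples successive branching scales from a schematic Sudakov form factor by inverse-transform sampling; the fixed coupling and the form Δ_s(p, q) = (q/p)^ᾱ are simplifying assumptions for illustration only, not the actual CCFM implementation of [14].

```python
import random

# Toy illustration of unitary (Sudakov) branching evolution, in the spirit
# of Eq. (1) but NOT the actual CCFM code of [14]. We assume, for
# illustration only, a fixed coupling alpha_bar and a Sudakov factor of the
# schematic form Delta_s(p, q) = exp(-alpha_bar * ln(p/q)) = (q/p)**alpha_bar.
# Inverse-transform sampling: solving Delta_s(p, q) = R for uniform R in
# (0, 1) gives the next branching scale q = p * R**(1/alpha_bar).
ALPHA_BAR = 0.2  # hypothetical fixed coupling

def next_branching_scale(p, q0):
    """Return the scale of the next resolvable branching below p, or None
    if the evolution reaches the starting scale q0 without branching."""
    q = p * random.random() ** (1.0 / ALPHA_BAR)
    return q if q > q0 else None

def evolve(p_hard=100.0, q0=1.0):
    """Evolve downward from the hard scale, recording branching scales."""
    scales, p = [], p_hard
    while (q := next_branching_scale(p, q0)) is not None:
        scales.append(q)
        p = q
    return scales

print(evolve())  # e.g. a handful of ordered scales between q0 and p_hard
```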
The A_0 term on the right hand side of Eq. (1), and the analogous term in the modified CCFM branching equation for the quark distribution B [19], depend on nonperturbative parton distributions at scale q_0, which are to be determined from fits to experimental data. We here use the determination [14] from the precision measurements of the F_2 structure function [16] in the range x < 0.005, Q² > 5 GeV², and the precision measurements of the charm structure function F_2^charm [15] in the range Q² > 2.5 GeV². Good fits to F_2 and F_2^charm are obtained (with the best fit to F_2^charm giving χ² per degree of freedom χ²/ndf ≈ 0.63, and the best fit to F_2 giving χ²/ndf ≈ 1.18 [14]). Despite the limited kinematic range, the great precision of the combined data [15,16] provides a compelling test of the approach at small x. The production of final states with a W boson and multiple jets at the LHC receives contributions from a non-negligible fraction of events with large separations in rapidity between final-state particles [27], calling for parton branching methods beyond the collinear approximation [6]. On the other hand, the average values of the longitudinal momentum fractions x at which the gluon density is sampled in the W-boson + jets cross sections at the LHC are not very small. Moreover, the quarks' average momentum fractions are moderate, and quark density contributions matter [21] at TMD level. For these reasons, W + jets pushes the limits of the approach, probing it in a region where its theoretical uncertainties increase [28] and where the DIS experimental data [15,16] do not constrain the TMD gluon distribution well.
[Figure caption: The purple, pink and green bands correspond to mode A, mode B and mode C as described in the text. The experimental data are from [30], with the experimental uncertainty represented by the yellow band.]
The numerical results that follow are obtained using the Rivet package [29]. We use the TMD distribution set JH-2013-set2 [14]. We compare the results with the ATLAS measurements [30] (jet rapidity |η| < 4.4) and CMS measurements [31] (jet rapidity |η| < 2.4). The uncertainties on the predictions are determined according to the method [14]. This treats experimental and theoretical uncertainties. Experimental pdf uncertainties are obtained within the herafitter package following the procedure of [32]. Theoretical uncertainties are considered separately due to the variation of the starting scale q_0 for evolution, the renormalization scale μ_r for the strong coupling, and the factorization scale μ_f. We apply this method in different modes: mode A (purple band in the plots) includes uncertainties due to the renormalization scale, starting evolution scale, and experimental errors; mode B (pink band in the plots) and mode C (green band in the plots) also include factorization scale uncertainties. These are estimated as follows. We take the central value for the factorization scale to be μ_f² = m² + q_⊥², where m and q_⊥ are the invariant mass and transverse momentum of the boson + jet system. The choice of this scale is suggested by the CCFM angular ordering [6,8,9] and the maximum angle available to the branching. We then consider two different types of variation of μ_f. In mode C, we vary the transverse part of μ_f² by a factor of 2 above and below the central value. In mode B, we decompose μ_f as μ_f² = m_V² + ν², where m_V is the vector boson mass, and vary the dynamical part ν² of μ_f², again by a factor of 2 above and below the central value. We note that the above variation affects the kinematics of the hard scatter, and the amount of energy available for the shower. While the mode C variation is more closely related to the estimation of unknown higher-order corrections in standard calculations performed under collinear-ordering approximations, the mode B variation is a (conservative) way to estimate uncertainties from possibly enhanced higher orders due to longitudinal momentum kinematics (not considered under standard approximations). For this reason we expect large mode-B uncertainties, especially in the case of high multiplicity. One of the limitations of the current treatment is that this variation is applied to the shower but not to the hard matrix element. In a more complete calculation, subject for future investigations, the scale dependence is taken into account in the hard factor, and the pdf fitted to data is also changed [14], unlike the ordinary case of collinear calculations. The net result of these two effects is expected to reduce the uncertainty band. The present treatment, on the other hand, combined with the sensitivity of the process to the medium to large x region, leads to significant theoretical uncertainties, in particular larger than the experimental uncertainties. Thus, we regard the mode B bands presented in the following as the most conservative estimate of the uncertainties. We expect mode C bands to be smaller, and intermediate between mode A and mode B. We note that the factorization scale variation plays a different role here than in ordinary collinear calculations. Fig. 1 shows the total transverse energy distribution H_T for production of W-boson + n jets, for different values of the number of jets n. We take the minimum jet transverse momentum to be 30 GeV.
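A minimal Python sketch of the mode B and mode C scale variations described above is given below; the function names and the W-mass value are ours, and the fragment is only meant to make the two decompositions of μ_f² explicit, not to reproduce the analysis code.

```python
import math

# Illustrative sketch of the factorization-scale variations described in the
# text (names are ours, not from the analysis code). Central scale:
# mu_f^2 = m^2 + qT^2, with m and qT the invariant mass and transverse
# momentum of the boson + jet system.
def mu_f_variants(m, qT, mV=80.4):  # mV in GeV, W mass used for illustration
    mu2_central = m**2 + qT**2
    # Mode C: vary the transverse part qT^2 by a factor of 2 up and down.
    mode_c = [m**2 + f * qT**2 for f in (0.5, 2.0)]
    # Mode B: write mu_f^2 = mV^2 + nu^2 and vary the dynamical part nu^2.
    nu2 = mu2_central - mV**2
    mode_b = [mV**2 + f * nu2 for f in (0.5, 2.0)]
    return (math.sqrt(mu2_central),
            [math.sqrt(x) for x in mode_b],
            [math.sqrt(x) for x in mode_c])

central, mode_b, mode_c = mu_f_variants(m=120.0, qT=40.0)
print(central, mode_b, mode_c)
```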
The main features of the final states are described by the predictions including the case of higher jet multiplicities. The theoretical uncertainties are larger for larger H_T, corresponding to increasing x. At fixed H_T, they are larger for higher jet multiplicities, corresponding to higher probability for jets to be formed from the partonic showers. The comparison of the bands for the three modes described above illustrates that mode C is intermediate between mode A and mode B.
We next consider the spectra of the individual jets. Fig. 2 shows the spectrum of the leading jet associated with the W-boson, inclusively (left) and for n ≥ 3 jets (right). For the sake of simplicity we only show uncertainty bands corresponding to the two extreme cases, A and B (mode C is intermediate between these, similarly to the case of Fig. 1). The CMS [31] (left) and ATLAS [30] (right) measurements cover different ranges in jet rapidity, respectively |η| < 2.4 [31] and |η| < 4.4 [30]. The plot on the left includes higher values of p_⊥. Given the computational limitations at finite x outlined above, the theory comparison with the measurements in Fig. 2 is satisfactory over a broad p_⊥ range. It is noted in [27] that, in contrast, the leading-order Pythia [33] result strongly deviates from these measurements in the high-multiplicity and high-p_⊥ regions. In such a framework the description of the high-p_⊥ region is to be improved by supplementing the parton shower with next-to-leading-order corrections to the matrix element, e.g. via matched NLO-shower calculations [34] such as Powheg. The TMD formulation with exclusive evolution equations, on the other hand, incorporating at the outset large-angle, finite-k_⊥ emissions [9,35], can describe the shape of the spectra also at large multiplicity and large transverse momentum. We note in particular that the different ranges in rapidity quoted above for the samples [30,31] play a non-negligible role, given that our exclusive formalism is designed to treat gluon radiation over large rapidity intervals.
In Fig. 3 we look into the multi-jet final states in closer detail by examining the p_⊥ spectra of the second jet and the third jet associated with W production. We see that not only the leading jet and global distributions of Figs. 2 and 1 but also the detailed shapes of the subleading jets in Fig. 3 can be obtained from the TMD formalism. The uncertainty bands, on the other hand, increase as we go to higher jet multiplicity. The effect is moderate for mode A, but pronounced for the conservative mode B.
[Figure caption: The experimental data are from [30] (left) and [31] (right), with the experimental uncertainty represented by the yellow band.]
In Fig. 4 we turn to angular correlations. We consider two examples: the distribution in the azimuthal separation ∆φ between the two hardest jets (left), and the correlation of the third jet to the W-boson (right). As noted earlier, predictions of the structure of angular correlations are a distinctive feature of the TMD exclusive formulation. The shape of the experimental measurements is well described, within the theoretical uncertainties, both at large ∆φ and down to the decorrelated, small-∆φ region.
In conclusion, this work shows how exclusive evolution equations in QCD at high energies can be used to take into account QCD contributions to the production of electroweak bosons plus multi-jets due to finite-angle soft gluon radiation, and estimate the associated theoretical uncertainties. This will be relevant both to precision studies of Standard Model physics and to new physics searches for which vector boson plus jets are an important background.
Unlike traditional approaches to electroweak boson production including effects of the initial state's transverse momentum in the low-p_⊥ region, the formulation of TMD pdfs and factorization employed in this work incorporates physical effects which persist at high p_⊥ and treats final states of high multiplicity. The effects studied come from multiple gluon emission at finite angle and the associated color coherence [6,8,9], and are present to all orders in the strong coupling α_s. In particular, they are beyond next-to-leading-order perturbation theory matched with collinear parton showers [5]. They can contribute significantly to the estimate of theoretical uncertainties in multi-jet distributions at high energies.
The method of this work incorporates the experimental information from the high-precision DIS combined measurements [15,16]. The use of the TMD density determined [14] from these measurements in the comparison with the LHC W + n-jet data indicates that detailed features of the associated final states can be obtained both for the leading jet and for the subleading jets. It underlines the consistency of the physical picture, which can be extended from DIS to Drell-Yan processes to describe QCD multi-jet dynamics. It also points to the relevance of Monte Carlo event generators which aim at including parton branching at the transverse momentum dependent level (see e.g. [36,37]).
Future applications may employ vector boson pp data to advance our knowledge of transverse momentum parton distributions [17,38]. Vector boson plus jets are a benchmark process for QCD studies of multi-parton interactions [39], and may help shed light on topical issues in the physics of forward jet production [40]. A program combining Drell-Yan and Higgs measurements can become viable at high luminosity [3] to carry out precision QCD studies accessing gluon transverse momentum and polarization distributions [3,4].
Aortic Dissection and Renal Failure in a Patient with Severe Hypothyroidism
Acute aortic dissection (AAD) is a life-threatening condition associated with high morbidity and mortality. The most important recognized acquired cause that leads to dissection is chronic arterial hypertension. With respect to the anuria and renal failure, aortic dissection is not something that is always considered and is still not a very common presentation unless both renal arteries come off the false lumen of the dissection. However, when present, preoperative renal failure in patients with acute type B dissection has been noted to be an independent predictor of mortality. Early recognition and diagnosis is the key and as noted by previous studies as well, almost a third of these patients are initially worked up for other causes until later when they are diagnosed with aortic dissection. Here we present a case of a patient presenting with severe hypothyroidism, long-standing hypertension, and anuria. Through the case, we highlight the importance of having aortic dissection as an important differential in patients presenting with anuria who have a long standing history of uncontrolled hypertension. Pathophysiology relating to severe hypothyroidism-induced renal dysfunction is also discussed.
Case Presentation
The patient is a 68-year-old male with history of untreated hypothyroidism, untreated hypertension, and no medical care for over the last 10 years who presented to hospital with complaints of nausea, vomiting, and lower extremity weakness. Patient had called 911 two weeks prior for an episode of chest pain that felt like he was having a heart attack. When emergency medical service (EMS) arrived, chest pain had resolved and patient refused to come to hospital. A similar episode of severe chest pain occurred the following week, for which he called 911, but again refused transfer. On the day of admission patient called 911 again, but this time for nausea, vomiting, and weakness. When EMS arrived, they noticed he had slurred speech, a left-sided facial droop, and, therefore, transferred him to the hospital with concerns for stroke.
In the emergency room, physical exam was most remarkable for all the classic signs of hypothyroidism, including hypothermia at 35.8 °C, periorbital edema, puffy facies, macroglossia, hoarse voice, and delayed relaxation of deep tendon reflexes. His electrocardiogram (EKG) showed low voltage and sinus bradycardia with a rate in the 40s. He did have a left-sided facial droop and dysarthria, which were found to have been present for many years according to his family, and strength was 5/5 throughout his upper and lower extremities. No other focal neurological deficits were appreciated. Head CT without contrast indicated no acute intracranial pathology; brain MRI without contrast showed extensive chronic microvascular ischemic disease as well as remote microhemorrhages in the right occipital and left cerebellar hemispheres. Lumbar spine MRI without contrast showed multilevel degenerative changes, most pronounced at L5-S1 with a diffuse disc bulge and moderate-to-severe left and right neural foraminal stenosis, but no central canal stenosis.
Initial laboratory data was significant for a TSH of 63.4 IU/mL, creatinine of 1.9 mg/dL, hemoglobin of 7.3 gm/dL, and a normal white blood cell count. The patient was given two units of packed red blood cells, which improved his anemia to 9.7 gm/dL. He was admitted to the general medicine service for further management of his severe hypothyroidism and workup of his anemia of unknown etiology.
The following morning, repeat labs showed a further decline in his kidney function, with a creatinine of 3.1 mg/dL and potassium of 5.1 mMol/L. There was also a new leukocytosis of 15 (×10⁹/L) with a 94% left shift, a new thrombocytopenia of 131 (×10⁹/L), down from 225 (×10⁹/L) at admission, and an elevated creatine phosphokinase (CPK) of 500 IU/L. A portable chest X-ray did not show any obvious signs of a widened mediastinum but did show a left lower lobe consolidation consistent with pneumonia, for which he was started on IV azithromycin and ampicillin/sulbactam.
Nursing staff noted stool incontinence, for which a rectal exam was performed, showing good rectal tone and a positive guaiac. In addition, despite receiving aggressive fluid resuscitation, the patient continued to be in anuric renal failure. He then received 3 more liters of fluid throughout the day, a Foley catheter was placed, and bladder scans showed a total of 48 cc of urine, enough to send urine studies. Urinalysis was negative for any signs of infection, and urine electrolytes indicated a fractional excretion of sodium (FeNa) of 0.96%, initially suggesting a prerenal process.
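For reference, the FeNa figure quoted above follows from the standard formula; the Python sketch below uses hypothetical urine and plasma values chosen to land near the reported 0.96%, since the raw electrolytes are not listed in the case report.

```python
def fena_percent(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium (%) from paired urine/plasma values:
    FeNa = (U_Na * P_Cr) / (P_Na * U_Cr) * 100. A value below 1% classically
    suggests a prerenal process, as interpreted above."""
    return 100.0 * (urine_na * plasma_cr) / (plasma_na * urine_cr)

# Hypothetical values (the case report does not list the raw electrolytes),
# chosen to reproduce a FeNa near the reported 0.96%.
print(round(fena_percent(urine_na=20, plasma_na=140, urine_cr=60, plasma_cr=4.0), 2))
```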
Labs were again repeated that evening, showing a rising creatinine of 4.1 mg/dL and a lactate of 3.7 mMol/L, and the patient still had no urinary output. Nephrology and endocrinology specialists were consulted, and the working diagnosis was that his renal failure was likely stemming from his severe hypothyroidism causing a low-flow state. He was started on levothyroxine (T4) and liothyronine (T3) and continued to receive intravenous fluids.
The third day after admission, morning laboratory data showed a further increase in his creatinine to 6.1 mg/dL, a worsening leukocytosis of 16.7 (×10⁹/L), an improved lactate of 2.2 mMol/L, and a worsening thrombocytopenia of 92 (×10⁹/L). Thrombotic thrombocytopenic purpura (TTP) and hemolytic uremic syndrome (HUS) were also considered in the differential, given the anemia and high LDH of 1014 IU/L. However, the smear did not show significant amounts of schistocytes and the haptoglobin was normal, making these diagnoses less likely.
The patient began complaining of abdominal pain, and in the setting of increasing leukocytosis and diarrhea, an abdominal CT without contrast was performed. This showed colitis, which looked either infectious or ischemic, as well as a possible aortic dissection. A CT angiogram of the chest, abdomen, and pelvis was subsequently performed STAT, which showed a large type B dissection starting in the descending thoracic aorta just past the origin of the subclavian artery, extending into the abdominal aorta, with near-complete collapse of the true lumen at the level of the renal arteries, extension of the dissection into the common iliac arteries bilaterally, and ending at the level of the iliac bifurcation (see Figures 1(a)-1(f)).
What follows is a discussion of this case of a patient with severe hypothyroidism and long-standing hypertension who presented with anuria and renal failure. Through the case, we highlight the importance of considering aortic dissection as an important differential in patients presenting with anuria who have a long-standing history of uncontrolled hypertension.
Discussion
"Acute aortic dissection (AAD) is a life-threatening condition associated with high morbidity and mortality [1]. It is not uncommon to come across patients with aortic dissection. While hourly mortality data for type B AAD are not available, the overall in-hospital mortality is reported to be 11%. For those patients in the highest risk group, type B mortality can be as high as 71% [1]." Aortic dissection is a variant of so-called Acute Aortic Syndromes (AASs), which include other variants including intramural hematoma (IMH) and atherosclerotic ulcer. With improved diagnostic modalities, these syndromes are diagnosed early and more often [2]. With respect to "intramural hematomas of the aorta, they usually result from rupture of the vaso vasorum within the medial wall, resulting in aortic infarction; and in a third or more of cases, IMHs evolve into aortic dissections. Most cases of IMH occur within the descending thoracic aorta in patients with chronic systemic arterial hypertension. Like AAD, IMH may extend up or down the aorta, regress (10% of cases), and reabsorb [2]." And as noted in our case, it is possible that he might have developed an IMH first that progressed to such an extensive dissection. His two episodes of chest pain prior to presenting to the hospital may have been indicative of this.
Descriptions of aortic pathologies, including dissection, date back to the 2nd century, but it was not until 1955 that the first successful management of a case of aortic dissection was reported, as reviewed by Ramanath et al. [2].
The most important recognized acquired cause leading to dissection is chronic arterial hypertension, and as in our case, the uncontrolled hypertension explains the extensive dissection and intramural hematoma noted. Other associated causes include iatrogenic causes (cardiac procedures, intra-aortic balloon pumps, etc.), pregnancy (third trimester and early postpartum), and familial syndromes/connective tissue disorders (Table 1). There have also been case reports of patients with myxedema who had dissections, but we came across only 4 such cases, and whether this reflects coincidence or a causal risk factor is questionable [3][4][5].
Patients with type A dissections usually present with chest pain, and those with type B dissections more often with back or abdominal pain. Abdominal pain is seen in approximately one-fifth of type A and about half of type B dissections. As noted in our case as well, it was the worsening abdominal pain and nonresolving diarrhea that led us to pursue abdominal imaging, which showed the findings as noted.
Pulse deficits in one or more arterial vessels have important prognostic implications; in our patient as well, when a temporary dialysis line was being placed, the right femoral pulse in particular was notably difficult to appreciate.
Most of the cases of aortic dissection described in the literature have often been referred to as missed or not diagnosed in a timely manner. In a retrospective review of 49 patients in Greece, almost a third of patients were initially admitted for other reasons [11]. As noted by Ramanath et al. in a very comprehensive review on acute aortic syndromes, "a key point for clinicians is that nearly 30% of patients later found to have AAD are initially diagnosed as having other conditions." This was the case in our patient as well: severe hypothyroidism leading to a possible low-flow state and renal failure was the initial diagnosis, and this line of thinking initially kept us from considering aortic dissection. There was a delay of 72 hours before the aortic dissection was diagnosed.
Severe hypothyroidism can present with impairment of renal function. The pathophysiology underlying hypothyroidism-induced renal impairment is not completely understood. It is postulated that renal blood flow is compromised in hypothyroidism secondary to a hypodynamic state, resulting in a low glomerular filtration rate (GFR) and reduced tubular secretory and reabsorptive capacity [12]. Furthermore, glomerular morphological changes are also seen in profound hypothyroidism [13]. Moreover, rhabdomyolysis secondary to severe hypothyroid myopathy also contributes to impairment of renal function; in fact, acute renal failure seen in hypothyroidism is often attributed to the associated rhabdomyolysis [14][15][16]. In our patient as well, acute renal failure in the setting of severe hypothyroidism with an increased CPK led us to consider hypothyroidism-induced renal failure. In retrospect, however, the CPK level in our patient was not as high as seen in typical cases of rhabdomyolysis.
In addition to this, complete anuria has not been reported in hypothyroidism.
With respect to anuria and renal failure, aortic dissection is not something that is always considered and is still not a very common presentation unless both renal arteries come off the false lumen of the dissection, as noted in our case [17]. Renal failure in this case was due to pure vascular compromise and not hypotension. Aldridge and Birchall reviewed the literature regarding dissecting aneurysm presenting as renal failure and did not find many such cases; per that review, Demos et al. described three cases and found only four others in the literature [18][19][20]. Similarly, some degree of renal dysfunction was noted in up to 8% of patients in a single-center study of 272 patients, but again frank renal failure with anuria is not commonly seen [21]. Woywodt and colleagues also reported a case of anuria in an otherwise healthy patient with hypertension, in whom the diagnosis of AAD was likewise made later in the admission [22]. There was also a reported case of a patient who had a high-speed motor vehicle crash resulting in a traumatic midthoracic aortic dissection; since it involved the orifices of both renal arteries, anuria was noted in that case as well [23]. In another retrospective study of all cases presenting to a center in Greece, renal failure/anuria was not the main presenting feature [11]. However, when present, preoperative renal failure in patients with acute type B dissection was noted to be an independent predictor of mortality [24,25]. There are several different imaging modalities that can be utilized in the diagnosis of an aortic dissection, with varying degrees of sensitivity and specificity. Chest X-ray has a sensitivity of 67% and specificity of 70%, mainly for thoracic aortic dissection, and the finding of a "widened mediastinum" is often missed, making it the least useful imaging modality [26]. Aortography, in which contrast is injected via the femoral arteries, used to be the study of choice for evaluating suspected aortic dissections many years ago, with a sensitivity between 81-91% [27]. Computed tomography (CT) with contrast has varying sensitivity and specificity depending on the type of dissection: CT for type A dissections is 80% and 94% sensitive for subacute and acute dissections, respectively, while CT for type B dissections is 93% and 100% sensitive for subacute and acute dissections, respectively; MRI has a reported sensitivity and specificity of 98% and 85%, respectively [28]. Transesophageal echo has also been studied and has a reported sensitivity between 97-99%, but a lower specificity of 77-85% due to false positive findings in the ascending aorta [29]. More often than not, it is with CT scans with contrast/CT angiograms that the diagnosis is made or incidentally noted, as in our case.
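To illustrate how such sensitivity and specificity figures translate into diagnostic confidence, the Python sketch below applies the standard likelihood-ratio update of a pretest probability; the pretest probability and the CT angiography specificity used are hypothetical, and the sensitivity is an approximation of the ~100% quoted above.

```python
def post_test_prob(pretest, sens, spec, positive=True):
    """Update a pretest probability with a test result using likelihood
    ratios: LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Hypothetical 30% pretest suspicion of dissection; the ~100% sensitivity
# quoted above is approximated as 0.99, with an assumed specificity of 0.95.
print(round(post_test_prob(0.30, sens=0.99, spec=0.95), 2))  # ~0.90
```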
With respect to management in patients presenting with type B dissection, medical therapy is usually the primary mode of treatment, controlling their hypertension; in cases where an intervention is required, as in ours, endovascular repair has better outcomes and less associated morbidity/mortality [30,31]. For acute type B aortic dissections with renal, mesenteric, or limb ischemia, or neurological deficits, open surgical or endovascular repair with stenting has a Class IIa recommendation, where there is conflicting evidence but in a favorable direction regarding the efficacy of intervention [32]. Open surgical repair consists of a left posterolateral thoracotomy with prosthetic graft replacement of the descending thoracic aorta, with a reported 10-17% mortality rate [32]. Endovascular repair consists of an aortic endograft or stent placement that obliterates the false lumen, with significantly lower rates of in-hospital mortality as compared to open surgery, 11% versus 33% in one series [32]. In addition, the more complicated the dissection, the higher the mortality rate, with a 50-88% mortality rate for patients with an unstable type B aortic dissection with renal or mesenteric ischemia. These lower morbidity and mortality figures with endovascular repair persisted even when the procedure was done in older patients in some series [33].
Renal failure, cerebrovascular accidents, paraplegia (temporary or permanent), access site injuries, and endovascular leaks are the usual categories of complications seen with these procedures and are noted to be reversible as well in most of these patients [34,35]. These complications are noted to decrease in centers doing a higher volume of these cases, and outcomes improve as the learning curve gets better. Perioperative mortality estimates from these procedures are also variable (10-22% on some series) and have been relatively improving as noted above [36,37].
Early diagnosis and intervention, along with associated comorbidities (with the Acute Physiology and Chronic Health Evaluation (APACHE) II score used as a general indicator of patient condition in some series), appear to be the major contributing factors, with the classic teaching that mortality increases by 1% per hour in these patients as the diagnosis is delayed [37,38].
Another interesting point from the management standpoint is that there are reports of cases in which renal function has been noted to recover up to 2 months after a dissection that led to frank renal failure from compromised blood flow to the renal arteries [39].
With respect to our patient, an exploratory laparoscopy was initially pursued by the general and vascular surgeons to make sure that there was no bowel ischemia/gangrene, given the evidence of colitis and fluid in his pelvis with tenderness on exam concerning for possible perforation. These findings were not present on the exploratory laparoscopy and the bowel looked viable. An endovascular repair was then pursued. The patient was initially in the intensive care unit and was slowly transitioned to rehabilitation, given deconditioning from the prolonged hospital course. He continues to have dialysis (now through a permanent dialysis catheter), and thus far, 2 weeks after his initial hospitalization, his renal function has not returned.
This case highlights the importance of having aortic dissection as an important differential in patients presenting with anuria who have a long standing history of uncontrolled hypertension.
Conclusions
(i) A key point for clinicians is that nearly 30% of patients later found to have acute aortic dissections are initially diagnosed as having other conditions.
(ii) With respect to anuria and renal failure, aortic dissection is not something that is always considered and is still not a very common presentation unless both renal arteries come off the false lumen of the dissection.
(iii) When present, preoperative renal failure in patients with acute type B dissection is noted to be an independent predictor of mortality.
(iv) In patients presenting with type B dissection, medical therapy is usually the primary mode of treatment, controlling hypertension; in cases where an intervention is required, endovascular repair has better outcomes and less associated morbidity/mortality.
Celecoxib increases lung cancer cell lysis by lymphokine-activated killer cells via upregulation of ICAM-1.
The antitumorigenic mechanism of the selective cyclooxygenase-2 (COX-2) inhibitor celecoxib is still a matter of debate. Using lung cancer cell lines (A549, H460) and metastatic cells derived from a lung cancer patient, the present study investigates the impact of celecoxib on the expression of intercellular adhesion molecule 1 (ICAM-1) and cancer cell lysis by lymphokine-activated killer (LAK) cells. Celecoxib, but not other structurally related selective COX-2 inhibitors (i.e., etoricoxib, rofecoxib, valdecoxib), was found to cause a substantial upregulation of ICAM-1 protein levels. Likewise, ICAM-1 mRNA expression was increased by celecoxib. Celecoxib enhanced the susceptibility of cancer cells to be lysed by LAK cells with the respective effect being reversed by a neutralizing ICAM-1 antibody. In addition, enhanced killing of celecoxib-treated cancer cells was reversed by preincubation of LAK cells with an antibody to lymphocyte function associated antigen 1 (LFA-1), suggesting intercellular ICAM-1/LFA-1 crosslink as crucial event within this process. Finally, celecoxib elicited no significant increase of LAK cell-mediated lysis of non-tumor bronchial epithelial cells, BEAS-2B, associated with a far less ICAM-1 induction as compared to cancer cells. Altogether, our data demonstrate celecoxib-induced upregulation of ICAM-1 on lung cancer cells to be responsible for intercellular ICAM-1/LFA-1 crosslink that confers increased cancer cell lysis by LAK cells. These findings provide proof for a novel antitumorigenic mechanism of celecoxib.
INTRODUCTION
Celecoxib is a selective inhibitor of the prostaglandin (PG)-synthesizing enzyme cyclooxygenase-2 (COX-2) [1]. Owing to its analgesic and anti-inflammatory effects, the selective COX-2 inhibitor (coxib) was approved for the symptomatic treatment of pain associated with rheumatoid arthritis and arthrosis in 1998. In addition, a 6-month treatment with a daily dose of 800 mg celecoxib was demonstrated to result in significant reductions of colorectal polyps in patients with familial adenomatous polyposis (FAP) [2], resulting in celecoxib's approval for adjuvant treatment of FAP patients by the US Food and Drug Administration in 1999. In the case of lung cancer, reports have suggested celecoxib as a treatment and preventive option [3][4][5][6] and as a means to enhance the response to preoperative paclitaxel and carboplatin in early-stage non-small cell lung cancer (NSCLC) [7]. Taking into account that lung cancer is the most common cancer worldwide in terms of both incidence and mortality, and that the response and remission rates in NSCLC patients still remain relatively low [8], these findings may offer new pharmacotherapeutic options in this field.
On the cellular level, celecoxib exerts its anticarcinogenic action primarily via induction of cancer cell apoptosis or inhibition of proliferation (for review see [9]). Increasing evidence suggests that a significant part of this action occurs independently of celecoxib's COX-2 inhibitory activity [9][10][11][12][13]. In a recent study, celecoxib was shown to even enhance COX-2 expression and PG formation by lung cancer cells as key events within its proapoptotic action [14]. However, the impact of celecoxib on cancer cell lysis resulting from tumor-immune interactions has been poorly investigated. In one study, celecoxib was found to produce a downregulation of major histocompatibility complex I molecule expression on metastatic breast cancer cells, thereby leading to improved recognition by natural killer (NK) cells and conferring enhanced tumor cell lysis [15].
The intercellular adhesion molecule 1 (ICAM-1), a glycoprotein consisting of five extracellular immunoglobulin-like domains, a transmembrane domain and a C-terminal intracellular domain [16], plays an important role in tumor immune surveillance and the elimination of neoplastic cells [17]. Several studies indicate that cytokine-induced upregulation of ICAM-1 on cancer cells [18][19][20][21][22][23] or cancer cell transfection with the ICAM-1 gene [24,25] confers increased cytotoxic tumor cell lysis by immune cells. In the same context, a recent in vitro study suggests ICAM-1 upregulation as part of pharmacotherapeutic strategies. Accordingly, cannabinoids, a group of substances with diverse anticarcinogenic properties, have been shown to enhance the susceptibility of lung cancer cells to cytolytic death mediated by lymphokine-activated killer (LAK) cells via an increase of ICAM-1 on the cancer cell surface [26]. In line with its antitumorigenic responses observed in vitro, ICAM-1 expression has likewise been reported to be negatively correlated with metastasis of several cancer types in clinical studies [27][28][29].
The present study investigates the impact of celecoxib on tumor immune surveillance and the role of ICAM-1 within this process. Here we show that celecoxib, but not other structurally related COX-2 inhibitors, induces an upregulation of ICAM-1 expression on lung cancer cells, thereby causing increased cancer cell lysis by LAK cells. These findings provide evidence for a hitherto unknown mechanism underlying the anticarcinogenic action of celecoxib.
Celecoxib induces ICAM-1 expression on both protein and mRNA level
To investigate the impact of celecoxib on ICAM-1 expression and tumor cell lysis two human NSCLC cell lines (A549, H460) as well as metastatic cells derived from a lung cancer patient were used. In each of these cell types celecoxib was found to stimulate the protein expression of ICAM-1 (Fig. 1A-1C). According to an all-or-none principle this effect was significant after a treatment with 30 μM celecoxib in all three cell lines.
Additional experiments were performed to investigate the impact of three other structurally similar selective COX-2 inhibitors (etoricoxib, rofecoxib, valdecoxib) on ICAM-1 protein expression (Fig. 1D-1F). In fact, an upregulation of ICAM-1 protein greater than 3-fold was unique for celecoxib and was not shared by other selective COX-2 inhibitors. These findings are consistent with recently published data by our group indicating an upregulation of COX-2 expression by celecoxib, but not by other COX-2 inhibitors [14].
Time-course experiments revealed a significant upregulation of ICAM-1 protein expression in lung cancer cells after a 48-h incubation with 30 μM celecoxib (Fig. 2A-2C). In accordance with the elevated protein levels, an increase of ICAM-1 mRNA was detected after 6 h in each cell line (Fig. 2D-2F).
Celecoxib increases LAK cell-mediated tumor cell lysis
To investigate the functional consequence of increased ICAM-1 expression by celecoxib, LAK cell-mediated tumor cell killing was investigated using a co-culture of LAK cells and pretreated cancer cells at a defined effector:target cell ratio (see Materials and Methods). Noteworthy, lymphocyte function associated antigen 1 (LFA-1), the cognate ICAM-1 receptor on the surface of immune cells, has recently been demonstrated to confer LAK cell-mediated killing of lung cancer cells previously incubated with the ICAM-1-upregulating phytocannabinoid cannabidiol [26].
The close interactions between tumor cells and LAK cells were visualized by scanning electron microscopy showing a firm attachment of the LAK cell with their processes to the tumor cell surface (Fig. 3A, upper two panels). The identity of LAK cells was verified by immuno-labelling using an LFA-1 antibody in conjunction with a secondary antibody coupled to 15 nm colloidal gold, detectable as bright dots by high resolution electron microscopy ( Fig. 3A, lower two panels with inserts).
The scanning electron microscopy analysis shows that gold grains indicating LFA-1 expression decorate the cell surface and processes of LAK cells (e.g., lowermost panel, open arrows), whereas the cell bodies and filopodial extensions of the underlying tumor cells are devoid of LFA-1 labelling (lowermost panel, filled arrows).
To address the impact of celecoxib on LAK cell-mediated tumor cell lysis, tumor cells that had been incubated with increasing concentrations of celecoxib for 48 h were subsequently labeled with calcein-AM and co-cultured with LAK cells. Following a 6-h incubation, tumor cell lysis was measured by detection of calcein fluorescence in the supernatant. As shown in Fig. 3B-3D, celecoxib at 30 μM increased LAK cell-mediated tumor cell lysis of each tested lung cancer cell line. In some cases lysis of cancer cells appeared to be higher in the absence of LAK cells, resulting in negative calculated percent LAK cytotoxicity values, which is in line with observations from other groups [23,30].
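For orientation, the Python sketch below shows the standard calcein-release normalization that yields such percent cytotoxicity values, including the negative values mentioned above; the exact normalization used in this paper's Materials and Methods is not reproduced here, and the fluorescence readings are hypothetical.

```python
def percent_lysis(sample, spontaneous, maximal):
    """Standard calcein-release cytotoxicity formula (our assumption, not
    necessarily the paper's exact normalization):
    % lysis = (sample - spontaneous) / (maximal - spontaneous) * 100.
    Readings below the spontaneous release give negative percentages,
    matching the negative cytotoxicities mentioned above."""
    return 100.0 * (sample - spontaneous) / (maximal - spontaneous)

# Hypothetical fluorescence readings (arbitrary units)
print(round(percent_lysis(sample=5200, spontaneous=3000, maximal=12000), 1))  # ~24.4
```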
Celecoxib does not interfere with LAK cell function
Further experiments were performed to address the impact of celecoxib on tumor cell lysis under conditions where LAK cells are exposed to this compound as would be the case under in vivo conditions. To this end, LAK cells prepared and cultured using the same protocol were additionally incubated with celecoxib or
ICAM-1 antibody suppresses celecoxib-induced LAK cell-mediated tumor cell lysis
To confirm a causal link between celecoxib-induced upregulation of ICAM-1 protein expression and the concomitant increase of LAK cell-mediated tumor cell lysis by celecoxib, a neutralizing antibody to ICAM-1 was tested for its inhibitory action on tumor cell lysis. In all tumor cells investigated, the ICAM-1 antibody significantly suppressed the celecoxib-induced tumor cell lysis by LAK cells when compared to cells treated with vehicle and isotype control antibody, respectively (Fig. 5A-5C). Noteworthy, neither the neutralizing ICAM-1 antibody nor the isotype control antibody altered the celecoxib-induced loss of cancer cell viability as assessed by WST-1 analysis (data not shown).
[Figure caption: Values are means ± SEM obtained from densitometric analysis of n = 3 blots. *P < 0.05, **P < 0.01, ***P < 0.001 vs. corresponding vehicle control of the respective ICAM-1 analysis; Student's t test. D-F. Real-time RT-PCR analysis of the impact of 30 μM celecoxib on ICAM-1 mRNA expression over a 48-h incubation period. Values are means ± SEM of n = 4 experiments. *P < 0.05, **P < 0.01, ***P < 0.001 vs. corresponding vehicle control of the respective ICAM-1 analysis; Student's t test.]
[Figure 3 caption, beginning truncated: "...panels) visualizes the interactions between LAK cells and tumor cells. Electron micrographs at lower magnification show that LAK cells firmly attach to the spread A549 tumor cells with their processes (upper two panels). In addition, immunolabelling with LFA-1 antibody and a secondary antibody coupled to 15 nm colloidal gold was used to mark LAK cells. The gold labelling is visible as bright dots in the electron micrographs at higher magnifications (lower two panels with inserts) and decorates the cell body (second to last panel) as well as the processes of the LAK cell (lowermost panel, corresponding to the boxed area in the low-magnification second-from-above panel). Note that the 15 nm gold labeling is confined to the processes of the LAK cell (open arrows) but is absent from the intermingled filopodia of the underlying tumor cell (filled arrows). Right panels: Concentration-dependent impact of celecoxib on LAK cell-mediated killing of A549 (B), H460 (C) or lung cancer patient's metastatic cells (D). Tumor cells were incubated with celecoxib at the indicated concentrations for 48 h. Subsequently, these cells were co-incubated with LAK cells for 6 h. Values are means ± SEM of n = 24 (B, 6 donors), n = 28 (C, 7 donors) or n = 20 (D, 5 donors) experiments. *P < 0.05, ***P < 0.001 vs. corresponding vehicle control; one-way ANOVA plus post hoc Dunnett test."]
LFA-1 antibody reverses celecoxib-induced tumor cell killing by LAK cells
The interaction of tumor and LAK cells is shown by light microscopy in Fig. 6A. According to immunocytochemical analysis (Fig. 6B), CD11a (LFA-1) is present on the surface of LAK cells (Fig. 6B, green fluorescence), but not detectable on the surface of tumor cells.
To verify an intercellular ICAM-1/LFA-1 crosslink as the crucial event within the process of LAK cell-mediated lysis of celecoxib-treated tumor cells, LAK cells were preincubated with a neutralizing LFA-1 antibody for 2 h before the killing assay was started. According to the histograms presented in Fig. 6C-6E, the neutralizing LFA-1 antibody significantly inhibited the celecoxib-induced increase in LAK cell-mediated tumor cell lysis. These results indicate LFA-1 as a potential receptor for ICAM-1 conferring LAK cell-mediated tumor cell lysis.
Celecoxib does not affect human bronchial epithelial cells
To evaluate if celecoxib has any effect on nontumor cells, the bronchial epithelial cell line BEAS-2B was used in further experiments. As shown in Fig. 7A,
DISCUSSION
The present study provides first-time evidence that celecoxib induces upregulation of the adhesion molecule ICAM-1 on the surface of tumor cells, resulting in increased tumor cell lysis by LAK cells. Several lines of evidence support this notion. First, celecoxib caused a substantial upregulation of ICAM-1 expression at both the mRNA and protein levels in lung tumor cell lines as well as in metastatic lung cancer cells. Second, celecoxib treatment of lung cancer cells resulted in an enhanced susceptibility to cytotoxic lysis by LAK cells. Third, the celecoxib-induced increase of LAK cell-mediated lysis of tumor cells was abrogated by neutralizing antibodies against ICAM-1 and LFA-1. The LFA-1 heterodimer (CD11a/CD18) is the natural ligand of ICAM-1 [31] and has been reported to represent an important link conjugating ICAM-1-bearing cells with natural killer cells [32] and to confer lymphocyte-induced tumor cell killing [33]. The data presented here suggest LFA-1 as a crucial counter-receptor in this process [18][19][20][21][22][23][24][25][26].
In the present study, upregulation of ICAM-1 protein expression in lung cancer cells was confined to celecoxib and was not elicited by other related selective COX-2 inhibitors (i.e., etoricoxib, rofecoxib, valdecoxib) bearing a diaryl heterocyclic structure. This pattern is in agreement with a recent study from our group showing a specificity of celecoxib among selective COX-2 inhibitors in inducing lung cancer cell apoptosis and upregulating COX-2 expression [14]. Likewise, celecoxib, but not other selective COX-2 inhibitors, has been reported to induce apoptosis in synovial fibroblasts [34] and to cause antiproliferative effects on colon cancer cells and reduction of tumor growth in vivo [35].
Clearly, the concentrations of celecoxib causing ICAM-1 expression (i.e., 30 μM) or apoptosis in other studies (i.e., 40-100 μM) [9] exceed the plasma concentrations of celecoxib, which have been reported to reach a maximum of 7.67 μM after single-dose administration of 800 mg celecoxib to human volunteers [1]. However, the unique effects of celecoxib among the coxibs may be due to an intracellular accumulation of this COX-2 inhibitor. Accordingly, celecoxib was detected at five- to ten-fold higher intracellular concentrations in different tumor cell types when compared to other coxibs (etoricoxib, lumiracoxib, rofecoxib, valdecoxib) [36]. According to Maier et al. [36], the intracellular accumulation of celecoxib results from integration into cellular phospholipid membranes and may thus provide a molecular basis for celecoxib's ability to interact with non-COX-2 targets in vivo despite comparatively low plasma concentrations [36]. For example, celecoxib at 50 μM has been reported to cause a COX-2-independent activation of the transcription factor nuclear factor κB [37,38], which plays a pivotal role in ICAM-1 expression [39]. In addition, higher intracellular concentrations may be achieved in vivo through longer exposure times; accordingly, cancer patients receive repeated treatment over weeks or months, resulting in cumulative effects of the respective chemotherapy or radiation therapy [9,40].
Upregulation of ICAM-1 is likewise supposed to mediate several adverse effects such as perpetuation of bronchial injury by adhesion of neutrophils to epithelial cells [41]. Consequently, the impact of celecoxib on healthy tissue was evaluated by use of the bronchial epithelial cell line BEAS-2B, which was established from normal bronchial epithelium of non-cancerous individuals [42]. However, in contrast to lung cancer cells celecoxib neither significantly affected ICAM-1 protein expression nor susceptibility of these cells against LAK cell-induced cytotoxicity. In line with this finding celecoxib was previously reported to impair the growth of colorectal cancer in vivo without causing toxic effects on normal gut epithelium [43].
With regard to ICAM-1 expression, some studies indicate an inhibition of protein expression by celecoxib. Thus, celecoxib was shown to cause inhibition of ICAM-1 expression in colon cancer cells and a decreased adhesion to fetal calf serum (FCS)-coated plastic wells, with both events mediated via a COX-2-independent pathway [44]. However, the maximal inhibitory effect of celecoxib (10 μM) on ICAM-1 expression occurred after a 4-h incubation, with a maximal decrease of about 45%, and already declined after 6 h [44]. In another investigation, the same group found a 4-h incubation of tumor necrosis factor α-stimulated human umbilical vein endothelial cells with celecoxib to decrease ICAM-1 and vascular cell adhesion molecule 1 expression, with maximal inhibitions of about 60% and 50% at 10 μM celecoxib, respectively, followed by reduced adhesion of colon cancer cells to endothelial cells [45]. Further studies indicate that celecoxib treatment of mice with experimentally induced atherosclerosis [46] or autoimmune encephalomyelitis [47], or of rats with colitis and lung injury [48, 49], decreases the expression of ICAM-1. The reasons for this apparent discrepancy remain to be identified but may be explained, in the case of the in vitro studies, by the different experimental settings and the specificity of the cell types used.
Altogether, the results of this study argue for an antitumorigenic function of ICAM-1. This view is further corroborated by animal studies that determined a 2.6-fold greater tumor volume in ICAM-1-deficient mice than in wild-type mice 14 days after injection of melanoma cells [50], or the development of malignant tumors in LFA-1-deficient but not wild-type mice injected with cancer cells [51]. In athymic nude mice, the non-psychoactive cannabidiol elicited an increase of ICAM-1 protein in A549 xenografts and an antimetastatic effect that was fully reversed by a neutralizing antibody against ICAM-1 [52]. In other murine models, ICAM-1 overexpression on tumor cells was found to elicit reduced tumor growth [25, 53]. Analyses of primary tumors from patients with breast cancer revealed a negative correlation of ICAM-1 expression with tumor size, lymph node metastasis and tumor infiltration, as well as better relapse-free and overall survival in patients with ICAM-1-positive tumors than in those with negative tumors [27]. In line with this notion, the incidence of lymph node or liver metastasis was significantly lower in patients with ICAM-1-positive colorectal tumors than in those with ICAM-1-negative tumors [29], and tumor-infiltrating lymphocytes were more frequently observed in the ICAM-1-positive tumors in that study [29]. In gastric cancer, ICAM-1 expression on cancer cells was significantly decreased in patients with lymph node metastasis, and the prognosis was poorer for patients with ICAM-1-negative tumors [28]. Other studies showed an association between ICAM-1 expression and infiltration of lymphocytes into the tumor tissue of patients with renal cell carcinoma [54], colorectal [55] and esophageal cancer [56].
Collectively, the present study identified upregulation of ICAM-1 expression in lung cancer cells, leading to LAK cell-mediated tumor cell lysis, as a novel antitumorigenic mechanism of celecoxib. Further studies addressing the impact of celecoxib on tumor immune surveillance in vivo are warranted to better understand the pharmacological action of this drug.
Cell culture
The NSCLC cell lines A549 and H460, the lung cancer patient's metastatic cells as well as the human bronchial epithelial cell line BEAS-2B (ATCC-LGC, Wesel, Germany) were maintained in DMEM supplemented with 10% heat-inactivated FCS, 100 U/ml penicillin and 100 μg/ml streptomycin. A549 human lung carcinoma cells were purchased from DSMZ (Braunschweig, Germany; A549: DSMZ no.: ACC 107, species confirmation as human with IEF of MDH, NP; fingerprint: multiplex PCR of minisatellite markers revealed a unique DNA profile). H460 cells were purchased from ATCC-LGC (Wesel, Germany; ATCC™ Number: HTB-177™; cell line confirmation by cytogenetic analysis). Following resuscitation of frozen cultures none of the cell lines was cultured longer than 6 months.
Lung cancer patient's metastatic cells were obtained from resection of a brain metastasis of a 47-year-old female Caucasian with NSCLC, with the procedure of cell preparation described recently [52]. The patient had been informed about the establishment of cellular models from her tumor and had given written informed consent. The procedure was approved by the institutional ethics committee. Experiments were performed using passages 2-15 of these cells.
Cells were grown in a humidified incubator at 37°C and 5% CO2. All incubations with test substances were performed in serum-free medium. COX-2 inhibitors were dissolved in DMSO and diluted in PBS, giving a final DMSO concentration of 0.1% (v/v); PBS containing 0.1% (v/v) DMSO was used as vehicle control. The neutralizing ICAM-1/CD54 antibody and the isotype control antibody were dissolved in PBS. The LEAF™ Purified anti-human CD11a and LEAF™ Purified Mouse IgG1, κ isotype control antibodies were supplied dissolved in 0.2 μm-filtered PBS (pH 7.2) containing no preservative, with an endotoxin level < 0.01 ng/μg of protein. PBS was used for further dilutions and served as vehicle control in all antibody experiments.
Generation of LAK cells
Peripheral blood mononuclear cells (PBMCs) were isolated from buffy coats of healthy donors. A volume of 50-70 ml of each buffy coat was diluted 1:2 with PBS, carefully poured over 20 ml of Lymphocyte Separation Medium (LSM 1077) and centrifuged at 1171 × g for 25 min; no brake was applied during deceleration. Following centrifugation, lymphocytes concentrating in the interphase (white phase) were collected and washed twice with PBS. Washing was performed by centrifugation at 300 × g for 10 min in the first step and 200 × g for 10 min in the second step. After centrifugation, pellets were resuspended in RPMI 1640 supplemented with 10% heat-inactivated FCS, 100 U/ml penicillin and 100 μg/ml streptomycin. Adherent cells were removed from PBMC suspensions (2 × 10⁶ cells/ml) by attachment to the plastic flask bottom for 1-2 h. This procedure was repeated once more before cells in the culture supernatant were subjected to further treatment. For generation of LAK cells, the cell suspension was incubated with 10 ng/ml IL-2 for 6 days at a density of about 1.5 × 10⁶ cells/ml. After 3 days the medium was changed and fresh IL-2 was added.
For some experiments, fractions of LAK cells were treated with vehicle or celecoxib. In this case, vehicle or celecoxib was added to the LAK cell suspension in the culture flask 48 h before starting the LAK cell cytotoxicity assay.
LAK cell cytotoxicity assay
The cytotoxicity of LAK cells on tumor or BEAS-2B cells was determined by the calcein-AM release assay. Tumor or BEAS-2B cells (target cells) were seeded into 96-well flat-bottom plates at a density of 1 × 10⁴ cells/well and were allowed to grow for 24 h. Cells were washed with PBS and treated with vehicle or test substance in serum-free DMEM. Following a 48-h incubation period, target cells were washed and labeled with 5 μM calcein-AM for 30 min. Subsequently, cells were washed with PBS and generated LAK cells (effector cells) were added to target cells at an effector:target cell ratio of 4:1 in a final volume of 100 μl/well. After a 6-h incubation (37°C, 5% CO2), supernatants were transferred to other wells of the 96-well plate and remaining target cells were lysed with 2% (v/v) Triton® X-100. The fluorescence of supernatants and lysed target cells was recorded using a 485 nm excitation filter and a 535 nm emission filter with a Tecan infinite pro200 plate reader. LAK cell-induced cytotoxicity was monitored by the release of calcein by cancer cells into the supernatant due to toxic effects induced by LAK cells in the co-culture. To account for a possible modulation of cancer cell viability by vehicle or test substances, the fluorescence of cancer cells in the absence of effector cells, referred to as spontaneous release of calcein, was subtracted from these values. Finally, values were normalised to the release of calcein that can be achieved maximally by the effector cells. The percentage of LAK cytotoxicity was calculated as follows: % LAK cytotoxicity = (fluorescence of supernatant of sample well with effector cells - fluorescence of spontaneous release) / (fluorescence of maximal release - fluorescence of spontaneous release) [18]. Blank values were subtracted from the experimental data. Before LAK cytotoxicity was calculated, the raw data of the fluorescence measurements were analysed with the Nalimov test and outliers were excluded. In parallel to the LAK cell cytotoxicity assay, the viability of tumor cells was determined under equal conditions using the WST-1 assay (Roche, Grenzach-Wyhlen, Germany).
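For clarity, the percentage calculation described above can be expressed in a few lines of code. The following Python sketch is illustrative only; the readings are hypothetical triplicate fluorescence values, and the blank subtraction follows the procedure given in the text.

```python
import numpy as np

def lak_cytotoxicity(sample, spontaneous, maximal, blank=0.0):
    """Percent LAK cytotoxicity from calcein-AM fluorescence readings.

    sample      -- supernatant fluorescence of wells with effector cells
    spontaneous -- release by target cells without effector cells
    maximal     -- release after Triton X-100 lysis of target cells
    blank       -- plate background subtracted from all readings
    """
    sample, spontaneous, maximal = (np.asarray(x, dtype=float) - blank
                                    for x in (sample, spontaneous, maximal))
    return 100.0 * (sample - spontaneous) / (maximal - spontaneous)

# Hypothetical triplicate readings (arbitrary fluorescence units):
print(lak_cytotoxicity([5200, 5400, 5100], 1800, 9500, blank=300))
```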
Experiments to determine the functional involvement of ICAM-1 in enhanced LAK cell-mediated tumor cell killing were performed using a neutralizing antibody against ICAM-1. For these experiments, target cells were incubated with 1 μg/ml of the ICAM-1 antibody or an isotype control antibody as negative control for 2 h. Following preincubation of cancer cells with the antibodies, supernatants were removed carefully and, without washing, the co-incubation with LAK cells was started. For analysis of the involvement of LFA-1 in LAK cell-mediated tumor cell lysis, a CD11a antibody or an isotype control antibody was used. These experiments were carried out by incubating LAK cells with 1 μg/ml of the respective antibody for 2 h before the cytotoxicity assay was started by adding the LAK cell suspension, including the antibody contained therein, to the target cells.
Quantitative RT-PCR analysis
Lung cancer cells were seeded into 24-well plates at a density of 1 × 10⁵ cells/well and allowed to grow for 24 h. Following incubation of cells with celecoxib or its vehicle for the indicated times, cell culture media were removed and cells were lysed for subsequent RNA isolation. Total RNA was isolated using the RNeasy total RNA Kit (Qiagen, Hilden, Germany). β-Actin (internal standard) and ICAM-1 mRNA levels were determined by quantitative real-time RT-PCR using the TaqMan® RNA-to-CT™ Kit (Applied Biosystems, Darmstadt, Germany) according to the manufacturer's instructions. Primers and probes for human β-actin and ICAM-1 were Gene Expression Assay™ products (Applied Biosystems, Darmstadt, Germany).
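The paper does not state the quantification scheme used to evaluate the TaqMan data; a common choice for such assays with an internal standard is the comparative 2^(-ΔΔCt) method, sketched below with hypothetical Ct values.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative target expression by the comparative 2^(-ddCt) method,
    with beta-actin as the reference (internal standard) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for ICAM-1 and beta-actin,
# celecoxib-treated vs. vehicle control:
print(fold_change_ddct(24.1, 17.0, 26.3, 17.2))  # 4.0-fold induction
```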
Western blot analysis
To analyze protein levels of ICAM-1, lung tumor or non-tumor cells were grown in 6-well plates at a density of 2 × 10⁵ cells/well for 24 h and subsequently incubated with vehicle or test substance for 48 h. After incubation, cells were washed with PBS, harvested and lysed in solubilization buffer (50 mM HEPES, 150 mM NaCl, 1 mM EDTA, 1% (v/v) Triton® X-100, 10% (v/v) glycerol, 1 mM PMSF, 1 mM orthovanadate, 1 mg/ml leupeptin, 10 mg/ml aprotinin). Lysis was performed for at least 30 min on ice with frequent mixing on a vortex mixer. Subsequently, lysates were centrifuged at 10,000 × g for 5 min and the supernatants were used for Western blot analysis. Total protein of cell lysates was determined using the bicinchoninic acid assay (Pierce, Rockford, USA). Proteins were separated on 10% sodium dodecyl sulfate (SDS) polyacrylamide gels and then transferred to nitrocellulose membranes (Roth, Karlsruhe, Germany) that were blocked with 5% milk powder (Bio-Rad, Munich, Germany). Membranes were incubated with a primary mouse monoclonal antibody raised against ICAM-1 (Santa Cruz Biotechnology, Heidelberg, Germany) at 4°C overnight. Subsequently, blots were probed with horseradish peroxidase-conjugated anti-mouse IgG (New England Biolabs GmbH, Frankfurt am Main, Germany) and incubated for 1 h at room temperature. Antibody binding was visualized with a chemiluminescent solution (100 mM Tris-HCl pH 8.5, 1.25 mM luminol, 200 mM p-coumaric acid). Densitometric analysis of band intensities was achieved by optical scanning and quantification using the Quantity One 1-D Analysis Software (Bio-Rad, Munich, Germany). To identify the band size on the Western blots, the prestained SDS-PAGE Standard (Broad Range; Bio-Rad, Munich, Germany) was used. A regression of the prestained standard revealed a band size of ICAM-1 at 90 kDa and of β-actin at 42 kDa. Vehicle controls were defined as 100% for evaluation of changes in protein expression. To ascertain equal protein loading in Western blots of cell lysates, membranes were probed with an antibody raised against β-actin (Sigma-Aldrich). All densitometric values were normalized to β-actin.
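The band-size regression mentioned above (log molecular weight against migration distance of the prestained standard) can be reproduced as follows; the marker sizes and distances in this sketch are hypothetical.

```python
import numpy as np

# Hypothetical migration distances (mm) of a broad-range prestained standard
mw_kda   = np.array([200.0, 116.0, 97.0, 66.0, 45.0, 31.0, 21.0])
distance = np.array([  8.0,  15.0, 18.0, 24.0, 31.0, 38.0, 45.0])

# Log-linear fit: log10(MW) as a function of migration distance
slope, intercept = np.polyfit(distance, np.log10(mw_kda), 1)

def band_size_kda(d_mm):
    """Estimate the molecular weight (kDa) of a band from its migration."""
    return 10.0 ** (slope * d_mm + intercept)

print(round(band_size_kda(19.5)))  # a band near the ICAM-1 position (~90 kDa)
```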
Analysis of CD11a with fluorescence microscopy
For imaging of CD11a, A549 cells were seeded at a density of 1-1.5 × 10⁵ cells/ml in 4-well culture slides (BD Falcon™, Heidelberg, Germany). After 3 days, tumor cells were incubated with LAK cells at an effector:target cell ratio of 4:1 for 3 h. Subsequently, cells were fixed with 4% paraformaldehyde overnight, washed with PBS and blocked with PBS containing 0.3% (v/v) Triton® X-100 and 5% (v/v) FCS for 1 h. After washing with PBS, cells were incubated with a CD11a antibody (1:250) for 1 h; the same antibody as in the LAK cell cytotoxicity assay was used for this purpose. As secondary antibody, a goat anti-mouse Alexa Fluor® 488-labeled IgG (1:1000) was used, likewise incubated for 1 h. All antibodies were diluted in PBS containing 0.3% (v/v) Triton® X-100 and 1% (v/v) FCS. Notably, experiments with the secondary antibody were performed in the dark.
Electron microscopy
The NSCLC cell line A549 was seeded on Melinex® films (Plano, Wetzlar, Germany) at a density of 5 × 10⁵ cells per well in a 24-well plate. After tumor cells were co-cultured with LAK cells (effector:target cell ratio of 4:1) for 3 h, cells were carefully washed with PBS containing magnesium and calcium (each at 1 mM). Afterwards, cells were fixed overnight with 4% paraformaldehyde containing magnesium and calcium (each at 1 mM). Subsequently, cells were blocked with 5% (v/v) NGS in PBS for 30 min and incubated with the primary CD11a (LFA-1) antibody (1:250) diluted in PBS containing 5% (v/v) NGS; the same antibody as in the LAK cell cytotoxicity assay was used for this purpose. After 1 h of incubation, cells were washed with PBS containing 0.1% (v/v) Tween® 20 before a secondary goat anti-mouse antibody coupled to 15 nm gold (1:50), diluted in PBS containing 5% (v/v) NGS, 0.1% (v/v) Tween® 20 and 0.1% (v/v) fish skin gelatin, was added for 1 h. Cells were washed with PBS containing 5% (v/v) NGS, 0.1% (v/v) Tween® 20 and 0.1% (v/v) fish skin gelatin and post-fixed with 2.5% (v/v) glutaraldehyde.
For scanning electron microscopy, the film supports with the attached cells were washed with 0.1 M sodium phosphate buffer (pH 7.3) and subsequently dehydrated in a graded series of acetone. Critical point drying was performed in an Emitech K850 critical point dryer (Emitech Ltd., Ashford, UK) after three rounds of immersion in CO2. The dried film supports were mounted on scanning electron microscopy stubs with adhesive carbon tape (Plano, Wetzlar, Germany) and coated with a carbon layer using a Leica SCD500 coater (Leica Microsystems, Wetzlar, Germany). Specimens were viewed in a Merlin VP compact scanning electron microscope (Carl Zeiss Microscopy, Jena, Germany) operated at 5 kV. Images with sizes of 1024 × 768 and 2048 × 1536 pixels were recorded with the SmartSEM software (Carl Zeiss Microscopy) and processed with Adobe Photoshop CS6 (Adobe Inc., San Jose, CA, USA).
Statistical analysis
Comparisons between groups were performed with Student's two-tailed t test or with one-way ANOVA plus post hoc Bonferroni or Dunnett test using GraphPad Prism 5.0 (GraphPad Software, Inc., San Diego, USA).
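The same comparisons can be reproduced outside GraphPad Prism; the following SciPy sketch shows a two-tailed t test and a one-way ANOVA on hypothetical lysis data (the post hoc Bonferroni or Dunnett corrections used in the study would follow the ANOVA and are omitted here).

```python
from scipy import stats

# Hypothetical % lysis values for three treatment groups
vehicle    = [18.2, 21.5, 19.8, 20.1]
celecoxib  = [34.6, 31.9, 36.2, 33.8]
etoricoxib = [19.0, 20.7, 18.4, 21.2]

# Student's two-tailed t test for a two-group comparison
t, p = stats.ttest_ind(vehicle, celecoxib)
print(f"t test: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across all groups
f, p = stats.f_oneway(vehicle, celecoxib, etoricoxib)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
```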
Persistence and Biodegradation of Spilled Residual Fuel Oil on an Estuarine Beach
[This corrects the article on p. 807 in vol. 29.].
medium resulted in improved growth by lactic streptococci at 30 C. The medium, called M17, contained: Phytone peptone, 5.0 g; polypeptone, 5.0 g; yeast extract, 2.5 g; beef extract, 5.0 g; lactose, 5.0 g; ascorbic acid, 0.5 g; GP, 19.0 g; 1.0 M MgSO4·7H2O, 1.0 ml; and glass-distilled water, 1,000 ml. Based on absorbance readings and total counts, all strains of Streptococcus cremoris, S. diacetilactis, and S. lactis grew better in M17 medium than in a similar medium lacking GP or in lactic broth. Enhanced growth was probably due to the increased buffering capacity of the medium, since pH values below 5.70 were not reached after 24 h of growth at 30 C by S. lactis or S. cremoris strains. The medium also proved useful for isolation of bacterial mutants lacking the ability to ferment lactose; such mutants formed minute colonies on M17 agar plates, whereas wild-type cells formed colonies 3 to 4 mm in diameter. Incorporation of sterile GP into skim milk at 1.9% final concentration resulted in enhanced acid-producing activity by lactic streptococci when cells were inoculated from GP milk into skim milk not containing GP. M17 medium also proved superior to other media in demonstrating and distinguishing between lactic streptococcal bacteriophages. Plaques larger than 6 mm in diameter developed with some phage-host combinations, and turbid plaques, indicative of lysogeny, were also easily demonstrated for some systems.
Lactic streptococci are nutritionally fastidious and require complex media for optimum growth (9,10,11,14,16). In synthetic media, all strains require at least six amino acids and at least three vitamins (2, 27). Their homofermentative acid-producing nature requires that media be well-buffered for reasonable growth response; in this regard Hunter (12) observed that more growth and larger colonies (0.7 to 1.0 mm in diameter after 48 h) resulted in a medium containing lactose, yeast extract, peptone, and beef extract to which 0.05 M sodium phosphate had been added.
Bacteriophages for lactic streptococci usually are assayed by the agar overlay technique described by Adams (1), using one of the several complex media cited above. During a study of the plating efficiency of lactic streptococcal phages, Lowrie and Pearce (19) observed that not all bacterial strains, especially those of Streptococcus cremoris, grew well when inoculated into the most widely used of the complex media then available. They devised a new medium, designated M16, which overcame this problem; it was unique in containing a plant protein extract (Phytone) but lacked phosphate, relying on peptone and acetate for buffering capacity. The omission of phosphate was intentional to allow calcium supplementation for phage assays. However, Thomas et al. (30) incorporated phosphate into this medium for their study of streptococcal proteinases, calling the more-buffered medium T5.
In the present investigation, a correlation was obtained between restriction in sizes of bacterial colonies and phage plaques and a rapid decline in pH with the M16 medium. Attempts were made, therefore, to improve the buffer strength of the medium without resorting to the use of phosphate, well known for precipitation problems in bacteriological media due to its ability to sequester alkaline earth metals (5). The recent reports by Douglas et al. (8) and Douglas (7) suggested that glycerophosphate (GP) would be suitable for this purpose, especially since its use allowed large plaques on S. lactis to develop (8). The present report describes the resulting new medium, designated M17, and its use in demonstrating improved growth of lactic streptococci and their bacteriophages.
MATERIALS AND METHODS Medium. M17 broth medium is made by adding the following ingredients to 1,000 ml of glass-distilled water in a 2-liter flask: polypeptone (BBL, Cockeysville, Md.), 5.0 g; Phytone peptone (BBL), 5.0 g; yeast extract (BBL), 2.5 g; beef extract (BBL), 5.0 g; lactose (May and Baker Ltd., Dagenham, England), 5.0 g; ascorbic acid (Sigma Chemical Co., St. Louis, Mo.), 0.5 g; β-disodium GP (grade II, Sigma Chemical Co.), 19.0 g; and 1.0 M MgSO4·7H2O (May and Baker, Ltd.), 1.0 ml. This GP concentration was optimum for growth and prevented the pH of S. cremoris cultures from falling below 5.9 after growth for 15 h at 30 C. Broth is dispensed (10 ml) into tubes and autoclaved at 121 C for 15 min; the pH of the broth (22 to 25 C) is 7.15 ± 0.05. Bottom agar used for assay of bacterial colonies or phage plaques is prepared by adding 10.0 g of Davis agar (Davis Gelatine Ltd., Christchurch, N.Z.) to 940 ml of glass-distilled water and heating the mixture to boiling to dissolve the agar. The remaining ingredients, except lactose, are added to the dissolved agar and the mixture is autoclaved at 121 C for 15 min. After cooling to 45 C in a temperature-controlled water bath, a sterile solution of lactose (5.0 g in 50.0 ml of glass-distilled water, sterilized at 121 C for 15 min), to which has been added 10.0 ml of sterile 1.0 M CaCl2·6H2O, is gently added to the melted agar basal medium. The calcium addition is necessary only when the bottom agar plates are to be used for growing phage, but its addition has no adverse effect on use of the medium for plating bacteria; usually, slight cloudiness develops when the calcium is added. After mixing carefully to avoid bubbles, 15- to 18-ml quantities are added to sterile petri plates. The bottom agar plates are held overnight (15 to 18 h) at 22 to 25 C to dry and then checked for any contaminating colonies; they are then stored at 2 to 5 C until used. Top overlay agar is prepared by adding 4.5 g of Davis agar to 1,000 ml of glass-distilled water and heating to boiling until the agar is dissolved. The remaining broth ingredients, including lactose but excluding calcium chloride, are then added and the medium is dispensed (50-ml quantities) into prescription bottles and autoclaved (121 C, 15 min). Top agar is used for carrying diluted phages and bacteria to bottom agar plates for determining titers of virus preparations and colony counts in bacterial cultures. M16, T5, and lactic broth were prepared as described previously (9, 19, 30).
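Because the recipe above is given per litre, batch preparation reduces to simple scaling. The following Python sketch (illustrative only) tabulates the amounts needed for an arbitrary volume of M17 broth.

```python
# Per-litre composition of M17 broth, taken from the recipe above
# (grams, except the MgSO4 solution, which is in ml).
M17_PER_LITRE = {
    "polypeptone (g)": 5.0,
    "Phytone peptone (g)": 5.0,
    "yeast extract (g)": 2.5,
    "beef extract (g)": 5.0,
    "lactose (g)": 5.0,
    "ascorbic acid (g)": 0.5,
    "beta-disodium glycerophosphate (g)": 19.0,
    "1.0 M MgSO4.7H2O (ml)": 1.0,
}

def scale_recipe(litres):
    """Amounts needed to prepare the given volume of M17 broth."""
    return {name: round(amount * litres, 2)
            for name, amount in M17_PER_LITRE.items()}

for name, amount in scale_recipe(2.5).items():
    print(f"{name}: {amount}")
```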
GP-SM. Severe protein denaturation and browning occurred when the GP was autoclaved with skim milk (SM). Therefore, a stock solution containing 9.5 g of GP per 10.0 ml of glass-distilled water was sterilized (121 C, 15 min) separately, and 0.2 ml was added per 10 ml of sterile SM, providing a final concentration of 1.9% GP.
Molskness of Oregon State University.
Bacteriophage strains. Two bacteriophages were used, both isolated from cheddar cheese whey. Phage 799 is virulent for S. cremoris AM2, and phage 690 is virulent for S. cremoris SK11; each phage, however, will form plaques on hosts other than the one on which it was originally isolated.
Growth measurement. Bacterial growth was assayed by recording absorbance readings (600 nm) at 30-to 60-min intervals of the various strains inoculated (1.0%) into 10.0 ml of the various media in flasks fitted with a side arm accommodated by a Bausch & Lomb Spectronic 20 colorimeter. Colony-forming units per milliliter of culture were determined after blending (60 s) of 1:100 dilutions in 10% M17 broth (21) followed by serial dilution, as appropriate; aliquots (0.1 ml) were poured on the surface of M17 bottom agar plates after being mixed with 2.5 ml of top agar as described below for the phage assay procedure, except calcium chloride was omitted.
Culture activity in milk. The influence of daily subculturing in M17, M16, and LB for 10 days on acid-producing activity in SM was measured. Strains were maintained in the three broth media by inoculation at 1% and incubation at 30 C for 24 h. Each day, the 24-h broth cultures were each inoculated in duplicate (1%) into 10 ml of SM containing 9.5% solids (100 g of powder plus 910 ml of distilled water; sterilized at 121 C for 15 min) and incubated, one tube at 30 C and the other at 22 C. Tubes at 30 C were tested for pH after 6 h, and tubes at 22 C were tested for ability to coagulate milk when held for 15 h. Studies also were carried out to determine the influence of culturing strains in SM containing GP on their subsequent acid-producing activity when inoculated into sterile SM. Strains AM1, AM2, ML3, and ML8 were incubated at 22 C for 15 h in GP-SM and SM. Each strain was subcultured from these two types of milk into SM and incubated at 30 C in a temperature-controlled water bath; pH measurements were taken at hourly intervals.
Bacteriophage assays and stocks. To ensure homogeneity, bacteriophage stocks were renewed by single-plaque isolation (3, 4, 22). Aliquots (0.1 ml) of an overnight (15-h) M17 broth culture of the appropriate bacterial host were placed in sterile test tubes (10 by 75 mm) fitted with aluminum caps. One drop (0.05 ml) of sterile calcium chloride (1.0 M) was then added to each tube, followed by 0.1 ml of phage previously serially diluted in 10% M17 broth so that about 20 plaques per plate resulted. After 3 to 10 min at 22 to 25 C (room temperature) to allow for phage adsorption, melted and cooled (45 C) M17 top agar (2.5 ml per tube) was added, and the tube contents were immediately poured on the surface of hardened M17 bottom agar in sterile plastic petri plates (10 by 90 mm). Plates were incubated at 30 C and observed periodically for isolated plaques from 3 h onward. When they appeared, usually between 3 and 5 h, two or three well-isolated plaques were picked by touching the top layer with sterile 152-mm applicator sticks; each plaque was then transferred to 0.5 ml of chilled (2 to 5 C) M17 broth contained in a test tube (10 by 75 mm) and held in the refrigerator overnight. (The titer of these young plaques is 10⁵ to 10⁶/ml.) Incubation of phage plaque plates was continued until the next morning, when they were examined to ensure that no other plaques had developed partially coincidental with the plaques originally selected and that the plaques were typical in morphology and size for the particular phage-host system. One or more of the M17 broth phage-containing tubes were then used to prepare the phage stock. This was done by adding the entire contents of the tube to 10.0 ml of a 3.5-h M17 broth culture (absorbance = 0.10 to 1.15 at 600 nm) of the appropriate bacterial host growing at 30 C; 0.1 ml of CaCl2·6H2O (1.0 M) also was added. With continued incubation, lysis occurred from 2 h onward, usually by 4 h. If lysis did not occur, the stock was discarded and prepared from another plaque isolate. Overnight (15- to 18-h) incubation of phage-infected cultures would sometimes yield turbid cultures due to emergence of phage-resistant mutants. After lysis, the phage-laden culture was centrifuged at 4,500 rpm for 10 min in a bench-top clinical centrifuge. The supernatant was then filter sterilized by passing through a sterile syringe-mounted membrane filter (0.45 μm; Millipore Corp.) into a sterile screw-capped tube. Titer of the stock was determined by counting plaques that developed in M17 top agar when the serially diluted sterile lysates were plated as described above. Stocks were stored at 2 to 5 C. Titers ranged from 10⁸ to 10¹⁰ plaque-forming units per ml and would occasionally increase two- to threefold during the first week of storage. The phages were relatively stable when stocks were prepared in this manner, declining in titer only 5 to 10% over 6 months of storage.
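The titer determination described above amounts to dividing the plaque count by the product of the dilution factor and the plated volume. A minimal sketch, with a hypothetical count:

```python
def titer_pfu_per_ml(plaques, dilution_factor, volume_plated_ml=0.1):
    """Plaque-forming units per ml of undiluted phage stock.

    plaques          -- plaques counted on one plate
    dilution_factor  -- e.g., 1e-7 for a 10^-7 serial dilution
    volume_plated_ml -- diluted phage added per plate (0.1 ml above)
    """
    return plaques / (dilution_factor * volume_plated_ml)

# 23 plaques from 0.1 ml of a 10^-7 dilution gives 2.3e9 PFU/ml,
# within the 10^8 to 10^10 range reported for these stocks.
print(f"{titer_pfu_per_ml(23, 1e-7):.1e} PFU/ml")
```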
RESULTS
Bacterial growth. Figure 1 shows the buffering capacity of M17 broth in comparison to three other media. Although T5 medium was almost as well buffered as M17, it was unsuitable for bacteriophage assays because of calcium precipitation and was therefore excluded from further study. The well-buffered nature of M17 under growth conditions also was apparent. For example, five lactic streptococcal strains tested gave pH values ranging from 5.78 to 6.10 after 24 h of growth at 30 C in M17 broth; these strains grown in M16 and LB, however, gave pH values from 4.70 to 4.87 and 4.42 to 4.70, respectively. When cells of S. cremoris AM1 were grown either in M16 or M17 broth (Fig. 2) and inoculated into M16, M17, or LB media, the best growth response occurred in M17 medium. By 9 h, the absorbance readings of M16-grown cells revealed 49 and 52% more growth in M17 as compared to M16 and LB, respectively; for M17-grown cells, these increases were 30 and 45%, respectively. Also, a 1- to 2-h lag occurred when M16-grown cells were used, whereas, except in LB, the lag was eliminated with M17-grown cells. These data are typical for all 12 lac+ strains included in the study. In support of the absorbance data, M17 consistently allowed higher total cell counts in each case. For example, with S. cremoris AM1, total counts after 15 h at 30 C in M16, M17, and LB were 1.0 × 10⁸, 1.6 × 10⁸, and 5.6 × 10⁷, respectively, and for S. cremoris AM2 they were 3.7 × 10⁷, 2.0 × 10⁷, 2.0 × 10⁸, and 5.0 × 10⁶, respectively.
The medium also proved useful in selecting carbohydrate mutants, especially those unable to ferment lactose. For example, colonies of lac+ strains measured 3 to 4 mm in diameter at 5 days, whereas lac- mutants, growing only on the small amount of glucose provided by the yeast extract (5) in the medium, reached colony sizes of less than 1.0 mm. The large colony size was typical of all 12 lac+ strains tested except S. cremoris P2 (an X-ray derivative of S. cremoris HP), which developed more slowly because of a requirement for carbon dioxide for rapid growth on agar plates (29). It should also be mentioned that colonies of all lac+ strains tested other than P2 were clearly visible for counting within 15 h, whereas this occurred only rarely for strains derived from M16 or lactic broth cultures after the second transfer. S. lactis strains, however, did coagulate milk even though the 6-h test showed them to be impaired in acid-producing activity. The pH values attained for strains AM1, AM2, ML3, ML8, and C2 in SM by 15 h at 22 C after maintenance for 10 days in the three broth media appear in Table 2.
Maintenance of strains in SM containing GP also preserved their rapid acid-producing abilities in milk, especially early in the growth period. Data for S. lactis ML3, typical for the three other strains tested, appear in Fig. 3. It may be seen that for at least 5 h the SM culture was 1 h slower in achieving the same degree of acidity as the culture grown in the GP-SM, although approximately the same final acidity was achieved by each culture by 15 h.
Bacteriophage development. The M17 medium was superior to the three other media for observing bacteriophages. This is illustrated in Fig. 4, where an M17 broth-derived stock of phage 799 replicating on S. cremoris 368 was assayed on M17 and M16 agars. The more clearly defined plaques on the M17 agar are evident. The same was true for whey-derived phage preparations when plated on M17 and M16 agars, as well as for all other virus preparations assayed, although the titers on both media were similar. During this investigation into the efficiency of plating of lactic streptococcal phages on various hosts in M17 agar, it became clear that the medium, because it supported better host growth, allowed the demonstration of phenomena commonly associated with other bacterial virus systems but not previously reported for lactic streptococcal phages. Three examples appear in Fig. 5, where extremely large clear plaques, turbid plaques, and plaques exhibiting diffusion of phage lysin to surrounding uninfected cells are evident.

[Table 1 (caption only): Acid-producing activity of S. cremoris AM and S. lactis ML strains as revealed by pH attained after 6 h at 30 C.]
DISCUSSION
It is clear from the data presented that the growth of lactic streptococci in M17 medium is improved over that attained in two other commonly used media, M16 and LB (Fig. 1). The buffering action of GP, as evidenced by the higher final pH in mature M17 broth cultures, apparently allows more total growth and reduces cell death and injury caused by the lower pH reached in other media. Maintenance of lactic streptococci in M17 with daily subculture preserved their acid-producing activity in milk (Tables 1 and 2). Maintenance in M16 or LB, however, yielded cells with impaired acid-producing ability, no doubt due to cell death and injury caused by the lower pH attained in these poorly buffered media. These findings suggested that maintenance of the organisms in milk, even with daily subculture, might cause impaired acid production when cells were reintroduced into milk. This apparently was the case, since cells of four widely used starter strains showed improved acid-producing properties when initiating growth in milk if the cells originated from milk containing 1.9% GP (Fig. 3). Since early rapid acid production is highly desirable in such products as cheddar and short-set cottage cheese, future practical value may be found in buffering bulk starter milk with GP. This may prove to be an economical step, since GP is inexpensive and widely used in foods and as a carrier in certain medicines.
Slow acid production by lactic streptococci in milk may be due to loss of the ability to use lactose, presumably a rare event (20, 23, 26), or more frequently to loss of proteolysis, which limits the ability of the organism to obtain nitrogen from milk protein at a rate sufficient to allow rapid cell growth (6). Recent data suggest that the genetic determinants for both of these cellular activities (lac and prt) are carried on plasmids (23, 25, 26), although direct proof is lacking. Reasons for the apparent difference in stability of lac and prt also have not been shown. Since proteinase is localized in the cell wall (30), it is likely that prolonged exposure to acid, which lactic streptococci experience in both milk and nonmilk media other than M17, alters cell wall integrity and encourages loss of proteinase activity. Studies on the influence of different pH levels on the frequency of appearance of prt- types would be revealing; incorporation of GP in media may minimize the loss, especially when, as shown herein, the acid-producing activity of cells is improved by minimizing their exposure to low-pH conditions. Since prt- appears to be a stable state inherited by descendant cells (6), the effect of the acid environment at the genetic level deserves consideration.
Preliminary data obtained in our laboratory indicate that GP addition to milk protects cells from freezing damage. Frozen concentrates of lactic starter cultures are now widely used in the United States, especially for direct inoculation of milk for buttermilk manufacture and to inoculate bulk starter milk intended for use in the manufacture of cheddar and cottage cheese. Direct inoculation of vat milk with frozen concentrates, however, has not yet materialized, since at least 10⁷ cells per ml are required to initiate acid production in the milk at a rate that ensures cheese manufacture in the accustomed time (18). The large volume of concentrate presently required to achieve such a cell density makes its use for this purpose impractical. Lyophilized cell concentrates may be applied in this manner in the future (28), and use of GP as a growth medium-neutralizing agent will no doubt prove useful.
The usefulness of M17 medium in selecting carbohydrate-requiring mutants also deserves mention. Since wild-type colonies grow to a large size, differences between mutants and parent cells are maximized. The medium, therefore, is finding extensive use in our laboratories to isolate and study such mutants and no doubt will be of value to others for the same purpose. It also is likely that media for other acid-producing bacteria, especially lactobacilli, will be improved by incorporation of GP. In this regard, we have found that S. thermophilus and Lactobacillus bulgaricus strains grow well in the medium, especially if the pH is adjusted to about 6.8 prior to inoculation; comparative growth studies in other media have not yet been made.
Few meaningful studies on plaque morphology and lysogeny in lactic streptococci have been reported (13, 15, 17), presumably because the media usually used allow little differentiation of plaque types because of poor buffering capacity. Nyiendo et al. (24) found that buffering the medium was necessary to achieve high titers of lactic phages, and we have found that whey phage stocks at 10⁹ to 10¹¹ plaque-forming units/ml can be prepared from SM containing GP.
It is noteworthy that the pH of M17 medium does not fall below 5.7 even upon incubation of lactic streptococcal cultures for 24 h at 30 C. Thus, not only are differences in plaque size, as determined by the phage-host interaction, distinguishable, but other phage phenomena such as lysogeny, as visualized by turbid plaques (Fig. 5), became demonstrable. In a subsequent publication, we will report on the use of M17 medium to demonstrate widespread lysogeny in the lactic streptococci.
Synergistic effect of Pseudomonas alkylphenolica PF9 and Sinorhizobium meliloti Rm41 on Moroccan alfalfa population grown under limited phosphorus availability
This study looked at the synergistic effect of Pseudomonas alkylphenolica PF9 and Sinorhizobium meliloti Rm41 on the Moroccan alfalfa population (Oued Lmaleh) grown under symbiotic nitrogen fixation and limited phosphorus (P) availability. The experiment was conducted in a growth chamber and, two weeks after sowing, the young seedlings were inoculated with Sinorhizobium meliloti Rm41 alone or combined with a suspension of Pseudomonas alkylphenolica PF9. The seedlings were then submitted to limited available P (insoluble P using Ca3HPO4) versus a soluble P form (KH2PO4) at a final concentration of 250 μmol P·plant⁻¹·week⁻¹. After two months of P stress, the experiment was evaluated through agro-physiological and biochemical parameters. The results indicated that the inoculation of alfalfa plants with the Sinorhizobium strain alone or combined with the Pseudomonas strain significantly (p < 0.001) improved the plant growth and the physiological and biochemical traits examined in comparison to the uninoculated and P-stressed plants. For most sets of parameters, the improvement was more obvious in plants co-inoculated with both strains than in those inoculated with Sinorhizobium meliloti Rm41 alone. In fact, under limited P availability, the co-inoculation with the two strains significantly (p < 0.01) enhanced the growth of alfalfa plants, evaluated by fresh and dry biomasses, plant height and leaf area. The results also indicated that the enhancement noted in plant growth was positively correlated with the shoot and root P contents. Furthermore, the increase in plant P contents in response to bacterial inoculation improved cell membrane stability, reflected by low malondialdehyde (MDA) and electrolyte leakage (EL) contents, and photosynthetic-related parameters such as chlorophyll contents, the maximum quantum yield of PS II (Fv/Fm) and stomatal conductance (gs). Our findings suggest that Pseudomonas alkylphenolica PF9 can act synergistically with Sinorhizobium meliloti Rm41 in promoting alfalfa growth under low-P availability.
Introduction
In the Mediterranean area, forage and grain legumes are largely cultivated for their high nutritional quality, high protein content, and their favorable effects on soil fertility (Lahrizi et al., 2021; Farissi et al., 2018). In fact, these species contribute to the incorporation of nitrogen into agro-pastoral ecosystems with a beneficial economic impact, helping to reduce or limit the use of chemical nitrogen fertilizers through the nitrogen-fixing symbiosis involving rhizobia (Oukaltouma et al., 2020; Faghire et al., 2011).
Among forage legumes, alfalfa (Medicago sativa L.) is one of the leguminous forage species with numerous socio-economic and environmental benefits. Due to its contribution to sustainable agriculture and its production of feed proteins per unit area, it is the most common forage legume in Moroccan crop-livestock systems, as well as in many European and North American countries. In fact, alfalfa has the ability to provide more nitrogen to agricultural ecosystems than the total amount of nitrogen applied by fertilization (Rengel, 2002). Furthermore, when correctly associated with specific rhizobial strains, this crop is important in maintaining the structure and nitrogen fertility of the soils in which it grows (Guiñazú et al., 2010).
Despite the agro-environmental importance of legumes, their culture is concentrated in the northern regions with a favorable climate. In fact, over the last few decades, the environmental constraints have led to a reduction in grain and forage legume production areas in many countries in the southern part of the Mediterranean basin, including Morocco.
Besides osmotic stress, legumes are sensitive to nutritional constraints such as low phosphorus (P) availability, particularly during symbiotic nitrogen fixation (SNF), leading to a significant yield decrease (Oukaltouma et al., 2020). Indeed, SNF poses additional demands for P, with up to 20% of total plant P being allocated to nodules, and any P deficiency may influence the activity of rhizobia and consequently the efficiency of the symbiosis (Drevon, 2017). Moreover, the high reactivity of P with some cations such as iron, aluminum (Al) and calcium (Ca), forming insoluble compounds, reduces its mobility in the soil solution. Gessa et al. (2005) reported that the mobility of phosphate is influenced by pH and Ca concentration in the soil: the increase in Ca concentration with increasing pH slows down the phosphate flux, and the presence of Al inhibits phosphate mobility. These reactions result in very low P availability and low efficiency of the phosphate fertilizers used by plants, and consequently limit the SNF process, root growth, photosynthesis (translocation of sugars and other functions), rhizobial growth and nodule development (Lazali et al., 2021; Boudanga et al., 2015; Neila et al., 2014).
The most important strategies employed in the last few years to reduce the effects of environmental constraints on legume production have focused on the selection of plant germplasm tolerant to drastic conditions (Latrach et al., 2014). However, increasing rhizobial tolerance and exploiting possible synergies with plant growth-promoting rhizobacteria (PGPR) might constitute another approach to improve plant productivity and the performance of the rhizobial symbiosis under unfavorable conditions (Keneni et al., 2010). In fact, PGPR can solubilize P into available forms or induce other plant growth-promoting responses under low-P conditions (Matse et al., 2020). Tajini and Drevon (2014) reported that the positive interaction between PGPR and plant roots can increase soil-P availability, especially under soil P deficiency, resulting in an increase in the number and size of nodules, the amount of nitrogen assimilated by nodules and the density of rhizobia in the rooting medium (Guiñazú et al., 2010). In fact, PGPR act directly and indirectly on plant growth improvement through a variety of mechanisms, such as the production of growth-promoting substances and the solubilization of minerals such as P (Korir et al., 2017). They also increase the native bacterial populations through various mechanisms that convert insoluble inorganic and organic soil P into plant-available forms and therefore improve plant nutrition (Guiñazú et al., 2010). Matse et al. (2020) reported that Rhizobium strains combined with PGPR can enhance the symbiotic potential of the rhizobia, through the enhancement of nitrogenase activity and macronutrient contents in white clover plants under low-P conditions. In the same sense, intraspecific variations in SNF efficiency within rhizobial and PGPR populations under low-P availability have been shown in many other legume species. Indeed, Guiñazú et al. (2010) observed that Medicago sativa L. plants co-inoculated with Sinorhizobium meliloti B399 and Bacillus sp. M7c showed significant increases in root and shoot dry weights, length, number and surface area of roots, and symbiotic properties. Also, the co-inoculation with PGPR and rhizobia has a synergistic effect on growth, and the use of PGPR may improve the effectiveness of rhizobial biofertilizers for common bean production (Korir et al., 2017). Hence, the exploitation of the available genetic variability is a promising way to optimize the legume-rhizobia symbiosis under P limitations.
It is in this context that the present study was undertaken. We aimed to evaluate the synergistic effect of Pseudomonas alkylphenolica PF9 and Sinorhizobium meliloti Rm41 on the Moroccan alfalfa population (Oued Lmaleh) under SNF and low-P availability. The emphasis was on the agro-physiological and biochemical aspects associated with tolerance to this environmental constraint. The search for more efficient inorganic-phosphate-solubilizing bacteria is a promising route to optimize the growth and yield of legumes and their rhizobial symbiosis under low-P availability. This could ensure adequate plant nutrition and contribute to grain and forage yield improvement and stability in low-P soils.
Plant material and growth conditions
The plant material that was the subject of this study consists of the Moroccan alfalfa (Medicago sativa L.) population Oued Lmaleh (OL). Seeds were supplied by the National Institute for Agronomic Research (INRA, Marrakech, Morocco). Local populations of alfalfa are widely used in traditional Moroccan agroecosystems, oases and mountains, and strongly contribute to the socio-economic development of local families as the main feed for their livestock. They have been cultivated for many centuries and are still widely used by farmers in these traditional agroecosystems. Continuous natural and human selection has led, by this time, to their adaptation to the local habitats, with distinct agro-morphological characteristics of the landraces, which have reached Hardy-Weinberg equilibrium.
The seeds of the OL population were germinated in pots of 15 cm diameter and 15 cm height containing sterilized perlite as a substrate. The experiment was conducted in a growth chamber at 28 ± 2°C day/night, 60%-80% relative humidity, and a photoperiod of 16 h (18,000 lx). Two weeks after sowing, the young seedlings were inoculated or co-inoculated three times with a suspension (10⁸ bacteria per mL) of the Sinorhizobium meliloti Rm41 strain alone or combined with Pseudomonas alkylphenolica PF9. These two strains were isolated from the Beni-Mellal region in Morocco and identified at the molecular level using the housekeeping genes gyrB and rpoD, with the accession numbers CP021808.1 and KY950274.1, respectively. These strains were chosen for their potential of tricalcium phosphate (Ca3HPO4) solubilization in solid and broth NBRIP media and for their in vitro synergistic potential according to Habbadi et al. (2017). Briefly, 100 μl of a Pseudomonas alkylphenolica PF9 suspension obtained in liquid YEM medium was spread on Petri dishes containing solid YEM medium. Then, discs of sterile filter paper were soaked in a dense suspension of the Sinorhizobium meliloti Rm41 strain and placed on the Petri dish on which the PGPR strain had been spread. The absence of an inhibition halo after 5 days of incubation showed that the PGPR strain has no antagonistic effect on the growth of the rhizobial strain selected for the nodulation of alfalfa plants. Then, the seedlings were submitted to different treatments in terms of P form (soluble or insoluble P) and bacterial treatment, Sinorhizobium meliloti Rm41 alone (R) or combined with Pseudomonas alkylphenolica PF9 (PGPR). The applied treatments were:
- irrigating seedlings with nitrogen-free nutrient solution containing Ca3HPO4 as insoluble P form (-N + IP);
- irrigating seedlings with nitrogen-free nutrient solution containing monopotassium phosphate (KH2PO4) as soluble P form (-N + SP);
- irrigating seedlings with nitrogen-free nutrient solution containing Ca3HPO4 as insoluble P form, with inoculation with the suspension of Sinorhizobium meliloti Rm41 alone (-N + IP + R);
- irrigating seedlings with nitrogen-free nutrient solution containing monopotassium phosphate (KH2PO4) as soluble P form, with inoculation with the suspension of Sinorhizobium meliloti Rm41 alone (-N + SP + R);
- irrigating seedlings with nitrogen-free nutrient solution containing Ca3HPO4 as insoluble P form, with inoculation with the suspension of Pseudomonas alkylphenolica PF9 only (-N + IP + PGPR);
- irrigating seedlings with nitrogen-free nutrient solution containing Ca3HPO4 as insoluble P form, with simultaneous co-inoculation with the suspensions of Sinorhizobium meliloti Rm41 and Pseudomonas alkylphenolica PF9 (-N + IP + R + PGPR).
The composition of the nutrient solution used consisted of P applied in the form of KH2PO4 (sufficient supply) or Ca3HPO4 (insoluble P, deficient supply) at a final concentration of 250 μmol P·plant⁻¹·week⁻¹ (Neila et al., 2014). Urea was applied at 2 mmol·plant⁻¹ to the nutrient solution only during the initial 2 weeks of growth to avoid nitrogen deficiency during nodule development. Subsequently, the plants were grown in nitrogen-free nutrient solution. After 60 days of P stress, the plants were harvested, measured, and subjected to different agro-physiological and biochemical analyses governing plant growth and development.
Plant biomass, plant height and leaf area
For the biomass measurements, shoots and roots were separated and their fresh weight (FW) was determined immediately. The dry weight (DW) of shoots and roots was measured using a precision balance after drying at 80°C for 48 h. The height of the aerial part of the plants was measured using a precision ruler graduated in centimeters and millimeters. The leaf area was estimated using MESURIM software version 3.4.4.0: the leaves belonging to the same plant were cut, laid out on a white sheet containing a scale, and then scanned using a digital scanner. These parameters were measured on five plants per pot and grouped as three replicates.
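MESURIM essentially counts leaf pixels in the scanned image and converts them to area using the included scale. An equivalent computation, sketched here with Pillow and NumPy under the assumption of a dark leaf on a white background, is illustrative only (the file name and scale are hypothetical):

```python
import numpy as np
from PIL import Image

def leaf_area_cm2(image_path, pixels_per_cm, threshold=200):
    """Approximate leaf area from a scan against a white background.

    Grayscale pixels darker than `threshold` (0-255) are counted as
    leaf; `pixels_per_cm` comes from the scale included in the scan.
    """
    gray = np.asarray(Image.open(image_path).convert("L"))
    leaf_pixels = np.count_nonzero(gray < threshold)
    return leaf_pixels / pixels_per_cm ** 2

# A 300-dpi scan has 300 / 2.54, i.e. about 118, pixels per cm:
# print(leaf_area_cm2("leaves_plant1.png", pixels_per_cm=300 / 2.54))
```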
Phosphorus contents
For the determination of assimilable P in shoots and roots, 0.5 g of the dry matter of each part was incinerated at 600°C for 6 h. The ash obtained was collected in 3 mL of HCl (10 N) and filtered. The filtrate was adjusted to 100 mL with distilled water. Subsequently, the P contents of shoots and roots were determined colorimetrically using the molybdate blue method (Murphy and Riley, 1962). P concentration was measured by reading the absorbance at a wavelength of 820 nm on a UV-VIS absorption spectrophotometer (DLAB, SP-UV1000, China), after color development at 100°C for 10 min. A standard curve was established with KH2PO4 solutions.
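Reading concentrations off the KH2PO4 standard curve is a linear fit followed by inversion. A sketch with hypothetical standards and A820 readings:

```python
import numpy as np

# Hypothetical KH2PO4 standards (ug P per ml) and their A820 readings
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
std_a820 = np.array([0.005, 0.092, 0.181, 0.268, 0.359, 0.447])

slope, intercept = np.polyfit(std_conc, std_a820, 1)

def p_concentration(a820):
    """ug P per ml of digest, from the molybdate-blue standard curve."""
    return (a820 - intercept) / slope

print(round(p_concentration(0.215), 3), "ug P/ml")
```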
Relative water content (RWC)
RWC was estimated as described in Farissi et al. (2018) by recording the turgid weight (TW) of 0.1 g fresh leaflet (FW) samples after maintaining them in water for 4 h, followed by drying in a hot-air oven until a constant weight was achieved (DW). The RWC was calculated using the following formula: RWC (%) = [(FW - DW)/(TW - DW)] × 100.

Malondialdehyde (MDA) content

The malondialdehyde (MDA) content was determined according to the method described by Savicka and Škute (2010). Samples of 50 mg of fresh leaves were homogenized with 2 mL of trichloroacetic acid (TCA, 0.1%) and centrifuged at 14,000 rpm for 15 min. After centrifugation, 1 mL of supernatant was added to 2.5 mL of thiobarbituric acid (0.5% TBA) prepared in 20% TCA. The mixture was placed in a water bath at 95°C for 30 min and then immediately cooled in an ice bath to stop the reaction. The optical density was read at a wavelength of 532 nm on a UV-VIS absorption spectrophotometer (DLAB, SP-UV1000, China). The values obtained were then corrected by subtracting the non-specific absorbance at 600 nm. The concentration of MDA was calculated using its extinction coefficient ε = 155 mM⁻¹·cm⁻¹.
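Both determinations reduce to simple arithmetic: RWC from the three weights, and MDA by Beer-Lambert using ε = 155 mM⁻¹·cm⁻¹ after the A600 correction. The sketch below uses the sample size (50 mg) and extract volume (2 mL) given above; the absorbance readings are hypothetical, and a 1-cm path length is assumed.

```python
def rwc_percent(fw, tw, dw):
    """Relative water content: (FW - DW) / (TW - DW) x 100."""
    return 100.0 * (fw - dw) / (tw - dw)

def mda_umol_per_g(a532, a600, extract_ml=2.0, fw_g=0.05,
                   epsilon_mm=155.0, path_cm=1.0):
    """MDA content (umol per g fresh weight) from corrected absorbance."""
    mda_mm = (a532 - a600) / (epsilon_mm * path_cm)  # mmol/L in the assay
    return mda_mm * extract_ml / fw_g                # umol per g fresh weight

print(f"RWC = {rwc_percent(0.100, 0.145, 0.022):.1f} %")
print(f"MDA = {mda_umol_per_g(0.412, 0.051):.3f} umol/g FW")
```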
Stomatal conductance (gs)
Stomatal conductance (gs) was measured on healthy leaves as described in Latrach et al. (2014) using a porometer (Leaf Porometer Version 5.0, Decagon Devices, Inc., USA) at a temperature of 25 ± 1°C and a relative humidity of 55 ± 5%. It was expressed in mmol H2O·m⁻²·s⁻¹.
Statistical analysis
Statistical analysis was performed using SPSS version 22 and consisted of analysis of variance (ANOVA); means were compared using Tukey's test. XLSTAT software version 2014 (Addinsoft, Paris, France) was used to determine the correlations among the measured parameters.
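The SPSS analysis (one-way ANOVA followed by Tukey's test) can be reproduced with SciPy and statsmodels; the sketch below uses hypothetical shoot dry weights for three of the treatments defined above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical shoot dry weights (mg per plant) for three treatments
data = {
    "-N+IP":        [16.9, 18.2, 18.3],
    "-N+IP+R":      [27.5, 28.4, 29.2],
    "-N+IP+R+PGPR": [39.8, 41.1, 41.2],
}

f, p = stats.f_oneway(*data.values())
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.4f}")

values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```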
Effect on plant biomass
The effect of Pseudomonas alkylphenolica PF9 and/or Sinorhizobium meliloti Rm41 on the plant biomass of the Moroccan alfalfa population studied under soluble (KH2PO4) or insoluble (Ca3HPO4) P forms is indicated in Fig. 1. Our results indicated that the inoculation of plants with the rhizobial strain alone or combined with the Pseudomonas strain significantly increased both fresh and dry biomasses under P deficit in comparison to the uninoculated and P-stressed plants. For the fresh weight (Fig. 1), the comparison between the two inoculants indicated that the two strains act synergistically (p < 0.001) in promoting alfalfa fresh weights under low P availability. In fact, the shoot and root fresh weights recorded in the presence of both inoculants under low-P availability were 199.5 and 110.9 mg·plant⁻¹, respectively, against 82.5 and 88.9 mg·plant⁻¹ for the plants inoculated with the rhizobial strain alone under the same conditions of P stress. Also, the data showed a significant decrease (p < 0.001) in the shoot and root fresh weights of alfalfa plants inoculated with the rhizobial strain alone and grown under insoluble P compared to the plants inoculated with the same strain and supplied with the soluble P form. However, no significant difference (p > 0.05) was noted between the two P forms when the Pseudomonas strain was added to the rooting medium of stressed plants.
Under the conditions of P deficit, alfalfa plants exhibited a significant (p < 0.001) increase in their dry biomass (shoots and roots) when they were inoculated with the rhizobial strain alone or co-inoculated with both rhizobacterial strains in comparison to the uninoculated and P-stressed plants. Indeed, the values recorded in stressed plants in the presence of the rhizobial strain only were 17.8 and 28.35 mg·plant⁻¹ for shoots and roots, respectively, whereas 40.7 and 37.6 mg·plant⁻¹ were noted in the presence of the two inoculants in the rooting medium of the stressed plants. As for the fresh biomass, no significant difference (p > 0.05) was noted between the two P forms when the Pseudomonas strain was added to the rooting medium of stressed plants compared to the plants inoculated with rhizobia and provided with P in a soluble form (KH2PO4).

Fig. 2 shows the effect of inoculation with Sinorhizobium meliloti Rm41 alone or combined with Pseudomonas alkylphenolica PF9 on the plant heights (Fig. 2A) and leaf area (Fig. 2B) of the alfalfa population studied under limited available P. Both inoculant treatments significantly (p < 0.001) improved the plant heights and leaf area under the insoluble form of P, with significant differences between them (p < 0.001). In fact, under low P availability, the highest plant height was noted when the alfalfa plants were inoculated at the same time with both rhizobacterial inoculants, 14.50 cm versus 12.23 cm when the inoculation was done with the Sinorhizobium strain alone. For the leaf area (Fig. 2B), the highest values (p < 0.01) under insoluble P conditions were observed in plants co-inoculated with both bacterial inocula (1.90 cm²) in comparison to P-stressed plants inoculated with the rhizobial strain alone (1.64 cm²).
Phosphorus contents
The shoot and root P contents of alfalfa plants grown under soluble or insoluble P forms and inoculated with Sinorhizobium meliloti Rm41 alone or combined with Pseudomonas alkylphenolica PF9 are shown in Fig. 3. The results showed that the highest P contents (p < 0.001) in shoots and roots under insoluble P conditions were noted when the plants were co-inoculated with the two strains at once. Generally, the amounts of P recorded were more pronounced in the underground parts than in the aerial parts of the plants. The P contents obtained when the P-stressed plants were inoculated with the rhizobial strain only were 283.17 and 429.18 mg g⁻¹ DW in shoots (Fig. 3A) and roots (Fig. 3B) respectively, whereas 346.45 and 440.58 mg g⁻¹ DW were noted under co-inoculation in the same plant parts under the same conditions of P supply (Ca₃HPO₄). The comparison between the plants inoculated with the Sinorhizobium strain alone and grown under insoluble or soluble P showed significant differences (p < 0.001) between their shoot and root P contents.
Relative water content (RWC)
Our results (Fig. 4) indicated that the bacterial treatments maintained the same level of RWC whatever the P form added to the growing medium (p > 0.05). However, in comparison to the uninoculated and P-stressed plants, all inoculants significantly (p < 0.001) improved this parameter. Hence, for the plants inoculated with the rhizobial strain alone or combined with the Pseudomonas strain, the increases noted were 26.16% and 23.91% respectively.
Effect on EL and MDA contents
Electrolyte leakage (EL) was found to be increased (p < 0.001) in uninoculated and P-stressed alfalfa plants (Table 1). However, inoculation with the rhizobial strain alone or combined with the Pseudomonas strain significantly (p < 0.001) reduced the EL in P-stressed alfalfa plants. There was no significant difference (p > 0.05) in the EL recorded in plants inoculated with the rhizobial strain alone and supplied with the two P forms. However, the EL was most reduced, to 11.58%, in the presence of both inoculants in the rooting medium.
The MDA contents accumulated more (p < 0.01) in uninoculated and P-stressed plants compared to the other treatments (Table 1). Nevertheless, the inoculation of alfalfa plants with the Sinorhizobium strain alone significantly reduced this accumulation under the same P conditions (35.48 mmol g⁻¹ FW). Moreover, the presence of both strains in the rooting medium remarkably decreased the MDA accumulation to 31.88 mmol g⁻¹ FW, with no significant difference (p > 0.05) in comparison to alfalfa plants inoculated with the rhizobial strain and supplied with the soluble P form.
Effect on photosynthetic-related parameters

Effect on Chl a, Chl b, total Chl and the Chl a/b ratio

Inoculation with the rhizobial strain significantly (p < 0.001) increased the Chl a, Chl b and total Chl contents in alfalfa plants supplied with insoluble P, in comparison to uninoculated plants whatever the supplied form of P (Fig. 5). However, the simultaneous inoculation with both bacterial inoculants further improved (p < 0.05) the Chl a and total chlorophyll contents in stressed alfalfa plants.
Concerning the Chl a/b ratio (Fig. 6), the highest values were noted in uninoculated and stressed plants (2.45). The lowest values were recorded in alfalfa plants inoculated with the rhizobial strain and supplied with the soluble P form (1.72). However, the Chl a/b ratio reached 2.14 and 2.00 when the inoculation was done with rhizobia alone or combined with the Pseudomonas strain respectively, under limited available P (see Fig. 7).
Effect on the maximum quantum yield of PS II (Fv/Fm)
The results indicated that the Sinorhizobium strain alone or its combination with the Pseudomonas strain markedly increased (p < 0.001) the Fv/Fm ratio under low P availability, with no significant differences between them (Fig. 5). Indeed, in the presence of the rhizobial inoculum only, the Fv/Fm reached values of 0.804 and 0.790 in P-stressed and unstressed plants respectively. However, this parameter reached 0.802 when the plants were co-inoculated with both inocula at the same time.
Effect on stomatal conductance (gs)
The results obtained for gs (Fig. 8) showed that both inoculation with rhizobia only and co-inoculation with the two strains at the same time significantly (p < 0.001) increased this parameter under P deficiency, with a significant difference between them (p < 0.05). The lowest value of gs was recorded in the absence of bacterial treatment under insoluble P conditions. However, the presence of the rhizobial inoculum raised gs to 40.18 mmol H₂O m⁻² s⁻¹. This enhancement was more obvious when the inoculum consisted of both rhizobacterial strains (45.84 mmol H₂O m⁻² s⁻¹).

(Figure legend: Values are means of three replicates of five plants each. Different and identical lowercase letters above the histograms indicate significant (p < 0.05) and non-significant (p > 0.05) differences, respectively, between the means according to Tukey's test. Bars represent the standard errors of the means.)
Discussion
Plant-growth promotion has been associated with the Pseudomonas genus since the beginning of this research topic. In the present study, we focused on the synergistic action of Pseudomonas alkylphenolica PF9 and Sinorhizobium meliloti Rm41 on a Moroccan alfalfa population grown in dependence on symbiotic nitrogen fixation (SNF) under limited available P. We noted that inoculation with the rhizobial strain alone or combined with the Pseudomonas strain generated positive effects on the growth and physiology of alfalfa plants fertilized with the insoluble form of P compared to uninoculated plants. However, the comparison between the two bacterial treatments showed overall that the improvement was more pronounced when the alfalfa plants were simultaneously co-inoculated with both inoculants. In fact, our results indicated that the co-inoculation of P-stressed plants significantly improved the fresh and dry biomasses, plant heights and leaf area, with no significant differences in comparison to alfalfa plants inoculated with the Sinorhizobium strain alone and supplied with the soluble P form. Likewise, the improvement in plant growth was strongly correlated with the P content of shoots and roots (Fig. 9), suggesting a synergistic role of the two strains in improving phosphate nutrition and therefore plant growth. Sulieman and Hago (2009) found that the growth of legumes was positively correlated with the P concentration in the soil solution, and low-P availability has a depressive effect on plant nodulation and growth as well as on the leaf area (Chaudhary et al., 2008; Tang et al., 2001). In line with our findings, results observed in some leguminous species like Phaseolus vulgaris L. showed that co-inoculation with PGPR and rhizobia had a synergistic effect on plant growth parameters in comparison to single inoculation with rhizobial strains (Korir et al., 2017). Therefore, Pseudomonas polymyxa and Bacillus megaterium strains can be used together with the tested rhizobia strains to improve common bean growth in low-P soils (Korir et al., 2017). In the same sense, Charana and Yoon (2013) reported that the strain KL28 of Pseudomonas alkylphenolica promoted the growth of Brassica campestris L. under metallic stress. P nutrition is important for metabolic activities in plants. The reduced uptake of P due to lower P availability may influence various physiological and biochemical processes such as water uptake and cell membrane stability. On this point, our results showed a significantly lower level of RWC in stressed and uninoculated alfalfa plants. In fact, the soil P level is associated with the water status of plants (Shubhra et al., 2004). However, the presence of the tested bacterial inocula in the rooting medium significantly increased the leaf RWC whatever the form of P supplied. PGPR can, directly or indirectly, improve plant growth by a range of mechanisms such as the fixation of molecular nitrogen and its conversion to ammonia transmitted to the plant, the production of siderophores that make iron available in the plant rhizosphere, the solubilization of minerals including P, and the synthesis of phytohormones like gibberellins, cytokinins and auxins (Belimov et al., 2015). In fact, exogenous indole-3-acetic acid (IAA) raised the RWC in the leguminous Glycine max L. (Gadallah, 2000). In lettuce plants, PGPR inoculations significantly increased the leaf RWC (Sahin et al., 2015). Mayak et al. (2004) documented that PGPR could ameliorate the rooting and growth of plants by enhancing water use efficiency. P is an essential constituent of the phospholipids composing the cell membranes of plants. Any P deficiency could induce severe damage to cell membrane integrity and tissue rigidity. In our study, the disturbing effect of P deficiency on cell membrane stability was reflected by the increase in MDA contents associated with high EL percentages. In fact, we noted strong negative correlations between the shoot fresh and dry weights and the EL and MDA accumulations (Fig. 9). In the leguminous species Phaseolus vulgaris L., P deficiency induced a significant increase in the EL and MDA contents of nodules and leaves. However, similar to our results, PGPR inoculations decreased the EL and MDA of lettuce plants grown under lower irrigation levels (Sahin et al., 2015). Determination of the MDA concentration and, hence, of the extent of membrane lipid peroxidation is often used as a tool to evaluate the severity of the oxidative stress induced by abiotic stress. In rice seedlings, the levels of MDA and EL were significantly increased under nutrient-deficient conditions, including P and N deficiency, as compared to sufficient nutrients. However, their contents were found to be decreased by inoculation with Paenibacillus lentimorbus B-30488, Bacillus amyloliquefaciens SN13 and their consortium (Bisht et al., 2020). Under nutrient deficiency, the PGPR Bacillus amyloliquefaciens SN13 acts on carbohydrate metabolism, which in turn provokes downstream signaling allowing plants to weather nutritional stress, including P stress (Bisht et al., 2020). Bacterial inoculation leads to deregulation of glycolytic pathway genes and hence of the sugar level (Bisht et al., 2020). This might be a strategy of PGPR to induce tolerance in nutrient-starved plants.
Measurements of photosynthetic parameters such as the chlorophyll content, chlorophyll fluorescence and gs are often used in evaluating plant adaptation to different environmental stresses, including P stress. In our study, the observed reductions in these photosynthetic-related parameters clearly reflected the decrease in the growth of uninoculated and P-stressed plants. Strong positive correlations were noted between the plant biomasses and the measured photosynthetic-related parameters (Fig. 9). The effect of P deficiency on chlorophyll contents is documented in many leguminous species. In soybean, P supplementation improved the total Chl and Chl a contents compared to unfertilized plants (Rotaru, 2015). We noted that the co-inoculation with both inoculants significantly enhanced the total Chl and Chl a contents in alfalfa plants supplied with the insoluble P form. Also, no significant differences were noted between the two P forms when the plants were co-inoculated with the Sinorhizobium and Pseudomonas strains simultaneously. In line with our results, the treatment of soybean plants with Pseudomonas fluorescens and Azotobacter chroococcum simultaneously revealed an overall increase in the Chl a and total Chl content under P starvation (Rotaru, 2015). However, the two bacterial treatments did not significantly change the Chl b contents under sufficient and deficient P supply. The same observation was noted in soybean (Rotaru, 2015). The lack of effects on the Chl a/b ratio indicates that Chl a is more sensitive to P deficiency than Chl b. In rice, Alam et al. (2001) attributed the positive effects on root length, leaf area and chlorophyll content to Xanthobacter sp. inoculation. The growth-promoting effect of the Serratia plymuthica BMA1 strain was accompanied by a substantial increase in chlorophyll contents in the leguminous Vicia faba L. under low P availability (Borgi et al., 2020). A decrease of total chlorophyll under P-deficiency stress suggests a reduced capacity for light harvesting. Meanwhile, since the formation of reactive oxygen species is mostly driven by excess energy absorption in the photosynthetic apparatus, it might be avoided by a reduction of the absorbing pigments (Herbinger et al., 2002). A decrease in chlorophyll content could be related to an increase in the activity of the chlorophyll-degrading enzyme chlorophyllase and the destruction of the chloroplast structure (Singh and Dubey, 1995). The reduction in the photochemical efficiency of PSII (Fv/Fm) in uninoculated and P-stressed plants is possibly related to the reduction of chlorophyll contents noted under the same conditions. Indeed, we observed a very highly significant positive correlation between the Fv/Fm and the total Chl contents (Fig. 9). Changes in chlorophyll fluorescence emissions, occurring mainly from PSII, provide information on almost all aspects of photosynthetic activity. This parameter has also usually been used to probe photosynthetic function in higher plants and to assess plant tolerance to environmental stresses (Farissi et al., 2018; Gray et al., 2006; Panda et al., 2008). Shi et al. (2019) noted that the inoculation of Brassica campestris L. plants with Pseudomonas alkylphenolica KL28 improved photosynthetic parameters like Fv/Fm under metallic stress. In barley, Hordeum vulgare L., all of the processes in the photosynthetic machinery, including the PSII quantum yield, were influenced by P deficiency (Carstensen et al., 2018).
The inoculation of Phaseolus vulgaris seedlings with Trichoderma sp. and/or Bacillus sp. improved the photosynthetic efficiency evaluated by the Fv/Fm ratio (Yobo et al., 2009). This finding matches our results. In fact, we noted that the presence of the tested rhizobacteria in the rooting medium improved the quantum yield of PSII whatever the supplied form of P. The improvement in chlorophyll fluorescence and Chl contents by the bacterial treatments suggests more reaction centers and higher light harvesting. In pepper plants, the quinone acceptor (Qa) was highly oxidized by Bacillus inoculation and its excitation energy was utilized in electron transport, leading to higher adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH) production, employed for carbon assimilation in the Calvin cycle and improving plant growth (Samaniego-Gámez et al., 2016).

(Figure legend: Values are means of three replicates. Different and identical lowercase letters indicate significant (p < 0.05) and non-significant (p > 0.05) differences, respectively, between the means according to Tukey's test. Bars represent the standard errors of the means.)
In addition, our results also showed that the inoculation treatments had a highly significant effect on the increase in gs under P starvation. The increase in gs reflects the opening of the stomata, due to differential variations in turgor, which facilitates the entry of the CO₂ necessary for photosynthesis while at the same time causing water losses through transpiration (Bresson et al., 2013). Higher cell turgor contributes to improved plant performance and to the maintenance of physiological processes such as stomatal opening, photosynthesis and leaf expansion (Serraj and Sinclair, 2002; Subbarao et al., 2000). Our results showed a highly significant positive correlation between gs and the RWC (Fig. 9). We report here that low P availability noticeably decreased the gs of uninoculated alfalfa plants. In rice plants, low P conditions caused reductions in the photosynthetic rate, gs, transpiration rate, and internal CO₂ concentration (Veronica et al., 2017). However, in our study, the bacterial inoculants, particularly the mixed inoculation of Sinorhizobium meliloti Rm41 and Pseudomonas alkylphenolica PF9, improved the gs of P-stressed plants. Indeed, endophytic and rhizospheric microorganisms can promote plant growth by regulating the nutritional and hormonal balance, producing plant growth regulators and solubilizing nutrients (Mahmood et al., 2014). Moreover, IAA affects plant cell division, pigment synthesis and photosynthetic activity by modulating the plant auxin pool (Ahemad, 2014). Furthermore, bacterial respiration leads to CO₂ formation that could be involved in improving photosynthesis. In fact, the CO₂ generated by bacterial respiration in roots can be transported to the stems through the vascular tissues (xylem). It was reported that the carbon involved in photosynthesis in the stem cells of tobacco plants is obtained from the vascular system and not from the stomata (Hibberd and Quick, 2002). The same observation was also reported by Sahin et al. (2015) in lettuce plants inoculated with Bacillus megaterium and B. subtilis strains.
Conclusions
The present study suggests that the co-inoculation of alfalfa plants with Pseudomonas alkylphenolica PF9 and Sinorhizobium meliloti Rm41 could alleviate the deleterious effects of low P conditions in the rooting medium. These rhizobacteria improved the growth of P-stressed plants in terms of plant biomass, leaf area and plant height. Such a beneficial effect was associated with P solubilization and uptake, the maintenance of water nutrition, cell membrane stability and the performance of photosynthetic-related parameters such as the chlorophyll contents, the Fv/Fm ratio and gs. This suggests their applicability as a promising alternative to mitigate the P problem in agricultural soils.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
On a nonstatic Painleve-Gullstrand spacetime
A time dependent geometry outside a spherically symmetric mass is proposed. The source has zero energy density but nonzero radial and tangential pressures. The time variable is interpreted as the duration of measurement performed upon the physical system. For very short time intervals, the effect of the mass source is much reduced, going to zero when $t \rightarrow 0$. All physical quantities are finite when $t \rightarrow 0$ and $r \rightarrow 0$ and also at infinity. The total energy flux measured on a hypersurface of constant $r$ is vanishing.
Introduction
It is a known fact that one of the conceptual problems of quantum theory is the so-called "measurement problem" - the standard Quantum Mechanics (QM) crucially depends on the concept of measurement, even though such a notion is not defined rigorously within the theory [1,2,3]. According to Okon and Sudarsky, the solution to the measurement problem may well lie in Quantum Gravity (QG), which is still lacking. Moreover, they suggest that it may be necessary to solve the measurement problem in order to build a quantum theory of gravity. Gisin [2] and Gisin and Frowis [3] argued that, without solving the measurement problem, quantum theory is not complete, as it does not tell us how one should - in principle - perform measurements. They consider that the time is ripe to pass from the study of quantum non-locality - a very fruitful subject of research - to the quantum measurement problem, another basic problem in the foundations of Quantum Mechanics.
Connections between quantum-foundational issues and QG have also been pointed out by Penrose [4] (see also [5,6,9,10]), who studied the intrinsic spacetime instability when macroscopic bodies are placed in a quantum superposition of different locations, an idea that led him to a link between the quantum collapse of the wave function and gravity. Diosi [7] introduced a nonlocal, gravitational term in the time-dependent Schrodinger equation in order to find the quantum uncertainty in the position of a free pointlike macroscopic object from the minimization of the energy. In addition, Pearle and Squires [11] suggested that the curvature scalar of the spacetime is responsible for the spontaneous quantum collapse.
Motivated by the importance of the measurement process within QM and QG, we investigate in this paper the role played by the duration of measurement on the spacetime structure of the physical system under consideration. We know that (Newtonian) gravity has not been tested experimentally at ranges below 0.1 mm. We pass from short-range distances to short-range time intervals and suggest that the strength of the gravitational field may be modified when the measurement is performed in a very short time interval w.r.t. the gravitational radius of the object.
Throughout the paper we use geometrical units $G = c = \hbar = 1$, unless otherwise specified.
Painleve-Gullstrand geometry with time dependent mass

The Schwarzschild exact solution for the geometry outside a star or a BH is given by

$$ds^2 = -\left(1 - \frac{2m}{r}\right) dt_S^2 + \left(1 - \frac{2m}{r}\right)^{-1} dr^2 + r^2 d\Omega^2. \qquad (2.1)$$

In (2.1), $t_S$ is the Schwarzschild time and $d\Omega^2$ is the metric on the unit 2-sphere. To get rid of the coordinate singularity of the metric at the horizon $r = 2m$, Painleve and Gullstrand (P-G) used the following temporal transformation [12,13,14]

$$dt = dt_S + \frac{\sqrt{2m/r}}{1 - 2m/r}\, dr. \qquad (2.2)$$

Therefore, the line element appears as

$$ds^2 = -\left(1 - \frac{2m}{r}\right) dt^2 + 2\sqrt{\frac{2m}{r}}\, dt\, dr + dr^2 + r^2 d\Omega^2, \qquad (2.3)$$

where $t$ is the free-fall time, that is, the proper time experienced by an observer who free-falls from rest at infinity. We chose the "+" sign in front of the square root in order to deal only with the inward moving free particles (along a geodesic curve with $dr + \sqrt{2m/r}\, dt = 0$, the velocity $dr/dt = -\sqrt{2m/r}$ is negative). The geometry (2.3) is stationary, namely invariant under time translations (however, it is not invariant under time reversal because of the nondiagonal term). In addition, a constant time slice is simply flat space. We also emphasize that (2.3) represents physical space freely falling radially into the BH at the Newtonian escape velocity $\sqrt{2m/r}$. The proper time of an observer at rest ($dr = d\theta = d\phi = 0$) is $d\tau = \sqrt{1 - 2m/r}\, dt$.
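As a quick consistency check, substituting $dt_S = dt - \frac{\sqrt{2m/r}}{1 - 2m/r}\, dr$ from (2.2) into (2.1) reproduces (2.3): with $f \equiv 1 - 2m/r$,

$$-f\, dt_S^2 = -f\, dt^2 + 2\sqrt{\frac{2m}{r}}\, dt\, dr - \frac{2m/r}{f}\, dr^2,$$

and adding the $f^{-1} dr^2$ term of (2.1) leaves

$$f^{-1}\left(1 - \frac{2m}{r}\right) dr^2 = dr^2,$$

so that $ds^2 = -f\, dt^2 + 2\sqrt{2m/r}\, dt\, dr + dr^2 + r^2 d\Omega^2$, as stated.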
As we know, a time dependent source with spherical symmetry will no longer lead to a Ricci-flat geometry, i.e., to a vacuum solution of the Einstein equations. Therefore, Birkhoff's theorem does not apply to this case. A nonstatic Schwarzschild (S) spacetime with a time dependent mass, outside an object with spherical symmetry, was investigated in [15]. It was found there that the source of the geometry (an anisotropic fluid) has zero energy density and radial pressure, nonzero tangential pressures and radial energy flux. We intend in this paper to introduce a variable mass directly in the line element (2.3), in order to explore whether simpler properties may be obtained. Therefore, we write down the geometry (2.3) as

$$ds^2 = -\left(1 - \frac{2m}{r} e^{-k/t}\right) dt^2 + 2\sqrt{\frac{2m}{r} e^{-k/t}}\, dt\, dr + dr^2 + r^2 d\Omega^2, \qquad (2.4)$$

with $t > 0$, $k$ a positive constant and $m$ the particle constant mass. To find $k$, we make use of reasonings from [4,5,6,7]: one looks for a link between the quantum collapse of the wave function and gravity, when macroscopic objects are placed in quantum superposition at different locations. Diosi [7] added a nonlocal gravitational term to the standard QM terms of the Schrodinger equation for a macroscopic object of mass $M$ and radius $R$,

$$i\hbar \frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2M}\nabla^2 \psi(x,t) - GM^2 \int \frac{|\psi(x',t)|^2}{|x - x'|}\, d^3x'\; \psi(x,t), \qquad (2.5)$$

with $x$ and $x'$ the locations of the two branches of the superposition. Diosi showed that, when $\Delta x \equiv |x - x'| \ll R$, the Newtonian potential energy from (2.5) acquires the form

$$U(\Delta x) \approx -U(0) + \frac{1}{2} M \omega^2 (\Delta x)^2, \qquad (2.6)$$

where $\omega^2 = GM/R^3 = 4\pi G\rho/3$ gives the frequency of the Newtonian oscillator (which could be obtained from the geodesic deviation), $\rho$ is the constant density of the particle and $U(0) = GM^2/R$. The standard kinetic term from (2.5) tends to spread the wave function, competing with the Diosi-Penrose spontaneous collapse, which tends to shrink the wave function. When the spreading rate $\hbar/M(\Delta x)^2$ equals the collapse rate $1/\tau \equiv M\omega^2 (\Delta x)^2/\hbar$, an equilibrium is reached and one obtains $1/\tau = \omega$, where $\tau$ represents the decoherence time required to collapse the macroscopic superposition, or the quantum Zenon time [8]. For our case of interest, we propose to consider $\tau$ as the time that light needs to cross the Schwarzschild radius of the object. In this case we have $\tau = 2m$, which means inserting $k = 2m$ in Eq. (2.4). Hence (2.4) becomes

$$ds^2 = -\left(1 - \frac{2m(t)}{r}\right) dt^2 + 2\sqrt{\frac{2m(t)}{r}}\, dt\, dr + dr^2 + r^2 d\Omega^2, \qquad (2.7)$$

with $m(t) = m\, e^{-2m/t}$. To avoid a signature switch of the metric coefficient $g_{tt}$, we impose the condition $f(r,t) \equiv 1 - (2m/r)\, e^{-2m/t} > 0$, namely $r > 2m\, e^{-2m/t}$, with $r_{AH} = 2m\, e^{-2m/t}$ the location of the apparent horizon. That is necessary because otherwise the proper time and $t$ would not have the same sign for an observer located at some constant $r, \theta, \phi$. For constant $r$, $f(r,t)$ is a monotonically decreasing function of $t$; it tends to unity when $t \to 0$ and acquires the standard Schwarzschild value $(1 - 2m/r)$ at infinity (or when $t \gg 2m$). When $f(r,t)$ is considered as a function of $r$, it equals unity for $r \to \infty$. However, the limit $r \to 0$ has to be taken with $t \to 0$, in order to satisfy the condition $r > 2m(t)$. Consequently, $0 < f(r,t) < 1$. We notice also that the apparent horizon is an increasing function of $t$, from $r_{AH} \to 0$ when $t \to 0$ to $r_{AH} \to 2m$ at infinity, having an inflexion point at $t = m$.
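Both the equilibrium condition and the inflexion point can be checked in a few lines. Equating the two rates gives $(\Delta x)^2 = \hbar/M\omega$, so that

$$\frac{1}{\tau} = \frac{M\omega^2 (\Delta x)^2}{\hbar} = \frac{M\omega^2}{\hbar} \cdot \frac{\hbar}{M\omega} = \omega,$$

while differentiating $r_{AH}(t) = 2m\, e^{-2m/t}$ twice yields

$$\frac{d^2 r_{AH}}{dt^2} = 2m\, e^{-2m/t}\, \frac{4m}{t^3} \left(\frac{m}{t} - 1\right),$$

which changes sign at $t = m$, confirming the location of the inflexion point.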
We take the timelike variable $t$ as the duration of measurement, so that from (2.7) it results that gravity is weakened when a measurement is performed in a time interval of the order of or less than $2m$. This could be checked by measuring the trajectory of a high energy cosmic ray particle (a proton, for example), freely falling in the gravitational field of the Earth. If the duration of measurement is of the order of $2m$ or less ($m$ being the Earth mass), the trajectory will be less curved. As we already remarked in [15], we may now give a reasonable explanation of the fact that the zero point energy does not gravitate: the very fast quantum vacuum fluctuations reduce the strength of gravity so much that its influence is canceled.
Properties of the gravitational fluid
In order for the metric (2.7) (which is not Ricci flat) to be a solution of Einstein's equation $G_{ab} = 8\pi T_{ab}$, with $a, b = t, r, \theta, \phi$, we need a source stress tensor on its r.h.s. The source is an anisotropic fluid, with the nonzero components given in (3.1). Let us take now a congruence of observers with the velocity vector field

$$u^b = \left(1,\; -\sqrt{\frac{2m(t)}{r}},\; 0,\; 0\right). \qquad (3.2)$$

The above congruence of observers is geodesic, namely the acceleration $a^b = u^a \nabla_a u^b = 0$, and the inward radial velocity $u^r = -\sqrt{(2m/r)\, e^{-2m/t}}$ is the Newtonian escape velocity. The spacetime (2.7) being nonstatic, the scalar expansion $\Theta$ is nonzero; it vanishes when $t \to 0$ and $r \to 0$ (which goes to zero simultaneously with $t$). We also obtain a nonzero shear tensor with the nonzero components $\sigma^r_{\;r} = -2\sigma^\theta_{\;\theta} = -2\sigma^\phi_{\;\phi} = -(2/3)\Theta$ and $\sigma^r_{\;t} = 2m\, e^{-2m/t}/r^2$. Consider the general form of an anisotropic fluid with energy flux,

$$T_{ab} = (\rho + p_t) u_a u_b + p_t\, g_{ab} + (p_r - p_t) n_a n_b + q_a u_b + q_b u_a, \qquad (3.4)$$

with $\rho(r,t) = T_{ab} u^a u^b$ the energy density of the fluid, $p_r(r,t)$ the radial pressure, $p_t$ the pressure in the transversal directions $\theta$ and $\phi$; $n^a$ is a spacelike vector orthogonal to $u^a$, with $n^a u_a = 0$, $n^a n_a = 1$, and $q^a$ is the heat flux, with $q^a u_a = 0$, given by the expression $q_a = -T_{ab} u^b - \rho u_a$, obtained from (3.4). Using now (3.2) and (3.4), one finds the physical quantities of the fluid. In spite of the fact that $T^r_{\;t} \neq 0$, we get from (3.4) a vanishing energy flux $q^a = 0$. That is perhaps related to the geodesic character of the congruence (3.2). From (3.4) one further finds that $\rho = 0$ and $p_r = T^r_{\;r} = 4p_t$. Having now the expressions of the energy density and pressures, it is an easy task to see that the weak energy condition (WEC) ($\rho \geq 0$, $\rho + p_r \geq 0$, $\rho + p_t \geq 0$), the null energy condition (NEC) ($\rho + p_r \geq 0$, $\rho + p_t \geq 0$) and the strong energy condition (SEC) ($\rho + p_r \geq 0$, $\rho + p_t \geq 0$, $\rho + p_r + 2p_t \geq 0$) are obeyed. However, the dominant energy condition (DEC) ($\rho > |p_r|$, $\rho > |p_t|$) is not satisfied because $\rho$ is vanishing.
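For reference, the individual fluid quantities follow from the decomposition (3.4) by contracting with the frame vectors, using $u^a u_a = -1$, $n^a n_a = 1$, $n^a u_a = 0$ and $q^a u_a = 0$:

$$\rho = T_{ab} u^a u^b, \qquad p_r = T_{ab} n^a n^b, \qquad q_a = -T_{ab} u^b - \rho\, u_a.$$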
One observes that all components of $T^a_{\;b}$ vanish when $t \to \infty$ (or when $t \gg 2m$) because the metric (2.7) becomes Ricci-flat. We must remind that the limit $r \to 0$ goes simultaneously with $t \to 0$, so that $T^a_{\;b}$ tends to zero in this limit, too. That takes place because of the exponential factor $e^{-2m/t}$, which is present in all expressions, including the scalar curvature $R^a_{\;a} = -12\pi p_r$. Moreover, in the latter case ($t \to 0$), the geometry (2.7) becomes Minkowskian and the effective mass $m(t)$ goes to zero.
Having now the components of the stress tensor and the basic physical quantities associated with it, our next task is to compute the total energy flow measured by an observer sitting at $r = const.$ [16],

$$E = \int T^r_{\;t}\, \sqrt{-\gamma}\; dt\, d\theta\, d\phi, \qquad (3.6)$$

where $\gamma$ is the determinant of the 3-metric of constant $r$, i.e. $\gamma = -(1 - 2m(t)/r)\, r^4 \sin^2\theta$. With $T^r_{\;t}$ from (3.1), Eq. (3.6) gives us $E = 0$. The fact that $E = 0$ is not surprising if we remember that the P-G observers are in free fall (the acceleration vector $a^b = 0$) and the energy flux $q^a$ is vanishing.
Conclusions
The role of the measurement process in gravitational physics is investigated in this paper. In the time dependent spacetime we have proposed, the time variable plays the role of the duration of a measurement performed upon some physical system. Very short time intervals w.r.t. the gravitational radius lead to much weaker values of the gravitational field where our system is located. That may direct us to an explanation of the well-known fact that the vacuum energy does not gravitate: very fast quantum fluctuations get rid of the influence of gravity. We also notice that some results of this paper are much simpler than the similar quantities obtained in [15], and all the parameters derived are finite throughout.
The use of simulator studies to assess the impact of ITS services on road users behaviour
The subject of this publication is the use of top-of-the-range driving simulators to study the impact of ITS services on road safety. The aim of the article is to describe the assumptions of the simulation studies carried out as part of the RID 4D project and to present the method of building the research scenarios. The article discusses the catalogue of ITS services of the greatest importance to road traffic safety and traffic efficiency developed under the project. Services from the catalogue were then specified and tested on the driving simulator of the Motor Transport Institute. The tests included sections of a dual-lane expressway. As a result of the work, four scenarios were created containing various dangerous events and variable message boards informing drivers about the danger and/or limiting the permitted speed. During the simulation, a set of several dozen parameters related to vehicle motion was recorded, in particular the distance to the vehicle ahead, the time to collision with another vehicle or object on the road, speed, and the intensity of braking and acceleration. The tests were performed for good and bad weather conditions on a 60-person study group, divided into age groups of 18-24, 25-50 and above 50 years. The research showed a difference in the way static signs and speed limits on variable message signs affect drivers. For variable message signs, there was a greater decrease in speed than in the case of static signs.
Introduction
The aim of the article is to describe the assumptions and methods of performing the simulator tests carried out as part of the RID 4D project. The subject of the publication is the use of a top-of-the-range driving simulator to evaluate the impact of ITS services on road users' behaviour. As part of the project work, a number of analyses of the state of road traffic safety in Poland were carried out. The data show that every year, as a result of traffic incidents, several thousand people die in Poland and a dozen or so thousand are injured [1]. The road safety analyses conducted as part of the project show that about 95% of traffic incidents are caused by improper behaviour of road users [1,2]. The most common causes of accidents include, among others, failure to adapt speed to traffic conditions, failure to give right of way, incorrect overtaking, incorrect behaviour towards pedestrians, and failure to maintain a safe distance between vehicles [1].
As research carried out both in Poland and abroad proves, Intelligent Transport Systems influence the level of road safety and traffic flow, as well as efficient traffic management and the optimization of road network use [3,4]. Selected data on this subject can be found in the article "The use of simulator studies to assess the impact of ITS services on road users behaviour". Determining the level of road safety in Poland and the impact of ITS services on it cannot be based solely on statistical data. Therefore, it was necessary to carry out additional analyses and research in order to determine the manner of implementing individual ITS services and the manner of their deployment. Under the project, driver tests were carried out to determine the impact of ITS services on drivers' behaviour.
Assumptions of the experiment
As part of the work, a series of tests was planned using the high-end AS 1200-6 driving simulator. For this purpose, it was necessary to develop the assumptions of the experiment and the research scenarios. The tests included sections of a two-lane expressway with a lane width of 3.5 m, a hard shoulder 2.5 m wide and a design speed of 100 km/h [5]. The prepared road sections also included alternative routes. The specification was based on Annex 1 to the Resolution of the Minister of Infrastructure of July 3, 2003 and the guidelines of the General Directorate for National Roads and Motorways. Boards and variable message signs, which provide information to drivers, were placed on the prepared section.
Compiling the research scenarios
The area for the research scenarios was created in the dedicated PreScan software. It is a simulation platform consisting of a pre-processor, based on a graphical user interface, enabling the design and modification of research scenarios, and an execution environment for running them. The main user interface for creating and testing algorithms includes MATLAB and Simulink. It is an open software platform with flexible interfaces for connecting to the dynamics model and third-party HIL equipment/simulators. The graphical user interface (GUI) allows users to build scenarios and model sensors, while the Matlab/Simulink interface allows a control system to be added. Work in the program proceeds in 4 steps: scenario construction, sensor modelling, control system design and launching the experiment.
A special pre-processor (GUI) allows users to build and modify traffic scenarios in a short time. Scenarios are built from database elements using the "drag and drop" method. The database consists of:
- road sections: straight sections, curves, viaducts;
- infrastructure elements: trees, buildings, road signs;
- vehicles and road users: cars, trucks, bicycles and pedestrians;
- weather conditions: rain, snow, fog;
- light sources: sun, headlights, street lights.
When modelling road sections, the user can specify parameters for each element. For road sections these include:
- number of lanes,
- adding/removing lines,
- line width,
- line type (continuous, broken short, broken long),
- for broken lines: spacing between lines,
- for curves: curve length (curve radius, angle of curve),
- hard shoulder width.

According to the assumptions, the ITS services of the highest importance for efficiency and road safety were subject to simulation tests. A detailed list of the selected services is presented in the article [7]. Due to the restrictions imposed by the driving simulator, the following services were tested: vehicle speed management, conveying traffic information to drivers, managing adverse events, managing environmental information and communicating environmental information to drivers. For each of the selected services, different variants of conveying the information to the driver by means of signs and boards with variable content were developed. For vehicle speed management, four variants were selected in order to compare different types of speed limit information: a static speed limit sign, a VMS board with a speed limit, a board with the limit repeated on a sign, and a board stating the reason for the limitation.
Road traffic disruption announcements (a lane blocked by road works) and a possible detour via an alternative route were used to provide traffic information. The information on possible alternative routes was displayed in three variants: text information ("recommended detour") with an arrow, and two variants of boards with travel times: one with a difference of several percent, and one with a twofold difference. The experiment also included the simulation of a traffic accident, where the information was passed to the driver using VMS boards and/or VMS signs. Four variants of events were listed:
- traffic incident without displaying information about the event,
- information about the incident displayed on the board, without information about the need to change lane and without a speed limit sign,
- information about the incident and the need to change lane,
- information about the incident, the need to change lane and a speed limit sign.

Various weather conditions were also modelled, such as a slippery surface, reduced visibility and strong wind. The driver's behaviour was monitored both when the driver received no information from the system and when prior information was provided by means of VMS boards.
All boards and variable message signs were developed based on the relevant requirements and regulations, taking into account, among others, sign sizes, letter sizes, spacing between letters and margins. As a result of the work, 4 scenarios were created containing various dangerous events and variable message boards informing the drivers about the danger and/or limiting the allowed speed.
Simulator tests
The research group consisted of 60 people. The condition for participating in the tests was to have a valid category B driving license and to drive a minimum of 2000 km per year. The participants were divided into three age groups: 18-24, 25-49 and over 50 years of age. There were 20 people in each age group.
Each of the participants had 4 scenarios to go through. Two drives were conducted in good and two in bad weather conditions. Before commencing the drive, each examined person was familiarized with the regulations of participation in the study and informed about possible side effects such as dizziness or nausea. It was also necessary to sign the consent for participation in the study and a statement on getting acquainted with the rules of using the driving simulator. Additionally, it was necessary to complete the SSQ (Simulator Sickness Questionnaire) in order to check the well-being of the examined person. This was repeated after each subsequent drive in order to detect possible symptoms of simulator sickness. The evaluation covered, among others, such factors as:
- general discomfort,
- tiredness,
- drowsiness,
- headache,
- eye strain,
- difficulty with concentration,
- nausea,
- confusion,
- blurred vision,
- dizziness,
- general weakness,
- the need to take a breath,
- stomach discomfort,
- vomiting.
If any of the factors had been considered severe, it would have been necessary to terminate the test. None of the participants had symptoms of simulator sickness; therefore, all participants completed the study.
Having completed the questionnaire, the examined person was invited into the cab of the AS 1200-6 driving simulator. The next step was to conduct the familiarization and commence the test. First, an adaptation scenario was run in order to accustom participants to the driving simulator and, if necessary, exclude those with simulator sickness. After each drive there was a break of a few minutes, followed by the completion of the next questionnaire.
The time spent per participant ranged from 60 to 90 minutes. As a result, 60 trips were obtained: 30 in good and 30 in bad weather conditions. During the simulation, a set of several dozen parameters related to vehicle motion was recorded, in particular the distance to the vehicle ahead, the time to collision with another vehicle or object on the road, speed, and the intensity of braking and acceleration.
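To illustrate how such motion parameters can be post-processed, the sketch below derives the headway distance and time to collision (TTC) from recorded positions and speeds. The column names and sample values are hypothetical, as the paper does not describe the simulator's logging format.

```python
# Illustrative post-processing of simulator logs (hypothetical column names;
# the AS 1200-6 export format is not described in the paper).
import pandas as pd

def add_safety_metrics(log: pd.DataFrame) -> pd.DataFrame:
    """Compute headway distance and time-to-collision (TTC) per sample."""
    out = log.copy()
    out["headway_m"] = out["lead_pos_m"] - out["ego_pos_m"]
    closing_speed = out["ego_speed_mps"] - out["lead_speed_mps"]
    # TTC is defined only while the ego vehicle is closing on the leader;
    # otherwise it is left as NaN.
    out["ttc_s"] = out["headway_m"].where(closing_speed > 0) / closing_speed
    return out

log = pd.DataFrame({
    "ego_pos_m":      [0.0, 25.0, 50.0],
    "ego_speed_mps":  [27.8, 27.8, 27.8],   # ~100 km/h
    "lead_pos_m":     [80.0, 100.0, 120.0],
    "lead_speed_mps": [22.2, 22.2, 22.2],   # ~80 km/h
})
print(add_safety_metrics(log)[["headway_m", "ttc_s"]])
```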
Results
The main purpose of the analyses was to determine the impact of speed-limiting signs on the behaviour of road users. At each of the points marked on the diagram, the vehicle speed and the speed change were analysed. The speed at a distance of 200 m before the sign was taken as the baseline value. Speed diagrams for individual persons show significant differences in the way drivers react to speed limits. Most drivers slowed down after passing the sign/board and maintained a reduced speed for about 1200 m. Some drivers kept the reduced speed over a much longer distance or did not return to the original speed at all. As expected, some drivers did not respond to speed-limiting signs regardless of their type or message. The way individual drivers reacted is presented in the diagrams. Another aspect of the study was to determine the impact of signs informing about the possibility of using an alternative route. Three ways of providing information to drivers were developed: a board with the text "Recommended detour" (Fig. 5), a board with travel times differing by several percent (Fig. 6) and a board with travel times differing twofold (Fig. 7). Table 1 presents the results obtained at the end of the study. Analysing the results, one can notice a visible relationship between the content of the messages transmitted and the reaction of drivers. The drivers reacted most strongly to the signs informing about a much longer travel time via the main route and the recommended detour. There were also differences in the drivers' behaviour depending on age. The studies showed that people in the youngest age group were more likely to choose alternative routes.
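The baseline-referenced speed change used in this analysis can be computed along the following lines; the data frame below is a made-up example (positions in metres, speeds in km/h), not the study's data.

```python
# Sketch of the per-driver speed-change metric: the speed recorded 200 m
# upstream of the sign serves as the baseline (illustrative data only).
import pandas as pd

def speed_change_vs_baseline(run: pd.DataFrame, sign_pos_m: float) -> pd.Series:
    """Percent speed change at each sample w.r.t. the speed 200 m before the sign."""
    baseline_idx = (run["pos_m"] - (sign_pos_m - 200.0)).abs().idxmin()
    v0 = run.loc[baseline_idx, "speed_kmh"]
    return (run["speed_kmh"] - v0) / v0 * 100.0

run = pd.DataFrame({
    "pos_m":     [1800, 2000, 2200, 2600, 3200],
    "speed_kmh": [118,  117,  96,   92,   114],
})
run["dv_pct"] = speed_change_vs_baseline(run, sign_pos_m=2000.0)
print(run)
```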
Conclusion
The use of a driving simulator for researching the impact of ITS services on road safety makes it possible to conduct tests in a repeatable and safe manner. The studies showed that drivers reacted more strongly to speed limits displayed on variable message signs than to speed limits on static road signs.
The studies showed, among others, a difference in how static signs and speed limits on variable message signs affected the drivers. For variable message signs, there was a greater decrease in speed than in the case of static signs. The age of the drivers and the atmospheric conditions also had a significant influence on their behaviour. Drivers in the youngest age group showed the highest inclination to risky driving, even in adverse weather conditions and on slippery surfaces. Most of the respondents slowed down after seeing the speed limit sign, but only some of them travelled at the allowed speed. Based on their behaviour, the participants can be divided into those who reduced their speed after seeing the sign, maintained it and then accelerated back to a speed close to the original one; those who slowed down and did not return to the original speed; and those who ignored the speed limit.
The article presents only examples of the research results. Detailed results are discussed in the article entitled "Influence of the selected ITS services on the manner of driving a vehicle - results of the simulation tests using the top-of-the-range driving simulators". The results were used to calibrate the Vissim/Visum/Saturn simulation software in order to carry out further analyses. The data will also be processed in detail in the SPSS software.
Natural Flavonoids as Potential Angiotensin-Converting Enzyme 2 Inhibitors for Anti-SARS-CoV-2
Over the years, coronaviruses (CoV) have posed a severe public health threat, causing an increase in mortality and morbidity rates throughout the world. The recent outbreak of a novel coronavirus, named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused the current Coronavirus Disease 2019 (COVID-19) pandemic that affected more than 215 countries with over 23 million cases and 800,000 deaths as of today. The situation is critical, especially with the absence of specific medicines or vaccines; hence, efforts toward the development of anti-COVID-19 medicines are being intensively undertaken. One of the potential therapeutic targets of anti-COVID-19 drugs is the angiotensin-converting enzyme 2 (ACE2). ACE2 was identified as a key functional receptor for CoV associated with COVID-19. ACE2, which is located on the surface of the host cells, binds effectively to the spike protein of CoV, thus enabling the virus to infect the epithelial cells of the host. Previous studies showed that certain flavonoids exhibit angiotensin-converting enzyme inhibition activity, which plays a crucial role in the regulation of arterial blood pressure. Thus, it is being postulated that these flavonoids might also interact with ACE2. This postulation might be of interest because these compounds also show antiviral activity in vitro. This article summarizes the natural flavonoids with potential efficacy against COVID-19 through ACE2 receptor inhibition.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is the causative agent of Coronavirus Disease 2019 or COVID-19, triggered a pandemic affecting over 215 countries and territories around the world [1,2]. As of August 2020, there are more than 23 million cases worldwide with over 800,000 deaths, indicating that the virus is highly infectious and that its pathogenicity is a global health threat [3][4][5]. The number of positive cases and deaths due to COVID-19 continues to rise.

SARS-CoV-2, which causes severe respiratory syndrome in humans, is a positive-strand RNA virus. The virus replication cycle begins with the entry of the virus into the human body by attaching to the host cellular receptor angiotensin-converting enzyme 2 (ACE2), assisted by the spike protein (S), followed by the release of the viral genome material into the host cell [9]. The viral genome contains two overlapping polyproteins (polyprotein 1a and polyprotein 1ab), which are cleaved by Mpro (the main protease) into 16 non-structural proteins, which are then translated into structural (STR) proteins and non-structural proteins (non-STRs). This is followed by virus assembly, which releases virions from the infected cells through exocytosis [10,11].
The angiotensin-converting enzyme (ACE)-related carboxypeptidase, ACE2, is a type I integral membrane protein of 805 amino acids containing one HEXXH-E zinc-binding consensus sequence [12]. ACE2 is involved in regulating cardiac function and is also a functional receptor for the coronavirus that causes severe acute respiratory syndrome (SARS). ACE2 receptors are the main target of SARS-CoV-2 because they play an important role in the transmission of the virus to alveolar cells [13]. Inhibition or regulation of ACE2 receptors may potentially be effective in the treatment of COVID-19. COVID-19 is currently being treated with anti-infective drugs such as antimalarial drugs (chloroquine, hydroxychloroquine) [14][15][16][17], antiviral drugs (remdesivir [18], saquinavir [19], favipiravir [20], lopinavir [21], ribavirin [22], and oseltamivir), and certain immunosuppressive drugs such as tocilizumab [23]. Tocilizumab was approved by the Food and Drug Administration (FDA) to manage cytokine release syndrome (CRS) in patients receiving chimeric antigen receptor T-cell therapy, and was shown to reduce immune-related toxicity [24,25]. Tocilizumab can block the activity of proinflammatory interleukin-6 (IL-6), which is involved in the pathogenesis of the pneumonia that causes death in COVID-19 patients [26]. However, to date, we are still waiting for the results of ongoing phase 3 clinical trials that might support and prove the effectiveness of these drugs in treating patients with SARS-CoV-2 infection. For example, Wang et al. (2020) conducted a randomized, placebo-controlled study on the use of intravenous remdesivir in 10 hospitals in Hubei, China [27]. The study found that intravenous remdesivir did not significantly improve the time to clinical improvement, the mortality, or the time to virus clearance in patients with serious SARS-CoV-2 infection compared to placebo. Likewise, hydroxychloroquine or chloroquine with or without azithromycin did not enhance clinical status at 15 days [28]. In an effort to find new therapies for COVID-19, natural product sources are also being explored and re-evaluated for their activity against this deadly virus [24].
Natural compounds with high bioavailability and low cytotoxicity are the most efficient candidates [29]. Flavonoids are structurally heterogeneous, polyphenolic compounds present in high concentrations in many plants. They are natural products that play an important role in plant physiology and have been intensively investigated for bioactivities beneficial to health, such as anti-inflammatory [30], anticancer [31], antioxidant [32], anti-lipogenic [33], metal-chelating [34], antimicrobial [35], and antiviral [36] properties. More than 2000 plant-derived flavonoids have been identified. Bioactive compounds from flavonoid derivatives are valuable for the development of drugs and as additional therapies for these infections. Many flavonoids have been investigated for their antiviral potential, and many of them showed significant antiviral responses in both in vitro and in vivo studies. Naringenin and hesperetin (flavanones), hesperidin (a flavanone glycoside), baicalin and neohesperidin (flavone glycosides), nobiletin (an O-methylated flavone), scutellarin (a flavone), nicotianamine (a nonproteinogenic amino acid), and emodin (a methylated 1,3,8-trihydroxyanthraquinone) are amongst the natural ACE2 inhibitors [37][38][39]. This review focuses on the prospect of utilizing flavonoids as a potential treatment for SARS-CoV-2 infection.
Methods
This review was based on the literature obtained from PubMed and Google Scholar using 15 keywords. The results of the initial search strategy were first filtered by title and abstract. The full text of the relevant articles was then examined against the inclusion and exclusion criteria. When an article reported duplicate information from the same source, the information of the two reports was combined to obtain the complete data but was counted only as one case. The reference lists of the selected papers were used to further identify relevant citations. For the purpose of this review, the search focused on seven keywords, namely, "coronavirus", "angiotensin-converting enzyme", "angiotensin converting enzyme II of coronavirus", "angiotensin-converting enzyme II inhibitor CoV", "natural compounds ACE and ACEII inhibitors enzyme II of coronavirus", "flavonoid as antiviral, antioxidant, anti-inflammation", and "flavonoid as ACE2 inhibitor".
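A comparable keyword search can also be scripted against PubMed. The sketch below merely illustrates the approach with three of the keywords listed above; the authors' searches are not described as automated, and the e-mail address is a placeholder required by NCBI.

```python
# Hedged sketch of a programmatic PubMed keyword search using Biopython's
# Entrez module (illustrative only; not the review's actual methodology).
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder address required by NCBI

keywords = [
    "coronavirus",
    "angiotensin-converting enzyme",
    "flavonoid as ACE2 inhibitor",
]

for term in keywords:
    handle = Entrez.esearch(db="pubmed", term=term, retmax=20)
    record = Entrez.read(handle)
    handle.close()
    print(f"{term}: {record['Count']} hits, first IDs: {record['IdList'][:3]}")
```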
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)
SARS-CoV-2 initially appeared as part of a major outbreak of respiratory disease centered in Hubei Province, China, and was identified as a novel type of coronavirus. Coronaviruses belong to the large family Coronaviridae under the order Nidovirales; they are enveloped, positive-stranded RNA viruses with a crown-like appearance [40,41]. The viral genome is 27 to 32 kb in size, the largest among all RNA viruses [6,42]. There are six other types of human coronaviruses, namely, alphacoronavirus 229E, alphacoronavirus NL63, betacoronavirus OC43, betacoronavirus HKU1, severe acute respiratory syndrome coronavirus (SARS-CoV-1), and Middle East respiratory syndrome coronavirus (MERS-CoV). SARS-CoV-2 belongs to the betacoronavirus class [37,43]. Phylogenetic analysis shows that SARS-CoV-2 belongs to the same subgenus as the CoV that caused the outbreak of severe acute respiratory syndrome (SARS) in 2002-2004 [44]. In addition, the SARS-CoV-2 sequence is similar to CoVs isolated from bats [45]. The SARS-CoV-2 genome has an 89% similarity in homology to the bat coronavirus ZXC21 and an 82% similarity to SARS-CoV-1 [6,46]. Thus, it has been hypothesized that SARS-CoV-2 originated from bats, mutated and became infectious to humans [39,47].
The genome of SARS-CoV-2 contains 14 open reading frames (ORFs) encoding 27 proteins (Figure 2). The 5′ terminus encodes 15 nonstructural proteins collectively involved in virus replication and possibly in immune evasion, while the 3′ terminus encodes structural and accessory proteins [42,48]. The presence of a spike protein (S protein), which resembles a nail or an arrow on the surface of the virus, makes its structure distinctive. This S protein attaches to the angiotensin-converting enzyme (ACE) 2 receptors on the surface of host respiratory cells [49,50].
Figure 2. (A) The structure of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (https://www.economist.com/briefing/2020/03/12/understanding-sars-cov-2-and-the-drugs-that-might-lessen-its-power) and (B) its genome [51].
Angiotensin-Converting Enzyme 2 (ACE2)
SARS-CoV-2 uses the angiotensin-converting enzyme (ACE) 2 receptor for entry into target cells. ACE2 is largely expressed by epithelial cells of the lung, kidney, heart, blood vessels, and intestine. ACE and ACE2 belong to the ACE family of dipeptidyl carboxydipeptidases, and they have distinct functions. ACE converts angiotensin I into angiotensin II, which in turn binds and activates angiotensin II receptor type 1 (AT1R). This activation leads to vasoconstrictive, pro-inflammatory, and pro-oxidative effects [52]. ACE2 exists in two forms: a soluble form that represents the circulating ACE2, and a structural transmembrane protein with an extracellular domain that serves as a receptor for the spike protein of SARS-CoV-2. The latter is a polypeptide composed of 805 amino acids [53]. This molecule is an integral type 1 membrane protein that cleaves a single hydrophobic residue from the carboxy (C-) terminus of any bound substrate [54]. ACE2 hydrolyzes the C-terminal leucine from Ang I to produce the nonapeptide angiotensin 1-9, which can be converted into the heptapeptide angiotensin 1-7 by ACE and other peptidases. Furthermore, ACE2 can directly convert angiotensin II to angiotensin 1-7 [55]. Angiotensin 1-7 acts on the Mas receptor to relax blood vessels and exhibits anti-proliferative and anti-oxidative activities. The ACE2/angiotensin 1-7/Mas axis formed by the participation of angiotensin 1-7 can counteract parts of the ACE-angiotensin II-AT1R axis, with functions in maintaining the balance of the body [55,56].
The binding of SARS-CoV to the ACE2 receptor downregulates the cellular expression of the receptor, and the binding process induces clathrin-dependent internalization [57]. ACE2 not only facilitates the invasion and rapid replication of SARS-CoV, but its loss from the cell membrane also impairs the degradation of angiotensin II, which results in acute damage of lung tissues [58]. Because the lungs are the main target organs of COVID-19 infection, early onset of respiratory symptoms is common among patients [59]. The study conducted by Imai et al. [60] showed that blocking the renin-angiotensin signaling pathway could relieve severe acute lung injury caused by SARS-CoV-2.
SARS-CoV-2 attaches to human ACE2 through the binding of spike (S) proteins, as shown in Figure 3 [61]. The S protein of SARS-CoV-2 contains S1 and S2 subunits. The S1 subunit (Figure 4) contains a receptor-binding domain (RBD) that is responsible for binding to the host ACE2, and the S2 subunit facilitates membrane fusion with the host cells [62,63]. The RBD contains a loop-binding pocket (residues 424-494 or 438-506), which is called the receptor-binding motif (RBM) [62,64]. The RBM engages the ACE2 receptor so that SARS-CoV can enter the host cells. After SARS-CoV binds to ACE2, the S2 subunit facilitates fusion with the endosomal membrane through a conformational change, thereby releasing the RNA genome into the target cells. After transcription and translation, the structural and nonstructural proteins of CoV and the RNA genome are assembled into virions, which are transported through vesicles and released from the target cells.
The Active Site of hACE2 as the Therapeutic Target of COVID-19
The amino-acid sequence of SARS-CoV-2 has a 76.5% similarity to that of SARS-CoV, and their S proteins are quite homologous [66,67]. As shown in Figure 4, the RBD of the S protein of SARS-CoV-2 is located within amino-acid residues 318-510 (left side), containing the RBM (green ribbon), which is on the surface, right in front of ACE2. Arg439 of the RBM in SARS-CoV-2 and Glu329 of ACE2 interact and form a bridge to stabilize the complex. Based on the interaction of ACE2 with the S protein in SARS-CoV-2, antibodies or small molecules can be used to target and inhibit SARS-CoV-2 replication through inhibition of the ACE2 receptor. The S protein, thus, loses its partners to enter the host cell, as illustrated on the right side of Figure 4. ACE2 can be a target for inhibiting the entry of SARS-CoV-2 into the host cell because the binding affinity of the S protein of SARS-CoV-2 to the ACE2 receptor is 10-20-fold stronger than that of the S protein of SARS-CoV [68][69][70].
Han et al. identified the residues of ACE2 that directly interact with the RBD of the SARS-CoV-2 S protein. The residues involved are Gln24, Thr27, Lys31, His34, Glu37, Asp38, Tyr41, Gln42, Leu45, Leu79, Met82, Tyr83, Asp90, Gln325, Glu329, Asn330, Lys353, and Gly354. They also determined that Glu22, Glu23, Lys26, Asp30, Glu35, Glu56, and Glu57 are important in the interaction. Notably, Lys26 and Asp30 play a critical role in the interaction with the RBD of the SARS-CoV S protein; thus, Han et al. concluded that these residues have the potential to be developed as targets for entry inhibitors [71]. Moreover, Gln325/Glu329 and Asp38/Gln42 of ACE2 are key binding sites that form hydrogen bonds with Arg426 and Tyr436 of the S protein of SARS-CoV-2 [72]. These critical residues are also present in the S protein of SARS-CoV-2 with a similar sequence [73]. Therefore, these residues can be used as primary target active sites for ACE2 inhibitors. We hypothesize that, if inhibitors selectively bind to this active site (shown in yellow in Figure 2), then they might be able to inhibit the S protein of SARS-CoV-2 from interacting with hACE2. Guy et al. [74] hypothesized that the residues of the ACE2 binding pocket differ slightly from those of the active site of ACE2 (isolated from pig kidney tissue). However, the types of amino acids involved are nearly the same.
Synthetic Compounds as ACE2 Inhibitors
Research on ACE2 inhibitors or blockers is still lacking, and very few such drugs are currently available in the clinic. However, ACE1 inhibitors and angiotensin receptor blockers (ARBs), such as losartan, are widely marketed. Several countries use ACE1 inhibitors/ARBs, such as losartan and telmisartan, to reduce the aggressiveness and mortality of COVID-19. Kuster et al. proposed that ACEI/ARB therapy should be continued or initiated in patients with a history of heart failure, hypertension, or myocardial infarction [75]. Zhang et al. [76] found that, among patients with hypertension who were hospitalized with COVID-19, inpatient treatment with ACEI/ARB was associated with a lower risk of all-cause death compared to non-ACEI/ARB users. ARBs are widely used to treat hypertension, and this drug class is exceptionally well tolerated in the clinic; its side-effect profile has been described as "placebo-like". ARBs are well suited to antagonizing the proinflammatory effects of angiotensin II in patients with a recent positive COVID-19 test; thus, this class may have the best pharmacological properties for this indication. From a comparative analysis of available ARBs, telmisartan has traits that make it the best compound [77].
Angiotensin receptor blockers (ARBs) have effects similar to those of angiotensin-converting enzyme (ACE) inhibitors, but ACE inhibitors act by preventing the formation of angiotensin II rather than blocking the binding of angiotensin II to its receptors on vascular smooth muscle. ARBs are used to control high blood pressure, treat heart failure, and prevent kidney failure in diabetics. Therefore, ARBs (such as losartan, valsartan, and telmisartan) could be a new therapeutic approach to block the binding, and hence the attachment, of the SARS-CoV-2 RBD to cells that express ACE2, thereby inhibiting infection of the host cells [78].
Natural Compounds Inhibiting ACE1 and ACE2 Receptors
The discovery of novel drugs from natural products helps to improve our understanding of diseases [93,94]. Active lead compounds from natural products can be further modified to enhance their biological activity and be developed into drug candidates [95,96]. Recent progress on natural products has yielded compounds developed to treat viral infections [97]. Utomo et al. [98] reported the biological activity of natural products in inhibiting SARS-CoV-2 using in silico methods. Islam et al. comprehensively reviewed studies on natural products with inhibitory activity against CoV.
Natural products such as flavonoids, xanthones, proanthocyanidins, secoiridoids, and peptides were reported to contain anti-ACE activity; however, further research is needed to confirm the findings [24]. Table 1 summarizes the natural compounds that were reported to have inhibitory effects on ACE1 and ACE2 receptors. From this table, we can conclude that flavonoids are the most researched with regard to ACE inhibition activity.
Flavonoids as ACE2 Inhibitors
Flavonoids are an important class of natural products with several subgroups, including chalcones, flavones, flavonols, and isoflavones [109]. Flavonoids contain a flavan core with a 15-carbon skeleton. There are two benzene rings (A and C rings) connected by a heterocyclic pyran ring (B ring). The three cycles or heterocycles in the flavonoid backbone are generally called rings A, B, and C, as shown in Figure 5. The B ring comprises a C2-C3 double bond and carbonyl groups that play an important role in the biological activities. The hydroxyl groups (3′ and 5′ positions) of the C ring, as well as the hydroxyl groups of the A ring (7 and 5 positions), are known to be responsible for the radical scavenging activity of flavonoids [103]. The most important functional groups of flavonoids that might be involved in ACE2 inhibition are illustrated in Figure 6.

As can be seen in Figure 6, the resorcinol molecule has two hydroxyl groups in its aromatic ring structure, located at meta-positions with respect to each other. The high reactivity of the resorcinol structure is primarily associated with the location of these two hydroxyl groups in the benzene ring [110]. The resorcinol moiety of ring A might play a role in ACE2 inhibition, as this group might disrupt the hydrogen bonds between Glu329/Gln325 of ACE2 and Arg426 of the S protein of SARS-CoV-2, which form a salt bridge to stabilize their interaction [72,73]. A hydrophobic interaction occurs at ring C with amino-acid residues such as Gly354, Asp355, and Phe356 [111].
As summarized in Table 1, flavonoids have potential as ACE1 and ACE2 inhibitors. Studies on flavonoids for anti-SARS-CoV activity have been widely published. For example, myricetin inhibits viral replication by affecting the ATPase activity of SARS-CoV [112]. Other flavonoids reported to have anti-SARS-CoV activity include kaempferol [113], luteolin [114], quercetin, daidzein, EGCG, GCG, and herbacetin [115,116]. Quercetin functions as a noncompetitive inhibitor of 3-chymotrypsin-like protease (3CLpro) and papain-like protease (PLpro) [117]. Luteolin inhibits furin proteins, which are among the enzymes that cleave the S protein of SARS-CoV, as reported for the Middle East respiratory syndrome (MERS) [114]. Kaempferol functions as a noncompetitive inhibitor of 3CLpro and PLpro [117]. Hesperidin inhibits the interaction between the RBD of the S protein of SARS-CoV-2 and the human ACE2 receptor; thus, it was also predicted to potentially inhibit the entry of SARS-CoV-2 [118].
Mode of Action of Flavonoids
Polyphenolic compounds, including flavonoids, terpenoids, hydrolysable tannins, xanthones, procyanidins, and caffeoylquinic acid derivatives, have been found to be effective natural ACE inhibitors [119,120]. Table 2 summarizes the studies on plant extracts rich in flavonoids used as ACE2 inhibitors. A number of epidemiological studies have suggested an inverse relationship between flavonoid consumption and the development of various diseases. Flavonoids with typical structures can interact with enzyme systems involved in important pathways, showing effective poly-pharmacological behavior. Thus, it is not surprising that the relationship between their chemical structures and activities has been widely studied [124]. The presence of a C2=C3 double bond in conjugation with the C4 carbonyl group, as well as particular hydroxylation patterns, especially the catechol moiety of ring B, methoxyl groups, and fewer saccharide linkages, confers stronger antioxidant properties. The mechanism might involve planarity, which facilitates electron delocalization across the molecule and affects the dissociation constant of the phenolic hydroxyl groups, such that the whole molecule can bind to the target molecule like a substrate fitting an enzyme [125].
Guerrero et al. [103] comprehensively analyzed different flavonoids to determine the functional groups responsible for inhibiting ACE. Quantitative structure-activity relationship (QSAR) modeling showed that the lack of the B ring in the flavonoid skeleton reduced the inhibitory activity against ACE by up to 91%. The absence of the carbonyl group in the B ring also reduced the inhibitory activity by 74%. The 3-OH, 3′-OH, and 5′-OH groups are important, since the loss of these groups reduced the inhibitory activity by 44%, 57%, and 78%, respectively [103], as shown in Figure 6. These groups also play an important role in inhibiting the neuraminidase receptors of the influenza A viruses (H1N1 and H3N2) [126]. Other studies also reported that losing the 3-OH group significantly reduced flavonoid antioxidant [127] and anti-CoV activities [115]. We also observed that the 3-OH and catechol moieties of the C ring of catechin formed strong hydrogen bonds with H1N1 neuraminidase [126]. Hošek and Šmejkal [128] reported that these functional groups play an important role in anti-inflammatory activity against receptor targets of inflammation. Moreover, hesperidin was also reported as an ACE2 inhibitor, since it can interact with the interface between the RBD of the S protein of SARS-CoV-2 and hACE2. The dihydroflavone moiety of hesperidin was predicted to lie parallel to the β6 sheet of the RBD of the S protein, while the sugar moiety fits into a shallow hole oriented away from ACE2 [118].
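The reported percentage losses translate directly into a crude ranking of relative group importance. A minimal sketch using only the figures quoted above from Guerrero et al. [103]; the "residual activity" column is a simple 1 - loss reading of those numbers, not a modeled quantity:

```python
# Ranking of flavonoid functional groups by the loss of ACE inhibitory
# activity reported in the QSAR analysis of Guerrero et al. [103] when
# each group is removed from the scaffold.
activity_loss = {
    "B ring (entire ring)":    0.91,
    "5'-OH group":             0.78,
    "carbonyl group (B ring)": 0.74,
    "3'-OH group":             0.57,
    "3-OH group":              0.44,
}

for group, loss in sorted(activity_loss.items(), key=lambda kv: -kv[1]):
    # Residual activity is a crude 1 - loss reading of the reported numbers.
    print(f"{group:24s} loss {loss:4.0%} -> residual activity ~{1 - loss:.0%}")
```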
The most critical mechanism of flavonoids as antioxidant, anti-inflammatory, anticarcinogenic, and antiviral compounds is the protection of the body against reactive oxygen species (ROS) [129,130]. ROS interfere with cellular function through lipid peroxidation, resulting in damaged cell membranes. The increase in ROS production during tissue injury is due to the depletion of endogenous scavenger compounds [131,132]. Flavonoids can act as scavenging compounds [133]; thus, flavonoids can prevent inflammation or repair cell damage by scavenging ROS. The interaction of flavonoids with hydrophilic amino-acid residues of protein targets with strong affinity has been suggested as a mechanism by which flavonoids repair cell damage [130,134].
Based on these findings, we believe that there is a strong relationship among the ACE2 inhibition, anti-inflammation, and antioxidant activities of flavonoids. However, the correlation among these three activities needs to be clarified through comprehensive in vitro and in vivo evaluation.
Perspectives and Overall Conclusion
The renin-angiotensin system (RAS) controls the homeostatic function of the vascular system. The two important enzymes involved in the RAS system, ACE1 and ACE2, function in accommodating rapid but coordinated feedback to any specific situation in the body that may disturb the system balance [135]. Their function is indispensable; hence, the choice to modulate these receptors for other health conditions, such as against the current COVID-19 infection, would have to be done in a careful manner.
Based on the information put forth in this review, it can be concluded that ACE2 could be a key receptor to combat COVID-19 infection. The inhibition of hACE2 may prevent the S protein of SARS-CoV-2 from fusing and entering host cells. However, as both RAS enzymes influence each other, inhibition of ACE2 alone would lead to an increase in Ang II blood levels and a parallel reduction in the blood concentration of the vasodilator angiotensin 1-7. In such a case, any disturbance in circulation homeostasis would not be corrected rapidly due to the absence of angiotensin 1-7.
This would be a health risk, especially to susceptible patients such as the elderly and patients with underlying CVS-related medical conditions. Ironically, these are the group of people that would have a higher risk of contracting severe COVID-19 infection.
The discovery of ACE2 as a part of the RAS is relatively new; however, some evidence shows that ACE2 could be more important than ACE1 in the modulation of the whole system. Although the morphology of the ACE1 and ACE2 receptors shares huge similarities, ACE inhibitors (ACEis) cannot inhibit ACE2 receptors. Hence, the currently available ACEis are not useful as ACE2 inhibitors [135]. This means that the structure of ACEis cannot be used as a building block in the design of ACE2 inhibitors. A new and fresh approach should be taken, and a comprehensive study of the receptor itself is needed.
Thus, this paper proposes to shift the focus in the design of ACE2 inhibitors toward flavonoids, an abundant group of compounds found in many plants. The functional groups of flavonoids, such as the pyran moiety in the B ring and the hydroxyl groups of the A ring (7- and 8-positions) and C ring (3-, 3′-, 4′-, and 5′-positions), may play an important role in their ACE2 inhibition. Preliminary research showed that Glu22, Glu23, Lys26, Asp30, Glu35, Glu56, and Glu57 of hACE2 could be used as primary target sites in the design of an hACE2 inhibitor.
Flavonoids are synthesized by plants in response to microbial attack; hence, their antibacterial and antiviral activities are expected. The wide variety of activities reported in the literature depends on the structures and side chains available in each flavonoid [127]. Despite the available data on the activity of certain flavonoids against the ACE1 and ACE2 enzymes, as presented in Table 1, the studies stopped at the in silico or in vitro stage, and no further detailed studies are available. This could be due to some limitations surrounding research on natural products, such as difficulties in obtaining a sufficient amount of substance through plant extraction or difficulties in the chemical synthesis of the flavonoids. However, the application of flavonoid-based scaffolds in the design of new ACE2 inhibitors could be a good approach. Based on the history of drug development, a combination of natural products and chemical synthesis is able to produce potent and effective medications, such as the anticancer drugs vincristine and vinblastine. This could be an approach to bring natural-product-based compounds forward for human use.

Abbreviations: EC50: the half-maximal effective concentration; ADME: absorption, distribution, metabolism, and excretion; HIA: human intestinal absorption; PPB: plasma protein binding; BBB: blood-brain barrier; CNS: central nervous system; QSAR: quantitative structure-activity relationship; ROS: reactive oxygen species; RAS: renin-angiotensin system.
|
2020-09-03T09:04:18.101Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "d7be7c41942c8c262497723dffffd962fb55e40a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/25/17/3980/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cc76f8fc21a2b664a18cd77f3f32db8fd995ced",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
8618987
|
pes2o/s2orc
|
v3-fos-license
|
Unusual states of vortex matter in mixtures of Bose-Einstein Condensates on rotating optical lattices
A striking property of a single-component superfluid under rotation is that a broken symmetry in the order parameter results in a broken translational symmetry, a vortex lattice. If translational symmetry is restored, the phase of the order parameter disorders and the broken symmetry in the order parameter is restored. We show that for Bose-condensate mixtures on optical lattices (which may possess a negative dissipationless intercomponent drag), a new situation arises. A phase-disordered nonsuperfluid component can break translational symmetry in response to rotation due to interaction with a superfluid component. This state is a modulated vortex liquid which breaks translational symmetry in the direction transverse to the rotation vector.
An important property of a superfluid is its specific rotational response: it comes into rotation by means of the formation of a vortex lattice. Under the influence of other factors, such as temperature, multiplicity of superfluid components, inhomogeneities, etc., different "aggregate" states of vortex matter may form, such as vortex liquids, glasses, etc. [1]. The variety of states is even richer in multicomponent systems [2]. The transitions between the various "aggregate" states of vortex matter are related to various ordering processes of particles in condensed matter systems. For example, the process of thermal vortex-lattice melting can be mapped onto an insulator-to-superfluid transition of bosons. In this mapping, a vortex line is viewed as a world line of a boson, with the z-axis mapped onto a "time" axis, and the vortex liquid, which is also entangled, represents the delocalized/superfluid state of the dual bosons [3]. The central result of the present work is that we find evidence in large-scale Monte Carlo (MC) computations that in a two-component Bose-Einstein condensate (BEC), the vortex lines support a state possessing properties of a vortex liquid simultaneously with a property of a vortex lattice, i.e. breakdown of translational symmetry.
Recent progress in creating and observing various mixtures of multi-component BECs has produced much interest in these systems. When the BEC components are not spatially separated, the generic type of interaction between them is the current-current interaction (Andreev-Bashkin effect) [4], describing non-dissipative drag between the two superfluid components. Such a system, in units where ℏ = 1, is described by the free energy density [4,5,6]

\[
f = \sum_{i=1,2} \frac{n_i}{2m_i}\,(\nabla\theta_i - m_i\boldsymbol{\Theta})^2
  + \frac{n_d}{m_1 m_2}\,(\nabla\theta_1 - m_1\boldsymbol{\Theta})\cdot(\nabla\theta_2 - m_2\boldsymbol{\Theta}), \qquad (1)
\]

where m_1, m_2, θ_1 and θ_2 are the masses and the phases of the condensates, while n_1, n_2 control the phase stiffnesses of the two components. Further, the drag coefficient n_d controls the density of one component dragged by the superfluid velocity of the other component. Since we are interested here in the physics of the rotating system, we include the field Θ, which accounts for rotation with angular velocity Ω = ∇ × Θ = 2πf ẑ, where m_i f is the number of rotation-induced vortices per unit area of component i. We will use f = 1/64 throughout. In the following, we denote vortices with 2πl_i windings in θ_i by a pair of integers (Δθ_1 = 2πl_1, Δθ_2 = 2πl_2) = (l_1, l_2). The last term in (1) is the current-current interaction [4], which may arise for different reasons, such as intercomponent van der Waals interaction [4] or the underlying optical lattice [5]. It was first considered in the context of the physics of 3He-4He mixtures and coexisting neutronic and protonic condensates in neutron stars. Various aspects of the rotational response of this system have so far been studied only for positive values of the drag, in the context of 3He-4He mixtures [4] and BEC mixtures [7]. However, it has recently been shown that in optical lattices there arises an intriguing possibility to produce a BEC mixture with a negative inter-species drag n_d [5].
In this paper, we address the physics of a rotating system with a negative intercomponent drag and find it to be very rich. This manifests itself in situations where the usual notions of disordered versus ordered vortex states do not directly apply. Let us first briefly recapitulate the phase diagram of this system in the absence of rotation. Its main feature is that for sufficiently large drag |n_d| > n_c, the easiest topological defects to excite thermally are (1,1) vortex loops. Proliferation of these composite defects leads to a state with order only in the phase difference, a so-called super-counter-fluid [5,6,8]. In order to estimate the phase stiffness which is left in the system after the (1,1) vortex loops proliferate, one has to extract from Eq. (1) the term which depends only on the gradients of the phase difference, whose stiffness is thus unaffected by the proliferation of (1,1) loops. The corresponding separation of variables in the presence of rotation is given by

\[
f = \frac{ab - c^2}{2(a+b+2c)}\,(\mathbf{u}_1 - \mathbf{u}_2)^2
  + \frac{1}{2(a+b+2c)}\,\bigl[(a+c)\,\mathbf{u}_1 + (b+c)\,\mathbf{u}_2\bigr]^2, \qquad (2)
\]

where u_i = ∇θ_i − m_iΘ, a = n_1/m_1, b = n_2/m_2, and c = n_d/(m_1 m_2). After proliferation of (1,1) vortices, it is the first term in (2) which accounts for the only phase stiffness remaining in the system, and we can renormalize the coefficient of the second term to zero and discard it. The complexity of the situation arising under rotation is that, along with the (1,1) vortex-loop excitations, there are rotation-induced vortex lines. Vortex loops and lines affect each other's orderings and proliferation. Thus, we may ask (i) what are the ordering patterns of rotation-induced vortex lines in the model (1), (ii) how do rotation-induced vortex lines contribute to the renormalization of the stiffness and thus to symmetry breakdown patterns, and (iii) can the ordering of the rotation-induced vortices signal the presence of a negative drag effect.
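As a check, expanding Eq. (2) with the abbreviations introduced above reproduces Eq. (1) exactly:

```latex
% Consistency check: expanding the separated form of Eq. (2),
% with a = n1/m1, b = n2/m2, c = nd/(m1 m2), u_i = grad(theta_i) - m_i*Theta,
% recovers the original quadratic form of Eq. (1).
\begin{align*}
f &= \frac{ab - c^2}{2(a+b+2c)}(\mathbf{u}_1-\mathbf{u}_2)^2
   + \frac{\bigl[(a+c)\mathbf{u}_1+(b+c)\mathbf{u}_2\bigr]^2}{2(a+b+2c)} \\
  &= \frac{(ab-c^2)+(a+c)^2}{2(a+b+2c)}\,\mathbf{u}_1^2
   + \frac{(ab-c^2)+(b+c)^2}{2(a+b+2c)}\,\mathbf{u}_2^2
   + \frac{(a+c)(b+c)-(ab-c^2)}{a+b+2c}\,\mathbf{u}_1\cdot\mathbf{u}_2 \\
  &= \frac{a}{2}\,\mathbf{u}_1^2 + \frac{b}{2}\,\mathbf{u}_2^2
   + c\,\mathbf{u}_1\cdot\mathbf{u}_2
   = \sum_{i=1,2}\frac{n_i}{2m_i}\,\mathbf{u}_i^2
   + \frac{n_d}{m_1 m_2}\,\mathbf{u}_1\cdot\mathbf{u}_2 .
\end{align*}
```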
To address these questions, we have performed large-scale MC computations using a discretization of Eq. (1) under rotation, in the Villain approximation [6]. Throughout, we use a temperature scale such that the temperature T at which two decoupled superfluids with equal masses and phase stiffnesses undergo the transition to normal fluids is T = 3.3. A negative intercomponent drag will tend to increase the temperature at which this transition occurs.
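A minimal Monte Carlo sketch of this setup is given below. It is not the Villain-approximation code used for the results reported here: it uses plain quadratic link energies with angles wrapped to (-π, π], a small 8³ lattice, and illustrative coupling values, all of which are assumptions for illustration only.

```python
import numpy as np

# Minimal Metropolis sketch for a lattice discretization of Eq. (1).
# Assumptions: wrapped quadratic link energies (not the Villain form),
# a small 8^3 lattice, and illustrative couplings.
rng = np.random.default_rng(0)
L, f = 8, 1.0 / 64.0
m = np.array([1.0, 2.0])            # masses m_1, m_2
n = np.array([1.0, 4.0])            # bare stiffnesses n_1, n_2
nd = -2.0                           # negative intercomponent drag n_d
T = 3.0                             # temperature

theta = rng.uniform(-np.pi, np.pi, size=(2, L, L, L))

def rot_link(mu, y):
    """Rotation field Theta on links, Landau-like gauge: Theta_x = -2*pi*f*y,
    Theta_y = Theta_z = 0, so that curl(Theta) = 2*pi*f in the z-direction."""
    return -2.0 * np.pi * f * y if mu == 0 else 0.0

def wrap(x):
    """Wrap an angle (or array of angles) to (-pi, pi]."""
    return (x + np.pi) % (2.0 * np.pi) - np.pi

def link_energy(u1, u2):
    """Quadratic link energy of Eq. (1) for the two wrapped,
    gauge-invariant phase differences u1, u2."""
    return (n[0] / (2 * m[0])) * u1**2 + (n[1] / (2 * m[1])) * u2**2 \
        + (nd / (m[0] * m[1])) * u1 * u2

def site_energy(th, s):
    """Sum of the six link energies touching site s = (x, y, z)."""
    E = 0.0
    for mu in range(3):
        for sgn in (+1, -1):
            nb = list(s); nb[mu] = (nb[mu] + sgn) % L; nb = tuple(nb)
            y = s[1] if sgn > 0 else nb[1]      # y at the link's origin
            a = rot_link(mu, y)
            u = [wrap(sgn * (th[i][nb] - th[i][s]) - m[i] * a)
                 for i in range(2)]
            E += link_energy(u[0], u[1])
    return E

for sweep in range(200):                        # short illustrative run
    for _ in range(L**3):
        s = tuple(rng.integers(0, L, size=3))
        i = rng.integers(0, 2)                  # pick component 1 or 2
        old, E0 = theta[i][s], site_energy(theta, s)
        theta[i][s] = wrap(old + rng.uniform(-1.0, 1.0))
        dE = site_energy(theta, s) - E0
        if dE > 0 and rng.random() > np.exp(-dE / T):
            theta[i][s] = old                   # Metropolis rejection

print("sum of link energies at site (0,0,0):", site_energy(theta, (0, 0, 0)))
```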
Consider first the simplest limit where m_1 = m_2 = 1 and n_1 = n_2 = n. Eq. (2) then simplifies to

\[
f = \frac{n - n_d}{4}\,\bigl[\nabla(\theta_1-\theta_2)\bigr]^2
  + \frac{n + n_d}{4}\,\bigl[\nabla(\theta_1+\theta_2) - 2\boldsymbol{\Theta}\bigr]^2. \qquad (3)
\]

When n_d = 0, the condensates are decoupled, and a rotating system forms two hexagonal lattices of the types (1,0) and (0,1). For n_d < 0, an attractive interaction between rotation-induced vortices results. Thus, in the ground state the system forms a triangular lattice of (1,1) vortices. Such a configuration minimizes the gradients in the first term of Eq. (3). In the simplest limit m_1 = m_2 = 1 and n_1 = n_2 = n, we have found regimes where the vortex lattice melts while the vortices nonetheless retain their composite character. The system retains order in the phase difference, thereby representing a "rotation-induced" super-counter-fluid state. Introducing a mass and density disparity, m_1 ≠ m_2 and n_1 ≠ n_2, gives an entirely different ordering and symmetry breakdown. This is the main focus of this paper. We study the spatial symmetry breakdown pattern and the effect of thermal fluctuations by computing real-space averages of vortex densities. They are produced by integrating the z-directed vortex segments along the z-axis,

\[
\bar{\nu}_i(\mathbf{r}_\perp) = \frac{1}{L_z}\sum_{z} \nu^z_i(\mathbf{r}_\perp, z),
\]

with a subsequent averaging over typically 10^4 different configurations at a given temperature. Here ν^z_i(r_⊥, z) is the vorticity of component i in the z-direction at r = (x, y, z), and r_⊥ = (x, y) is the position in the xy-plane. Thus, for an elementary vortex on the numerical grid directed along the z-axis, the quantity ν^z_i(r_⊥, z) is nonzero and positive in the lattice plaquette which corresponds to the center of the vortex. It is nonzero and negative for an antivortex, whence ν̄_i(r_⊥) gives the average xy-position density of the rotation-induced vortices.
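The plaquette winding that defines ν^z_i can be computed from a phase configuration in the standard way, as the sum of wrapped gauge-invariant phase differences around an xy-plaquette divided by 2π. The sketch below reuses theta, wrap, rot_link, L, m, and f from the previous sketch and is again an illustrative assumption about the discretization, not the production code.

```python
# Plaquette vorticity nu^z_i and the z-averaged vortex density
# nu_bar_i(r_perp) defined above. Reuses theta, wrap, rot_link, L, m, f
# from the previous sketch.

def plaquette_vorticity(th_i, m_i, x, y, z):
    """Integer winding of one component around the xy-plaquette at (x,y,z).
    d/(2*pi) equals an integer winding k minus the small rotation-flux
    offset m_i*f per plaquette, so round() recovers k here (m_i*f << 1/2);
    the mean of k per plaquette is the vortex density m_i*f."""
    xp, yp = (x + 1) % L, (y + 1) % L
    d = (wrap(th_i[xp, y, z] - th_i[x, y, z] - m_i * rot_link(0, y))
         + wrap(th_i[xp, yp, z] - th_i[xp, y, z])      # +y link, Theta_y = 0
         + wrap(th_i[x, yp, z] - th_i[xp, yp, z] + m_i * rot_link(0, yp))
         + wrap(th_i[x, y, z] - th_i[x, yp, z]))
    return int(round(d / (2.0 * np.pi)))

def z_averaged_density(th_i, m_i):
    """nu_bar_i(r_perp): vorticity summed along z and divided by L for one
    configuration; thermal averaging over configurations comes on top."""
    nu = np.zeros((L, L))
    for x in range(L):
        for y in range(L):
            for z in range(L):
                nu[x, y] += plaquette_vorticity(th_i, m_i, x, y, z)
    return nu / L

print(z_averaged_density(theta[0], m[0]).sum())   # ~ f*m_1*L^2 vortices
```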
Let us consider the case n_2/n_1 = 4, n_d/n_1 = −5.0 and m_2/m_1 = 2. Now, since the vortex density is proportional to Ω m_i [9], there are twice as many vortices in component 2 as in component 1 for these parameters. The system exhibits a striking vortex ordering. Component 1 forms a triangular lattice, while component 2, with twice as many vortices, forms a honeycomb lattice. Every second vortex in the honeycomb lattice is co-centered with a vortex of the other component. This can also be viewed as an ordered equal mixture of (1,1) and (0,1) vortices. We find that the structure with a honeycomb plus hexagonal vortex lattice persists over a significant range of temperatures. Fig. 1a shows a real-space average and a 3d snapshot of a typical configuration of this spatial symmetry breakdown pattern at T = 6.9. This ordering has broken down at T = 9.5, where we observe a partial meltdown manifested in the disappearance of every second vortex-position peak in the real-space averages. However, every other vorticity peak, corresponding to a hexagonal sublattice co-centered with the vortex lattice of component 1, survives. The reduction of the number of vorticity peaks in component 2, the corresponding change in the structure factor, along with a 3d snapshot of a typical vortex configuration, is shown in Fig. 1b. The structure factor of component i is defined as

\[
S_i(\mathbf{k}_\perp) = \frac{1}{N_i^{\,2}} \Bigl\langle \Bigl| \sum_{\mathbf{r}_\perp, z} \nu^z_i(\mathbf{r}_\perp, z)\, e^{i\mathbf{k}_\perp\cdot\mathbf{r}_\perp} \Bigr|^2 \Bigr\rangle,
\]

where N_i = f m_i L^2 L_z is the number of rotation-induced vortex segments of component i and ⟨···⟩ denotes a thermal average.

Let us finally turn to the case where m_2/m_1 = 2, but n_2/n_1 = 16. Now, with n_d/n_1 = −2.5, we find that at low temperatures the system instead forms two square lattices. In the ground state, the square lattice in component 2, which has twice as many vortices as component 1, is rotated 45 degrees with respect to the lattice of component 1, so that every second vortex is co-centered with a vortex of the component-1 lattice; see Fig. 2a. Again, this can be viewed as an equal mixture of (1,1) and (0,1) vortices. Note that, in contrast to the case of two-component vortex matter with only repulsive interactions [7,10], here the vortex lattices are not interlaced. The appearance of square symmetry here is caused by attractive interactions between vortices of different types.
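The structure factor can be accumulated with an FFT over collected vorticity maps. In the sketch below, the normalization by the per-layer vortex number N_v is our own choice (an assumption), made so that the Bragg peaks of a perfect vortex lattice come out of order unity; it reuses names from the earlier sketches.

```python
# Structure factor S_i(k_perp) accumulated over thermalized configurations.
# Normalization by N_v is an illustrative choice, not the paper's definition.

def structure_factor(nu_maps, m_i):
    """Thermal average of |FT of the z-averaged vorticity|^2 on the dual grid.
    nu_maps: list of (L, L) arrays from z_averaged_density()."""
    acc = np.zeros((L, L))
    for nu in nu_maps:
        acc += np.abs(np.fft.fft2(nu)) ** 2
    N_v = f * m_i * L * L                # rotation-induced vortices per layer
    return acc / (len(nu_maps) * N_v**2)

# Usage sketch: sample maps every few sweeps after thermalization, then read
# off the peaks at the reciprocal lattice vectors quoted in the text.
# maps = [z_averaged_density(theta[0], m[0]) for _ in range(100)]
# S = structure_factor(maps, m[0])
```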
As the temperature is increased from T = 11.0 to T = 13.3, cf. Fig. 2, the evolution of the system is particularly remarkable: we observe a discontinuous phase transition, where in the real-space averages the number of vortex-position peaks in component 1 doubles. This should be compared with the previous case of lower disparity of stiffnesses, where, in contrast, the system undergoes a transition to a state in which the number of vortex-position peaks of the other component is reduced by a factor of two. Furthermore, both lattices change symmetry by collapsing onto a hexagonal co-centered configuration, as seen in the right panel of Fig. 2. A 3d snapshot of a part of the system, shown in the lower panel of Fig. 2, reveals that the process is accompanied by a rapid increase of vortex loops in component 1. Furthermore, Fig. 3 shows the central feature of this state. Namely, the helicity modulus (equivalently the superfluid density, computed according to the procedures in Ref. [6]) for component 1 disappears essentially simultaneously with the structure factor for the square lattice in component 1. However, at the same time there emerges a nonzero triangular structure factor in component 1. It extends over a significant range of temperatures where the helicity modulus of component 1 is zero. Therefore, the above observations are not related to a standard vortex-loop proliferation transition in the 3dXY universality class. If this behavior were associated with a standard vortex-loop proliferation transition of the vortices in component 1, the superfluid stiffness (helicity modulus) of this component would vanish simultaneously with the structure factor of the corresponding vortex lattice.
Thus, we have a quite remarkable situation. On the one hand, the zero helicity modulus in the z-direction indicates that the vortices are entangled with each other as in a vortex liquid, a state which has a dual counterpart in superfluid bosons [3]. On the other hand, the vortex system nonetheless features a structure function characteristic of a vortex lattice, namely distinct peaks at reciprocal lattice vectors. Thus, the dual counterpart of the vortex system we found is a bosonic superfluid density wave.
In terms of vortex matter, this corresponds to the following situation. Vortices in component 1 in this state are largely co-centered with the vortex lattice of component 2, but at these temperatures they constantly and freely switch from being co-centered with one vortex to being co-centered with another at different points along the z-axis. In order for the number of vortex-position peaks of component 1 to be double the number generated by the rotation, a large number of (1,0) vortex loops must be induced. The part of each (1,0) loop that is parallel to the (0,1) vortices then has a tendency to be co-centered with a (0,1) vortex, breaking translational symmetry for this segment, while the remaining part of the (1,0) loop, which is not parallel to the (0,1) vortices, has a random position and does not break translational symmetry. Only at further elevated temperatures does a crossover take place where vortex loops proliferate, the vortices lose line tension, and the structure factor vanishes; see Fig. 3.
FIG. 3: The structure function of the square lattice in component 1, S^(1)_sq, at the Bragg peaks at k_⊥ = (0.7854, 0.00) ≈ (π/4, 0). The (red) crosses represent the structure function of the triangular lattice in component 1, S^(1)_tri, at the Bragg peak at k_⊥ = (1.1781, 0.0982) ≈ (3π/8, π/64). Furthermore, the (blue) circles represent the helicity modulus in the z-direction for component 1. It vanishes at the same temperature as the square-lattice ordering ceases. However, at the same temperature there appears a nonzero structure factor for a triangular lattice, S^(1)_tri. This is associated with the vortex state dual to a bosonic superfluid density wave. The parameters are n_2/n_1 = 16, n_d/n_1 = −2.5 and m_2/m_1 = 2. The system size is L × L × L, with L = 64. Periodic boundary conditions are used in all directions; 10^5 sweeps are used for thermalization and 10^6 sweeps for collecting average values, with sampling every 100th sweep.
In conclusion, we have considered two-component superfluids with a negative dissipationless drag. In the model Eq. (1), the underlying optical lattice plays only a microscopic role, by providing a negative intercomponent drag through the mechanisms discussed in [5]. Thus, Eq. (1) describes the system at temperatures T larger than the vortex pinning energy E_p of the optical lattice, so there is no lattice pinning effect [11]. The vortex ordering pattern in this system is strongly affected by the negative dissipationless drag, resulting in the formation of square and honeycomb lattices. Observation of these different ordering symmetries in experiments would be the hallmark of intercomponent drag. At finite temperature there are phase transitions between states with different lattice symmetries. The main conclusion of our paper is that, apart from different patterns of spatial symmetry breakdown, the standard notions of vortex ordering in single-component vortex matter do not directly apply in the case of two-component vortex matter with a negative drag. Namely, we have identified a state of vortex matter which is dual to a bosonic superfluid density wave, where one of the components breaks translational symmetry even though no symmetry is broken in the order parameter space. In this regime, a standard experimental technique of a density snapshot with significant averaging along the z-axis would indicate a vortex lattice even though this is not a superfluid state. Since this state is phase-disordered, it can be discriminated from a superfluid vortex lattice via interference experiments. In an experimental situation, these effects will naturally be affected by density inhomogeneities present in traps. However, studies [12] of the effect of a trap on three-dimensional vortex matter suggest that the above states should be realizable in an extended area near the center of the trap.
|
2008-11-15T12:52:36.000Z
|
2008-10-21T00:00:00.000
|
{
"year": 2008,
"sha1": "8ea97a4d482eee77d09dc1d3c2c975ba65ecac4f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0810.3833",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8ea97a4d482eee77d09dc1d3c2c975ba65ecac4f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
238852618
|
pes2o/s2orc
|
v3-fos-license
|
The P2Y12 Receptor Antagonist Ticagrelor Ameliorates Pulmonary Hypertension
Background: Pulmonary arterial hypertension (PAH) is a disease in which pulmonary artery pressure is abnormally elevated. P2Y12 is an adenosine diphosphate (ADP) receptor, and it acts as the target of thienopyridine antiplatelet drugs by controlling vascular remodeling. Inhibition of the P2Y12 receptor in the process of PAH was explored in this study. Methods: The PAH model was established in Sprague-Dawley rats by a single subcutaneous injection of 60 mg/kg monocrotaline (MCT). Ticagrelor solution (a selective P2Y12R inhibitor) was intraperitoneally injected into rats at a dose of 14 mg/kg from the time of MCT injection to day 28. Results: In the lung tissues of PAH rats, markedly elevated P2Y12R expression was detected. Treatment with ticagrelor greatly decreased the P2Y12R level and efficiently abolished the upregulation of α-SMA, as demonstrated by Western blot and RT-PCR. The wall thickness and occlusion score of the pulmonary arterioles showed that blockade of P2Y12R could relieve lung remodeling caused by PAH. The haemodynamic changes at 4 weeks showed that P2Y12R inhibition affected RV pressure and right heart hypertrophy. Conclusions: P2Y12R might be involved in the pathogenesis of PAH. Blockade of P2Y12R has potential in treating PAH.
Background
Pulmonary arterial hypertension (PAH) is a disease in which pulmonary artery pressure is abnormally elevated, ultimately leading to pulmonary vascular remodeling. The proliferation of pulmonary arterial smooth muscle cells (PASMCs) and the dysfunction of pulmonary arterial endothelial cells are determining factors in PAH pathogenesis. It has been confirmed that these two processes play significant roles in pulmonary vascular resistance, right heart failure, and death [1][2][3]. Besides this, various pathological conditions have been revealed to be risk factors for PAH, such as hypoxia, oxidative stress, and infections.
The P2 receptor family consists of ion-channel P2X receptors and G-protein-coupled P2Y receptors; the P2Y12 receptor is a member of the P2Y subfamily. The P2Y12 receptor was originally found to be expressed in platelets. Recent studies have demonstrated that the P2Y12 receptor is also expressed in vascular smooth muscle cells (VSMCs) 5 . In platelets, endothelial cells, or immune cells, adenosine triphosphate (ATP) and adenosine diphosphate (ADP) released from cells are able to activate P2 receptors, including P2Y12 4 . Activated P2Y12 suppresses adenylyl cyclase activity and thereby participates in regulating platelet activation and thrombosis. Thus, the P2Y12 receptor has been used clinically as a target for thromboembolism treatment.
Antiplatelet drugs are widely used clinically, especially for cardiovascular events with thrombotic involvement. Recent clinical studies suggest that antiplatelet drugs may also be useful as agents for primary cardiovascular prevention 2,6 . VSMCs are one of the main cell types involved in most stages of PAH, and inhibition of the migration and proliferation of VSMCs is critical in the treatment of PAH. Ticagrelor is a relatively novel antiplatelet agent that has been shown to reversibly inhibit P2Y12 receptors on platelets and smooth muscle cells (SMCs). Here, the role of ticagrelor in the pathogenesis of PAH was tested for the first time, as well as the therapeutic role of ticagrelor in the treatment of PAH.
PAH model
SPF-grade Sprague-Dawley rats (all male, weighing 280-330 g) were purchased from the Laboratory Animal Center, Chinese Academy of Science (Beijing, China). The rats were housed in a standard animal room at 21 ± 1˚C and 55 ± 5% humidity, under a 12-h light/dark cycle with free access to water and food. After 7 days of acclimatization, the experiments began. All animal studies were approved by the Shandong University Institutional Animal Care and Use Committee and were conducted according to standard protocols and guidelines. The rats were randomly divided into 4 groups: a Sham group, in which rats received water alone (n = 15); a Sham + T group, in which rats were intraperitoneally injected with 14 mg/kg ticagrelor solution (AstraZeneca) every day (n = 15); a PAH group, in which PAH was induced by left pneumonectomy plus MCT injection 7 (n = 30); and a PAH + T group, in which PAH rats were injected with ticagrelor solution (n = 20). Ticagrelor solution (a selective P2Y12R inhibitor) was prepared from a 360 mg tablet diluted in 25.5 ml saline and injected from the time of MCT injection to day 28 9 .
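For orientation, the dosing arithmetic implied by these numbers can be worked out directly; the body weight below is an illustrative assumption within the stated 280-330 g range.

```python
# Worked dosing arithmetic for the ticagrelor solution described above.
stock_mg_per_ml = 360 / 25.5            # one 360 mg tablet in 25.5 ml saline
dose_mg_per_kg = 14.0                   # daily intraperitoneal dose

body_weight_kg = 0.300                  # hypothetical 300 g rat
dose_mg = dose_mg_per_kg * body_weight_kg
volume_ml = dose_mg / stock_mg_per_ml
print(f"stock: {stock_mg_per_ml:.1f} mg/ml; dose: {dose_mg:.1f} mg; "
      f"injection volume: {volume_ml:.2f} ml (~1 ml/kg)")
# -> stock: 14.1 mg/ml; dose: 4.2 mg; injection volume: 0.30 ml (~1 ml/kg)
```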
The animals were anaesthetized using 2% xylazine (4 mg/kg)/ketamine (100 mg/kg). Respiratory support was given using a small-animal ventilator (HX-300S; Chengdu TME Technology Co., Ltd.) at a rate of 60 breaths/min and a tidal volume of 1.1-1.3 ml/100 g, followed by a left unilateral pneumonectomy 8 . One week after surgery, the rats were subcutaneously injected with 60 mg/kg MCT. All rats were monitored every day until PAH symptoms, such as body-weight loss and tachypnea, developed.
Echocardiography and haemodynamic measurements
Cardiac function was evaluated using a 14 MHz linear transducer on an echocardiographic machine (Visual Sonics, Toronto, Canada). Cardiac output (CO) was calculated according to Simpson's method; the B-mode long axis was used to measure stroke volume and pulmonary artery diameter, and M-mode was used to measure RV wall thickness. The acceleration time of the pulmonary artery was obtained by applying ultrasonic Doppler to the pulmonary artery 10 . Blood pressure was measured by the tail-cuff method using a blood pressure recorder (BP-98A; Softron, Tokyo, Japan) 11 . For pulmonary artery pressure transduction, a 1.4F Millar Mikro-Tip catheter transducer (Millar Instruments Inc., Houston, TX) was advanced through the right jugular vein into the right ventricular outflow tract and directed to the main pulmonary artery, and RV systolic pressure (RVSP) was recorded with a PowerLab monitoring device (Millar Instruments). Haemodynamic values were computed with the LabChart 7.0 physiological data acquisition system (AD Instruments, Sydney, Australia). The rats were anaesthetised during this process.
Tissue processing and histology
Following echocardiography and haemodynamic measurements, the animals were sacrificed by inducing cardiac arrest with an injection of 2 mmol KCl through the catheter. The lungs were isolated; the left lung was weighed, and the right lung was inflated with 0.5% low-melting agarose at a constant pressure of 25 cm H2O and fixed in 10% formalin for 24 h. Subsequently, the heart was excised.
Western blot
The lysis buffer used for protein extraction from tissues was a mixture of RIPA (Beyotime Institute of Biotechnology) and PMSF at a ratio of 100:1 11 . The extracted proteins were quantified using a BCA protein assay reagent kit (Pierce). The proteins were then subjected to 5-12% SDS-PAGE and transferred onto a polyvinylidene difluoride (PVDF) membrane. After blocking in TBST for 1 h at 4˚C, the target proteins were probed by incubation with the following antibodies: 1:2000 for P2Y12R (Abcam, USA) and 1:1500 for α-SMA (Abcam, USA). Primary antibodies were detected using horseradish peroxidase-conjugated antibodies: 1:5000 anti-mouse (ZSJQ-BIO, Beijing, China) and 1:5000 anti-rabbit (ZSJQ-BIO, Beijing, China), at room temperature for 2 h. An enhanced chemiluminescence (ECL) detection kit (Millipore) was used for blot development. The blots were visualized with a FluroChem E Imager (ProteinSimple, Santa Clara, CA, USA) and semi-quantified using ImageJ software (National Institutes of Health).
Immunohistochemistry
The right lung tissues were formalin-fixed, paraffin-embedded and used for HE or regular immunohistochemistry staining 2 . OCT-embedded tissue was placed into a freezing microtome (CM3050; Leica Microsystems GmbH) and cut into 5 μm sections 8 . In each lung section, 30 small PAs (50-100 μm in diameter) were analyzed at ×40 magnification in a blinded manner. The medial wall thickness was expressed as the sum of the medial thickness at two points / external diameter × 100 (%). Intra-acinar (precapillary) PAs (20-30 μm in diameter, 25 vessels each) were assessed for occlusive lesions, defined as Grade 0 when there was no evidence of a neointimal lesion, Grade 1 when there was less than 50% luminal occlusion, and Grade 2 when there was more than 50% luminal occlusion 13 . There was no evidence of neointimal lesion formation in any PAs from normal rats (all PAs were graded 0). Anti-α-SMA (1:200; Abcam) antibodies were used as primary antibodies. After fixing the frozen sections with cold acetone at 25˚C for 5 min and blocking with QuickBlock™ Blocking Buffer for Immunol Staining (cat. no. P0260; Beyotime Institute of Biotechnology) for 10 min at 4˚C, the sections were treated overnight at 4˚C with anti-P2Y12R antibody (1:200; Novus) and anti-α-SMA antibody (1:200; Abcam). Following incubation with primary antibodies, Alexa 546-conjugated donkey anti-rabbit (1:200; Invitrogen) and FITC-conjugated rabbit anti-mouse (1:200; Abcam) secondary antibodies were added, and the sections were incubated for 2 h at room temperature. The sections were counterstained with DAPI (Life Technologies) to identify nuclei. The sections were then washed and placed under a fluorescence microscope for observation and image capture. Nerve density was measured and evaluated using ImageJ software.
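The two morphometric read-outs defined above translate directly into code. A minimal sketch follows; the example measurement values at the bottom are hypothetical, for illustration only.

```python
# Morphometric read-outs as defined above, with hypothetical example values.

def medial_wall_thickness_pct(mt1_um: float, mt2_um: float,
                              external_diameter_um: float) -> float:
    """Medial wall thickness (%) = (sum of medial thickness at two points
    / external diameter) x 100, for small PAs of 50-100 um diameter."""
    return (mt1_um + mt2_um) / external_diameter_um * 100.0

def occlusion_grade(luminal_occlusion_fraction: float) -> int:
    """Neointimal occlusion grading for intra-acinar (20-30 um) PAs:
    Grade 0: no neointimal lesion; Grade 1: <50% luminal occlusion;
    Grade 2: >50% luminal occlusion."""
    if luminal_occlusion_fraction <= 0.0:
        return 0
    return 1 if luminal_occlusion_fraction < 0.5 else 2

print(medial_wall_thickness_pct(18.0, 20.0, 80.0))  # 47.5 (%)
print(occlusion_grade(0.65))                         # 2
```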
Statistics
Data are expressed as the mean ± SEM. Significant differences between two groups were analyzed by an unpaired t-test. For three or more groups, analysis of variance (ANOVA) followed by a Newman-Keuls test was used. Statistical analyses were performed using SPSS 20.0 software (SPSS Inc., Chicago, IL, USA), and a p-value < 0.05 was considered statistically significant.
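A minimal sketch of the two comparisons described above, using SciPy with hypothetical RVSP-like values; note that the Newman-Keuls post hoc test itself is not part of SciPy and would come from a dedicated package.

```python
from scipy import stats

# Two-group comparison (unpaired t-test), with hypothetical RVSP-like values.
sham = [21.3, 22.1, 20.8, 21.9, 22.4]
pah = [52.7, 55.1, 49.8, 56.2, 53.4]
t, p = stats.ttest_ind(sham, pah)
print(f"unpaired t-test: t = {t:.2f}, p = {p:.2e}")

# Three or more groups: one-way ANOVA. The Newman-Keuls post hoc step used
# in the paper is not available in SciPy.
pah_t = [38.1, 40.2, 41.5, 37.6, 39.9]   # hypothetical ticagrelor-treated group
F, p = stats.f_oneway(sham, pah, pah_t)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.2e} (significant if p < 0.05)")
```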
PAH rats show significantly elevated P2Y12R levels in the lungs
Co-staining of P2Y12R with α-SMA showed that P2Y12R was largely distributed in PASMCs of the hypertrophied media of pulmonary vessels in PAH lung tissue (Fig. 1), indicating P2Y12R as a central risk factor for PAH. To further investigate the role of P2Y12R in PAH, a specific P2Y12R inhibitor, ticagrelor, was applied.
Effects of ticagrelor on P2Y12R and α-SMA expression in lung tissues
The effects of ticagrelor on P2Y12R expression were assessed. The expression level of P2Y12R (Fig. 2B, D, F) was upregulated in PAH rats. Treatment with ticagrelor greatly decreased the P2Y12R level and efficiently abolished the upregulation of α-SMA, as demonstrated by Western blot and RT-PCR (Fig. 2A, C, E). There was little difference between the two sham groups, which suggests that the interference of ticagrelor with PAH may be related to P2Y12R-mediated inhibition of α-SMA expression.
P2Y12R inhibition attenuates pulmonary vascular remodeling
PAH leads to pulmonary vascular remodeling 14 ; thus, we further studied the effects of ticagrelor on remodeling. By measuring the wall thickness and occlusion score of the pulmonary arterioles, we found that the wall thickness was remarkably increased from 60.8% ± 4.7% to 81.2% ± 4.4% (p < 0.05) in vessels with diameters ranging from 50 to 100 μm (Fig. 3). Treatment with ticagrelor reduced the wall thickness to 67.6% ± 3.5% (p < 0.05; Fig. 3C). A shift in the distribution of Grade I and II occlusion was also demonstrated (15% and 73% in the PAH vehicle group vs. 25% and 29% in the ticagrelor-administrated PAH group, respectively; Fig. 3D). Therefore, blockade of P2Y12R could relieve lung remodeling caused by PAH.
P2Y12R inhibition ameliorates pulmonary hypertension
As shown in Fig. 4, RVSP was significantly reduced by ticagrelor treatment in rats (39.3 ± 4.5 mmHg vs. 53.9 ± 4.8 mmHg in the P/MCT group, p < 0.05). In addition, ticagrelor treatment prior to or after MCT administration significantly reduced the RV wall thickness, RV area, and pulmonary artery diameter (Table 1). It was also observed that ticagrelor treatment increased the mean acceleration time of the pulmonary artery compared with the PAH group.
Discussion
Since the platelet P2Y12 ADP receptor is considered an important target of thienopyridine-type antiplatelet drugs, we investigated the impact of ticagrelor (a selective P2Y12R inhibitor) on the pathogenesis of PAH. We describe, for the first time, a functionally active P2Y12R in SMC proliferation in pulmonary hypertension. First, we demonstrated that P2Y12R expression was upregulated in SMCs of PH rats. Second, this upregulation positively enhanced vascular proliferation. Therefore, the application of antiplatelet drugs could be important for the treatment of PH.
VSMCs are the major cell type in vessel walls, and they play central roles in most stages of pulmonary hypertension. P2Y12 receptors were initially found to be expressed in platelets and in microglia in the brain. Recent studies have shown that the receptor is also expressed in a variety of cells, such as VSMCs 15. This is consistent with the results presented here, which show significant P2Y12 upregulation on VSMCs in pulmonary hypertension rats. In the current study, MCT-challenged left-pneumonectomized rats showed a marked increase in P2Y12R expression in peri-vascular and peri-alveolar areas of pulmonary tissues and in bronchoalveolar lavage samples. It therefore seems that inhibition of P2Y12 may have additional therapeutic benefits in pulmonary hypertension beyond its anti-thrombotic effect, such as an anti-PAH effect.
Under stimuli such as hypoxia or shear stress, extracellular nucleotides, including the purines ATP, ADP, and adenosine monophosphate (AMP) as well as the pyrimidines uridine-5′-triphosphate (UTP) and uridine diphosphate (UDP), are released within the pulmonary vascular bed, where they mediate vasodilatory, inflammatory, and thrombotic responses and are involved in the pathogenesis of PH 16. It has been reported that ADP induces VSMC contraction via P2Y12 and promotes proliferation, and that ADP elicits pulmonary vasoconstriction through P2Y1 and P2Y12 receptor activation 17. It was shown here that P2Y12R was upregulated and co-stained with α-SMA in PAH rats. Furthermore, the P2Y12R level was positively related to α-SMA expression. The P2Y12 inhibitor ticagrelor reversed pulmonary hypertension and downregulated α-SMA, indicating that activation of P2Y12 is required for proliferation of PASMCs.
The mechanism underlying P2Y12R-mediated pulmonary remodeling may involve cAMP/PKA signaling, which has been shown to be a key link in PASMC proliferation 18 and is the downstream pathway stimulated by ADP 19. In addition, the P2 receptor-mediated Ca2+ signalosome of the human pulmonary endothelium may be implicated in pulmonary arterial hypertension 20. The exact mechanism requires further investigation.
Conclusion and Perspectives
The vessel wall P2Y12 receptor promotes vascular remodeling in the pathological process of PAH. Therefore, the P2Y12 receptor may serve as a therapeutic target, and antiplatelet agents such as ticagrelor may be useful in the treatment of pulmonary hypertension.
It remains to be determined whether the P2Y12 receptor also regulates other cell types involved in PAH pathogenesis, such as pulmonary arterial endothelial cells. In addition, extensive clinical trials are required before ticagrelor can be applied clinically in pulmonary hypertension.
Figure 4. Ticagrelor administration prevented pulmonary hypertension and improved RV function in PAH rats. (A) and (B) RVSP changes of PAH rats treated with ticagrelor. (C) The RV/(LV+S) ratio of PAH rats. (D) The representative visual shape of the RV. **p < 0.05 and *p < 0.05 indicate significant differences versus the sham and PAH groups, respectively. RVSP = right ventricular systolic pressure.
The Floral Repressor GmFLC-like Is Involved in Regulating Flowering Time Mediated by Low Temperature in Soybean
Soybean is an important crop that is grown worldwide. Flowering time is a critical agricultural trait determining successful reproduction and yields. For plants, light and temperature are important environmental factors that regulate flowering time. Soybean is a typical short-day (SD) plant, and many studies have elucidated the fine-scale mechanisms of how soybean responds to photoperiod. Low temperature can delay the flowering time of soybean, but little is known about the detailed mechanism of how temperature affects soybean flowering. In this study, we isolated GmFLC-like from soybean, which belongs to the FLOWERING LOCUS C clade of the MADS-box family and is intensely expressed in soybean leaves. Heterologous expression of GmFLC-like results in a delayed-flowering phenotype in Arabidopsis. Additional experiments revealed that GmFLC-like is involved in long-term low temperature-triggered late flowering by inhibiting FT gene expression. In addition, yeast one-hybrid, dual-luciferase reporter, and electrophoretic mobility shift assays revealed that the GmFLC-like protein could directly repress the expression of FT2a by physically interacting with its promoter region. Taken together, our results reveal that GmFLC-like functions as a floral repressor involved in flowering time regulation under low-temperature treatments of various durations. As the only FLC gene in soybean, GmFLC-like was meaningfully retained in the soybean genome over the course of evolution, and this gene may play an important role in delaying flowering time and providing protective mechanisms against sporadic and extremely low temperatures.
Introduction
Understanding the molecular mechanism driving the change from vegetative to reproductive growth is crucial for maximizing the yield of seed crops in a given environment. As one of the most important traits, flowering is considered the developmental transition from the juvenile to the adult phase, and the flowering process is regulated by various internal signals and environmental cues. Five major pathways controlling flowering time, which include the photoperiod, vernalization, autonomous, ageing, and gibberellin pathways, have been identified in Arabidopsis (Arabidopsis thaliana). Soybean (Glycine max) is one of the most important crops worldwide for its nutritional qualities and oil content, and its flowering time is a critical agricultural trait determining successful reproduction and yields. Hence, identifying the functions of key soybean genes is of great importance for the genetic improvement of crops. Soybean is a typical SD plant whose flowering and maturation are strictly controlled by the photoperiod [39], and many studies have explored the fine-scale mechanisms of how soybean responds to the photoperiod [40][41][42][43]. In addition, although soybean is a non-vernalization plant, temperature is also a major environmental factor affecting its flowering time. Low temperature can delay the flowering time of soybean, but until now little has been known about the detailed mechanism of how temperature affects soybean flowering. In our previous study, we found that overexpression of AtDREB1A in soybean caused clearly delayed flowering [44]. qRT-PCR analyses of the expression of flowering time genes related to the vernalization pathway showed that Glyma11g13220 (GmVRN1-like) and Glyma05g28130 (designated GmFLC-like) were strongly upregulated in the DEHYDRATION-RESPONSIVE ELEMENT-BINDING 1A (AtDREB1A)-overexpressing soybean [45]. Hence, we speculated that these genes may mainly account for the phenotype. We previously confirmed that the vernalization pathway gene Glyma11g13220 plays crucial roles in regulating flowering time [46]. In this study, we isolated Glyma05g28130 from soybean; this gene belongs to the FLC clade of the MADS-box family, is intensely expressed in soybean leaves, and is involved in long-term low temperature-triggered late flowering. Additional experiments revealed that heterologous expression of GmFLC-like results in a delayed-flowering phenotype by inhibiting the transcription of FT genes in Arabidopsis. In addition, yeast one-hybrid (Y1H), dual-luciferase reporter, and electrophoretic mobility shift (EMSA) assays revealed that the GmFLC-like protein could directly repress the expression of FT2a by physically interacting with its promoter region. In brief, our findings underline the importance of GmFLC-like in the soybean response to low temperature and highlight the role of this gene as a floral repressor in delaying flowering time and providing protective mechanisms against sporadic and extremely low temperatures.
Glyma05g28130 Is a Homologue of AtFLC
Based on previous results from our laboratory, Glyma05g28130 plays crucial roles in modulating flowering time in soybean [45]. To identify the functions of Glyma05g28130 in regulating flowering time, we cloned the gene from the soybean cultivar "Huachun 5", referring to the sequence found in the Phytozome database [47]. The results showed that the cDNA sequence of Glyma05g28130 is 1513 bp in length, contains a 603 bp ORF, and encodes 200 amino acid residues. Glyma05g28130 has a predicted DNA-binding MADS domain in the N-terminus, followed by the K (keratin-like) region, and the domains were predicted at amino acid residues 1-61 and 71-181, respectively (Figure 1A). In Arabidopsis, the ancient MIKC-type MADS-box genes were further classified into 13 distinctive subfamilies based on their phylogeny, namely, the AGL2, AGL6, SQUA, AGL12, FLC, TM3, AGL17, AG, AGL15, DEF, GLO, GGM13, and STMADS11 genes [48][49][50]. Phylogenetic analyses revealed that Glyma05g28130 fell within the FLC clade of the Arabidopsis MIKC-type MADS-box family and shared a close relationship with AtFLC (AT5G10140) (Figure 1B). Hence, the gene corresponding to Glyma05g28130 was named GmFLC-like. In addition, we compared the amino acid sequence of GmFLC-like with FLC homologues of other species. The MADS-box domain sequence of GmFLC-like is highly conserved among different species, whereas conservation of the K domain is much weaker (Figure 2).
Expression Profile and Biochemical Properties of GmFLC-like
In an attempt to understand whether GmFLC-like expression has tissue specificity, we analyzed GmFLC-like transcripts in multiple tissues, including shoot apexes, roots, stems, fully expanded leaves, flowers, and pods, during the soybean development process under SD conditions by qRT-PCR. The expression of GmFLC-like was greater in the leaves than in the flowers and pods and was lowest in the stems. In the shoot apex and root, the transcript of GmFLC-like was higher in the unifoliate period than in the other developmental stages (Figure 3).
Figure 3. Expression analysis of GmFLC-like in different organs of soybean during multiple developmental stages under short-day (SD) conditions. U, the unifoliate period; T1, the first trifoliate period; T2, the second trifoliate period; T3, the third trifoliate period; T4, the fourth trifoliate period; Shoot apex (including apical meristem and immature leaves); Pod (14 days after flowering). Gmβ-tubulin (Glyma20g27280) was used as an internal control. Error bars represent the means of three biological replicates, and the letters indicate significant differences according to Duncan's multiple range test (p < 0.05).
To determine the subcellular localization of GmFLC-like, we expressed a GmFLC-like-GFP protein together with a mCherry-labelled nuclear marker protein (NF-YA4-mCherry) in N. benthamiana leaves, and both proteins were driven by the 35S promoter. We observed that the GFP fluorescence signal pattern was consistent with the localization of the nuclear marker protein (mCherry) in the nucleus, indicating that GmFLC-like is a nuclear-localized protein (Figure 4). Similarly, transient expression of GmFLC-like in Arabidopsis protoplasts also confirmed that GmFLC-like is a nuclear protein (Figure S1).
Figure 4. Subcellular localization of GmFLC-like protein in tobacco leaves. GFP was fused to the C-terminal region of GmFLC-like, and the fusion protein was driven by the 35S promoter. A mCherry-labeled fusion protein (NF-YA4-mCherry) was used as a nuclear marker driven by 35S, and 35S::GFP was used as a negative control. At 3 days after infiltration, the fluorescence signals (GFP and mCherry) were visualized by confocal microscopy, and the excitation wavelengths for GFP and mCherry were 488 and 543 nm, respectively. Scale bar, 50 µm.
Overexpression of GmFLC-like Caused Late Flowering in Arabidopsis
To investigate the biological function of GmFLC-like in regulating flowering time, we overexpressed GmFLC-like in Arabidopsis (Col-0), and the transcript abundance of GmFLC-like in the transgenic lines was confirmed using qRT-PCR (Figure 5A). Compared with the WT plants, the plants overexpressing GmFLC-like (L46 and L48) exhibited a clear delayed-flowering phenotype (Figure 5B). At 30 days, flower buds emerged in WT plants, while flower bud emergence in L46 and L48 plants was observed at days 34 and 35, respectively (Figure 5C). In addition, we also evaluated the expression of key downstream genes linked to flowering time by qRT-PCR analysis. The results indicated that, compared with the levels in the WT, transcript levels of the floral activators FT, SOC1, and AP1 in transgenic Arabidopsis (L46 and L48) decreased significantly (Figure 5D). Additional evidence indicated that the transgenic lines (L1, L48, and L46) displayed lower germination rates than the WT plants (Figure S2). Overall, GmFLC-like offers a similar function to AtFLC, and both function as floral repressors.
Figure 5. Arabidopsis β-tubulin (AT5G62690) was used as an internal control. Error bars represent the means of three biological replicates. Significant differences according to the t-test are denoted as follows: * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001. WT, the wild type of Arabidopsis; L1, L46, and L48 refer to independent transgenic lines.
GmFLC-like Is Responsive to the Photoperiod and Low Temperature
To further understand the potential functions of GmFLC-like, we analyzed the putative cis-acting elements in its promoter region (1500 bp sequence upstream of the start codon) using the PlantCARE database. Different cis-acting regulatory elements involved in the light response, hormones, and development, as well as abiotic stress, were found (Table 1). Light-response elements included AE-box, CATT-motif, G-box, TCT-motif, AT1-motif, Box-4, and so on. Hormone- and development-related elements included ABRE, GARE, TCA, and so on. HSE, MBS, ARE, and CE3, associated with abiotic stress responses, were also identified in the GmFLC-like upstream region. The variety of cis-acting regulatory elements in the GmFLC-like upstream region implies that the gene may be regulated by endogenous and external environmental signals.
Table 1. cis-acting elements of the GmFLC-like promoter. Column headings: cis-Element; Sequence.
Based on the many light-responsive cis-elements found in the GmFLC-like promoter region, we focused on the GmFLC-like function in response to the photoperiod (Table 1). Here, we chose the soybean variety "Huachun 5" as the research material, which is sensitive to the photoperiod because its flower buds emerge earlier under SD conditions [46]. Surprisingly, GmFLC-like functions as a floral inhibitor, but it displays a high transcript level under SD conditions. Further investigation showed that GmFLC-like expression was markedly high under SD conditions at 15 and 18 DAE (days after emergence) and peaked at 27 DAE, while GmFLC-like expression did not change appreciably during the same period under the LD treatment (Figure 6). We suspect that the photoperiod pathway plays a leading role in the flowering regulation of the soybean variety "Huachun 5"; although the expression of GmFLC-like is upregulated under SD conditions, its relatively weak effect cannot inhibit the flowering accelerated by the photoperiod.
Figure 6. Expression analysis of GmFLC-like under SD and LD conditions at 12, 15, 18, 21, 24, 27, and 30 DAE (days after emergence). All seedlings were grown under SD conditions for 10 DAE, and then a portion of the seedlings was transferred to LD conditions. Fully expanded trifoliate leaves were sampled at the appointed times from three individual plants growing under SD and LD conditions. Significant differences according to the t-test are denoted as follows: ** p < 0.01, *** p < 0.001, **** p < 0.0001.
In Arabidopsis, AtFLC functions as a floral repressor and is a key gene responding to low temperature [29,51,52]. Our previous research showed that long-term low-temperature treatment resulted in delayed flowering in soybean [46]. To investigate whether GmFLC-like is responsive to low temperature, we tested GmFLC-like expression under low-temperature treatment by qRT-PCR. With respect to the long-term low-temperature treatment, GmFLC-like expression was significantly higher in treated plants than in controls after treatment for 2, 4, 6, and 8 days. Surprisingly, compared with the untreated plants, GmFLC-like expression was decreased in the treated ones after 10 days of treatment (Figure 7A).
In contrast, short-term low-temperature treatment did not trigger the phenotype of delayed flowering. GmFLC-like expression also showed a remarkable difference when subjected to short-term low-temperature treatment: compared with the controls, the plants displayed significantly lower GmFLC-like expression after short-term low-temperature treatment for 2, 4, 6, 8, and 10 h (Figure 7B). Together, these results suggest that the expression pattern of GmFLC-like is clearly different between long-term and short-term low-temperature treatments. In other words, GmFLC-like may play a role in long-term low temperature-triggered delayed flowering in soybean, given that its expression was strongly upregulated by long-term low-temperature treatment and downregulated by short-term low-temperature treatment.
Identification of GmFT2a as a Downstream Target of GmFLC-like
In Arabidopsis, FLC is responsible for regulating the expression of the floral activator FT, and FLC suppresses flowering mainly by repressing the expression of such floral activators [25][26][27]. According to a previous study [53] and the newest information from the Phytozome database, we identified nine FT homologues, including FT1a, FT1b, FT2a, FT2b, FT3a, FT3b, FT4, FT5a, and FT5b. The flowering-inhibiting genes FT1a and FT4 accumulate to higher levels under LD and show expression patterns opposite to those of the other FT genes [54]. To better understand whether GmFLC-like has a function consistent with that of AtFLC, we examined the expression profiles of the nine FT genes after the beginning of long-term low-temperature treatment for 8 days. qRT-PCR confirmed that, except for FT1b and FT5a, the FT homologues were downregulated after the beginning of long-term low-temperature treatment compared with the control (Figure 8A). In this study, we found that GmFLC-like expression was greatly elevated, while GmFT1a, GmFT2a, and GmFT2b expression was significantly decreased after the beginning of the long-term low-temperature treatment. Previous research revealed that GmFT2a is responsible for inducing flowering under SD conditions, and the expression profiles of both GmFLC-like and GmFT2a matched the phenotype of delayed flowering (Figures 7A and 8A). We therefore selected GmFT2a as a candidate gene for in-depth analysis to verify whether GmFT2a is a potential downstream target gene of GmFLC-like.
Figure 8. (A) Expression of GmFT genes in soybean after the beginning of low-temperature treatment at 8 DAE. The soybean accession numbers are as follows: GmFT1a (Glyma18g53680), GmFT4 (Glyma08g47810), GmFT1b (Glyma18g53690), GmFT2a (Glyma16g26660), GmFT2b (Glyma16g26690), GmFT3a (Glyma16g04840), GmFT3b (Glyma19g28390), GmFT5a (Glyma16g04830), and GmFT5b (Glyma19g28400). Gmβ-tubulin (Glyma20g27280) was used as an internal control. The mean values ± SD from three biological replicates are shown. Significant differences according to the t-test are denoted as follows: * p < 0.05, ** p < 0.01, *** p < 0.001. (B) Interaction of GmFLC-like protein with the GmFT2a promoter and intron region, as revealed using a yeast one-hybrid system. The yeast transformations were plated onto SD/-Ura (upper panel) and SD/-Leu containing 300 ng/mL AbA (lower panel). pGADT7 with pAbAi-proFT2a-1 (from −1275 to −1156 bp), pAbAi-proFT2a-2 (from −671 to −552 bp), pAbAi-intFT2a-1 (from 410 to 551 bp), and pAbAi-intFT2a-2 (from 654 to 841 bp) were used as negative controls. The experiment was performed independently three times. (C) Relative reporter activity (LUC/REN) in N. benthamiana leaves. The relative luciferase activity (LUC/REN) in tobacco leaves was measured 48 h after Agrobacterium infiltration. Experiments were repeated five times, and the mean value ± SD is plotted on the graph. The letters indicate significant differences according to Duncan's multiple range test (p < 0.05). (D) Gel-shift analysis of GmFLC-like binding to the promoter region of GmFT2a. The sequence fragment from −663 to −628 of the GmFT2a promoter was used as a probe, and the core sequences are underlined. Purified protein (3 µg) was incubated with 25 picomoles of biotin-labeled probe. For the competition test, non-labeled probes at varying concentrations (from 10- to 100-fold excess) and a mutated unlabeled CArG probe were added to the above experiment.
To better understand the regulatory mechanism between GmFLC-like and GmFT2a, a 1385-bp promoter region and a 792-bp first intron region of the GmFT2a sequence were identified. Sequence analysis using the New PLACE database (https://www.dna.affrc.go.jp/PLACE/?action=newplace) found that six CArG motifs (CWWWWWWWWG) exist in these sequences. Based on these findings, we designed the following experiments. First, we performed a Y1H assay. The promoter sequence (−1275 to −1156 bp; −671 to −552 bp) and the first intron region (410 to 551 bp; 654 to 841 bp) of GmFT2a were amplified and inserted into the pAbAi vector, and the ORF sequence of GmFLC-like was inserted into the pGADT7 plasmid. The yeast one-hybrid (Y1H) assay confirmed that the GmFLC-like protein could target the promoter region (−671 to −552 bp) but not the first intron region of GmFT2a, based on transformant growth on SD/-Leu supplemented with 300 ng/mL AbA (Figure 8B). Additional evidence from a dual-luciferase reporter assay in N. benthamiana revealed that the activity of the GmFT2a promoter could be inhibited by overexpression of GmFLC-like driven by the CaMV 35S promoter (Figure 8C). EMSA was also performed to verify the binding of GmFLC-like to the GmFT2a promoter. The 36 bp sequence fragment spanning positions −663 to −628 of the GmFT2a promoter was used as a probe. The probe contained the predicted CArG motif (CAATTAATTG), which is the binding site for plant MADS-domain proteins. The fusion protein GST-GmFLC-like was purified from Escherichia coli and then co-incubated with biotin-labeled and non-labeled probes. GmFLC-like was found to bind to the biotin-labeled proGmFT2a probe; furthermore, the binding was gradually weakened by increasing concentrations of the non-labeled probe, while it was not affected by the mutated unlabeled CArG probe (Figure 8D). These results indicated that GmFT2a is potentially one of the direct targets of GmFLC-like.
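As an illustration of the motif search step, the short sketch below scans a DNA sequence for the CArG consensus CWWWWWWWWG (W = A/T) on both strands. It is a generic re-implementation for readers, not the New PLACE tool itself, and uses the 36-bp EMSA probe quoted above purely as example input.

```python
# Hedged sketch of a CArG-box scan: match CWWWWWWWWG (W = A or T) on both
# strands. Example input is the 36-bp EMSA probe quoted in the text.
import re

CARG = re.compile(r"C[AT]{8}G")

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_carg(seq):
    seq = seq.upper()
    hits = [(m.start(), "+", m.group()) for m in CARG.finditer(seq)]
    hits += [(len(seq) - m.end(), "-", m.group())
             for m in CARG.finditer(revcomp(seq))]
    return sorted(hits)

probe = "CACTCAAGTGTTGCCAATTAATTGACAAAAAATGGT"
print(find_carg(probe))  # finds the palindromic CAATTAATTG core on both strands
```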
Discussion
For plants, temperature is a main environmental cue that strongly influences flowering time through different pathways, mainly including the vernalization and ambient temperature pathways [1,4,55]. In many species, FLC, encoding a MADS-box transcription factor, is the core gene in the vernalization pathway, preventing flowering by inhibiting several floral activators, including FT and SOC1 [13]. In Brassica plants, FLCs have been well studied over many years [56][57][58]. Researchers found that vernalization is closely correlated with epigenetic silencing of FLCs, mediated for example by the antisense RNA COOLAIR. The transcripts of COOLAIR are polyadenylated at multiple sites, with proximal polyadenylation promoted by components of the autonomous promotion pathway. Use of the proximal poly(A) site results in quantitative downregulation of FLC expression in a process requiring FLD, an H3K4me2 demethylase [59]. Furthermore, the functions of FLC homologues show common features among different species [31,34]. However, no functional FLC homologues have been characterized in crops thus far [60]; one possible reason for this is that some crops have lost their FLC homologues. For example, FLC homologues may be absent from the rice genome [61]. In winter cereals, a gene called VRN2, which is downregulated during vernalization and functions in inhibiting FT expression [62], plays a similar role instead of genes orthologous to FLC.
Soybean is considered a typical short-day plant that does not require vernalization. Interestingly, comparative genomic analysis shows that it still retains one homologue of FLC, Glyma05g28130 [63,64]. In our previous study, we found that Glyma05g28130 was strongly upregulated in delayed-flowering AtDREB1A-overexpressing transgenic soybean [45]. Hence, we speculated that Glyma05g28130 is involved in flowering time regulation. In this paper, we found that Glyma05g28130 encodes a member of the MADS-box transcription factor family, containing two conserved domains, the MADS-box domain and the K domain (Figure 1A). Phylogenetic analysis revealed that Glyma05g28130 falls within the FLC clade and is closely related to AtFLC in Arabidopsis, and it was therefore named GmFLC-like (Figure 1B). In addition, the MADS-box domain is highly conserved between GmFLC-like and FLC proteins from other species (Figure 2). These cues imply that GmFLC-like may also be a floral repressor responsible for directly targeting and repressing the expression of the floral activator FT, showing functions similar to those in Arabidopsis [25][26][27]. Indeed, heterologous overexpression of this gene in Arabidopsis resulted in a marked late-flowering phenotype compared with the WT (Figure 5B,C). In addition, the expression of SOC1, AP1, and FT was significantly decreased in transgenic Arabidopsis (L48 and L46) (Figure 5D). Previous studies also confirmed that AtFLC homologues, such as BvFL1 and CiFL1, perform the same functions in different species [23,29,30,34,65,66]. In summary, these findings suggest that some biological functions of AtFLC homologues are conserved among different species.
In vernalization-requiring plants, FLC transcript levels are very high before vernalization but are subsequently inhibited by epigenetic modification under cold conditions, which further accelerates flowering. Soybean, a non-vernalization plant, shows the opposite phenomenon. We found that the popular variety "Huachun 5", which is suitable for growing in South China, is sensitive to low temperature. Under the long-term low-temperature treatment, GmFLC-like expression was induced, contributing to the delayed-flowering phenotype (Figure 7A). Soybean originated in North China, where temperatures are low; therefore, we speculate that the soybean genome has retained low-temperature-sensitive genes in order to cope with disadvantageous environments. Later, to enlarge the planting range of soybean, breeders selected varieties that can be grown in the early spring of South China. Meanwhile, the low-temperature-responsive gene FLC may have been retained under this selection process to fit the relatively low temperatures.
In Arabidopsis, FLC delays flowering by repressing the expression of the floral integrator gene FT [12,13,25,67]. In this study, we found that GmFLC-like expression was greatly increased while GmFT2a expression was significantly decreased under the long-term low-temperature treatment (Figures 7A and 8A), implying that the FLC-FT model may exist in soybean as well. We then confirmed that the GmFLC-like protein binds to the promoter region of GmFT2a in vivo and in vitro (Figure 8B−D). The confirmation of this FLC-FT2a model reveals once again that the molecular mechanism of flowering time regulation is conserved among different species. Together with the variability observed in the FLC-FT2a model, these results imply that GmFLC-like may play an important role in delaying flowering time and providing protective mechanisms against sporadic and extremely low temperatures.
Plant Materials and Growth Conditions
The soybean cultivar "Huachun 5", which was bred by the Guangdong Sub-center of the National Center for Soybean Improvement, was used as the experimental material. Mature soybean seeds were surface sterilized and germinated in vermiculite. Uniform soybean seedlings were selected and grown in plastic pots containing turf soil and vermiculite at a ratio of 3:1 (v/v) in a growth chamber at 27 ± 2 °C, 40% relative humidity, and 100 µmol m−2 s−1 illumination with fluorescent lamps. The day-length regimes were as follows: SD conditions (8 h of light/16 h of dark) and LD conditions (16 h of light/8 h of dark). The Arabidopsis wild-type (WT) and transgenic plants were of the Columbia (Col-0) ecotype. Arabidopsis seeds were surface sterilized and sown on half-strength MS medium for 2 days at 4 °C to relieve dormancy. Subsequently, the plates were transferred to a growth chamber at 22 ± 1 °C under LD conditions (16 h of light/8 h of dark) for 7 days, and the seedlings were then transferred to pots containing turf soil and vermiculite (3:1, v/v).
Total RNA Isolation and qRT-PCR Analysis
The total RNA of the second trifoliate soybean leaf was isolated using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). Then, 1 µg of total RNA was reverse-transcribed using HiScript II Q RT SuperMix for qPCR (R233-01, Vazyme Biotech Co., Nanjing, China). qRT-PCR analysis was carried out on a StepOne Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) using a Kapa SYBR Fast Universal qPCR Kit (Kapa Biosystems, Boston, MA, USA). Soybean β-tubulin (Glyma20g27280) and Arabidopsis TUB2 (AT5G62690) were used as internal controls. The experiments were performed with three biological replicates, and the data were evaluated by the 2^(−ΔΔCt) method [68]. All primers used in this study are listed in Table S1.
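For reference, a minimal sketch of the 2^(−ΔΔCt) calculation is given below; the Ct values are invented purely for illustration.

```python
# Minimal sketch of the 2^(-ddCt) relative quantification; Ct values invented.
def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """2^-((Ct_target - Ct_ref)_treated - (Ct_target - Ct_ref)_control)."""
    dct_treated = ct_target_treat - ct_ref_treat
    dct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(dct_treated - dct_control)

# e.g. GmFLC-like vs. beta-tubulin, cold-treated vs. control leaves
print(ddct_fold_change(24.1, 18.0, 26.6, 18.2))  # ~4.9-fold upregulation
```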
Plasmid Construction
The open reading frame (ORF) of GmFLC-like was amplified using specific primers K-FLC-F and K-FLC-R and inserted into the pCAMBIA1301 plasmid by the BamHI and KpnI restriction sites to form the expression vector 35S::GmFLC-like-GFP. Genomic DNA was isolated from leaf samples of soybean using the modified cetyltrimethyl ammonium bromide (CTAB) method [69]. The promoter sequence of GmFLC-like was amplified using specific primers designed as Pro-GmFLC-F and Pro-GmFLC-R, and genomic DNA of soybean was used as the template. The amplified PCR product was inserted into a pZeroBack/blunt vector (Tiangen Biotech Co., Beijing, China) for sequencing. All primers used are listed in Supplementary Table S1.
Sequence Analysis
The conserved domains of GmFLC-like were predicted by using InterProScan [70]. The phylogenetic tree was generated based on alignment results using the neighbor-joining algorithm with 1000 bootstrap replicates in MEGA 5.0 software [71]. Multiple amino acid sequences were aligned using ClustalW with default parameters, and the comparison result was displayed by the software BioEdit [71]. Protein sequence logos were created using WebLogo 3.3 [72]. Homologous protein sequences of GmFLC-like were searched in the Phytozome databases [47]. The cis-acting elements in the GmFLC-like promoter (1000 bp upstream of the start codon) were analyzed using the PlantCARE program [73].
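The bootstrap support values reported on the tree branches come from MEGA; the sketch below shows only the underlying idea, resampling alignment columns with replacement, using a placeholder alignment and omitting the tree-building step itself.

```python
# Illustrative column-resampling bootstrap for branch support; the alignment
# is a placeholder, and tree building per replicate is delegated to MEGA.
import random

def bootstrap_columns(alignment, n_replicates=1000, seed=1):
    """alignment: list of equal-length sequences; yields resampled copies."""
    rng = random.Random(seed)
    length = len(alignment[0])
    for _ in range(n_replicates):
        cols = [rng.randrange(length) for _ in range(length)]
        yield ["".join(seq[c] for c in cols) for seq in alignment]

aln = ["ATGCATGCAT", "ATGTATGCAA", "ATGCATGAAT"]
first_replicate = next(bootstrap_columns(aln))
print(first_replicate)  # one pseudo-alignment; a tree is rebuilt per replicate
```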
Subcellular Localization of GmFLC-like Protein
The ORF sequence of GmFLC-like without the stop codon was fused to the N-terminal region of the GFP protein, and the resulting fragment was inserted into the pCAMBIA1301 plasmid to form the expression vector 35S::GmFLC-like-GFP. The fused vector was transformed into epidermal cells of Nicotiana benthamiana leaves by Agrobacterium infection and expressed for 3 days. The fluorescence signals were monitored with a confocal microscope (Olympus FluoView FV1000). The GFP and mCherry proteins were imaged using 488 and 543 nm excitation, respectively. The plasmids 35S::GFP and 35S::NF-YA4-mCherry were used as the negative control and nuclear marker, respectively [74].
Ectopic Expression of GmFLC-like in Arabidopsis
The expression vector 35S::GmFLC-like-GFP was transformed into the Agrobacterium tumefaciens GV3101 strain by electroporation. Arabidopsis (Col-0) was used as the transformation material, and transformation was performed using the floral dip method [75]. Transgenic Arabidopsis seeds were grown on half-strength MS medium supplemented with 25 mg/L hygromycin, and three homozygous T2 transgenic lines with different GmFLC-like expression levels were chosen for further studies, including phenotype investigation and expression analyses of potential downstream genes.
Photoperiod Treatment
Soybean seeds were grown under SD conditions for 10 days, and then a portion of uniform seedlings was transferred to LD conditions. Fully expanded trifoliate leaves were sampled at 12, 15, 18, 21, 24, 27, and 30 DAE from plants growing under SD and LD conditions, respectively. All samples were immediately frozen in liquid nitrogen and stored at −80 °C for further study.
Low-Temperature Treatment
Uniform soybean seedlings were grown in a growth chamber at 28 °C/26 °C (day/night) under SD conditions until the fourth trifoliate stage. The seedlings were then divided into two groups for the following low-temperature treatments: a long-term treatment group and a short-term treatment group. For the long-term treatment group, plants were continuously treated at 15 °C/13 °C (day/night) under SD conditions for 10 days, and leaves were sampled every 2 days from three individual plants. For the short-term treatment group, plants were continuously treated at 15 °C/13 °C (day/night) for 10 h in the dark, and leaves were sampled every 2 h from three individual plants. The control seedlings were grown at 28 °C/26 °C under the same conditions corresponding to the long- and short-term treatment groups, and leaves were sampled from three individual control plants at the same collection times used for the low temperature-treated plants.
Yeast One-Hybrid Assay
A yeast one-hybrid (Y1H) assay was performed as previously described [76,77]. The promoter and the first intron sequence of GmFT2a were amplified and inserted into the pAbAi vector to form the bait plasmids, designated pAbAi-proFT2a-1, pAbAi-proFT2a-2, pAbAi-intFT2a-1, and pAbAi-intFT2a-2, respectively. After linearization with BstBI, the bait plasmids were transformed into the yeast strain Y1HGold according to the manufacturer's instructions for the Matchmaker™ Gold Yeast One-Hybrid Library Screening System Kit (Clontech, Mountain View, CA, USA). The GmFLC-like ORF sequence was inserted into the pGADT7 vector to form the prey plasmid, which was then transformed into the bait-carrying Y1HGold strains. Subsequently, the yeast transformants were plated onto synthetic dropout (SD) medium lacking uracil (Ura) or leucine (Leu) but supplemented with 300 ng/mL Aureobasidin A (AbA). After 3 days of incubation at 28 °C, interactions were defined on the basis of transformant growth on SD/-Leu medium with 300 ng/mL AbA.
Dual-Luciferase Reporter Assay
For the dual-luciferase assay, we used two vectors: pGreenII 0800-LUC and pGreenII 0029 62-SK. The GmFT2a promoter was amplified and inserted into the pGreenII 0800-LUC vector to drive LUC reporter gene expression, and the GmFLC-like ORF sequence was inserted into the pGreenII 0029 62-SK vector. The pGreenII 0800-LUC vector carries a Renilla luciferase (REN) gene driven by the 35S promoter to serve as an internal control. The two plasmids were co-expressed in tobacco leaves by Agrobacterium infiltration as previously described [72]. At 48 h after transformation, LUC and REN luciferase activities were measured using a Dual-Luciferase® Reporter Assay System (Promega, Madison, WI, USA) and a GloMax®-20/20 Single Tube Luminometer (Promega, Madison, WI, USA). The relative luciferase activity was calculated as the ratio of the firefly luciferase activity of the sample to that of the Renilla luciferase control.
Electrophoretic Mobility Shift Assay
Briefly, E. coli BL21 cells carrying the GST-GmFLC-like fusion construct were cultured in 100 mL LB medium supplemented with IPTG at a final concentration of 0.2 mM, and the cultures were incubated at 30 °C for 6 h. The fusion protein was extracted from the E. coli cells and purified according to the manufacturer's instructions for glutathione beads (Glutathione-Sepharose 4B, GE Healthcare, Chicago, IL, USA). EMSA was performed according to the instructions of the LightShift® Chemiluminescent EMSA Kit (Pierce, Milan, Italy). The biotin-labeled DNA fragments (5′-CACTCAAGTGTTGCCAATTAATTGACAAAAAATGGT-3′) were synthesized, annealed, and used as probes, with unlabeled DNA of the same sequence and of a mutant sequence (5′-CACTCAAGTGTTGCAAATTAATTTACAAAAAATGGT-3′) used as competitors.
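A quick illustrative check of the probe design is shown below: comparing the wild-type and mutant competitor sequences quoted above confirms that the substitutions fall on the invariant C and G of the CAATTAATTG CArG core.

```python
# Illustrative comparison of the wild-type and mutant EMSA probes quoted above.
wt  = "CACTCAAGTGTTGCCAATTAATTGACAAAAAATGGT"
mut = "CACTCAAGTGTTGCAAATTAATTTACAAAAAATGGT"

diffs = [(i, a, b) for i, (a, b) in enumerate(zip(wt, mut)) if a != b]
print(diffs)  # [(14, 'C', 'A'), (23, 'G', 'T')] -> the C and G of CAATTAATTG
```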
Data Analysis
GraphPad Prism 8.0 (GraphPad Software Inc., San Diego, CA, USA) was used for data analysis and graph generation. For comparisons of two groups, the two-sided Student's t-test was used; asterisks *, **, ***, and **** indicate significant differences at p < 0.05, p < 0.01, p < 0.001, and p < 0.0001, respectively. For comparisons of multiple groups, one-way ANOVA with Bonferroni's post hoc test was used, and p < 0.05 was considered significant.
Data Availability
The GenBank accession numbers for soybean genes used in this study are as follows: GmFLC-like (MK913903); Promoter of GmFLC-like (KY203812).
Conflicts of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Epidemiology and Genetic Diversity of Rotavirus Strains in Children with Acute Gastroenteritis in Lahore, Pakistan
Pakistan harbors a high disease burden of gastro-enteric infections, with the majority of these caused by rotavirus. Unfortunately, the lack of proper surveillance programs and laboratory facilities has resulted in a scarcity of available data on rotavirus-associated disease burden and epidemiological information in the country. We investigated 1306 stool samples collected over two years (2008–2009) from hospitalized children under 5 years of age for the presence of rotavirus strains and their genotypic diversity in Lahore. The prevalence rate during 2008 and 2009 was found to be 34% (n = 447 out of 1306). No significant difference was found between different age groups positive for rotavirus (p > 0.05). A subset of EIA-positive samples was further screened for rotavirus RNA through RT-PCR, and 44 (49.43%) samples, out of a total of 89 EIA-positive samples, were found positive. Phylogenetic analysis revealed that the VP7 and VP4 sequences clustered closely with strains previously detected in the country as well as with Belgian rotaviruses. Antigenic characterization was performed by analyzing major epitopes in the immunodominant VP7 and VP4 gene segments. Although the neutralization-conferring motifs were found to vary between the Pakistani strains and the two recommended vaccine strains (Rotarix™ and RotaTeq™), we validate the use of rotavirus vaccine in Pakistan based on the proven and recognized vaccine efficacy across the globe. Our findings constitute the first report on rotavirus genotype diversity, phylogenetic relatedness, and epidemiology during the pre-vaccination era in Lahore, Pakistan, and support the immediate introduction of rotavirus vaccine in the routine immunization program of the country.
Introduction
Diarrheal infections are a significant cause of infant and childhood morbidity and mortality in both developing and developed countries [1]. Most of these infections have a viral etiology, principally rotaviruses, which alone account for approximately 527,000 (475,000–580,000) deaths per annum among children less than 5 years of age [2,3]. Globally, more than 2 million children are hospitalized every year for rotavirus infection, and 90% of rotavirus-associated mortalities occur in Africa and Asia [4,5]. Rotaviruses have been classified into seven heterogenic groups (A to G) based on their genetic and antigenic properties. Group A rotaviruses are the most important and most frequently detected viruses among the three groups (A, B, and C) that infect humans [6]. The virus belongs to the family Reoviridae, with a triple-coated icosahedral virion particle enclosing the 11-segmented double-stranded RNA genome. The triple-layered capsid consists of two proteins in the outer shell (VP7 and VP4); the intermediate layer constitutes VP6, while VP2 forms the inner layer enclosing the two proteins VP1 and VP3. Based on the VP6 capsid gene, the virus has been classified into seven major genogroups, while VP7 and VP4 are the basis of a binary system for further classifying the group A viruses into 27 G- and 35 P-genotypes, respectively [7]. Globally, the most important types causing the majority of infections are G1P[8], G2P[4], G3P[8], G4P[8], and G9P[8] [8,9]. However, significant diversity of rotavirus genotypes has emerged, with several novel combinations due to the accumulation of point mutations, genome reassortments, or zoonotic transmission of animal strains resulting in the introduction of new antigenic variants [10,11].
Rotavirus strain identification is considered a key component of epidemiological surveys, disease distribution and genotype prevalence studies, and vaccine administration and efficacy monitoring programs. Many past studies have highlighted the significance of continued monitoring of circulating rotavirus strains in order to maintain sufficient population immunity [12]. In Pakistan, there is no well-developed surveillance system for rotavirus strain identification, although the country's Ministry of Health has initiated a hospital-based surveillance network to serologically test stool samples from children presenting with gastroenteritis at central district hospitals in 3 major cities: Karachi (Sindh province), Lahore, and Rawalpindi (Punjab province). These sentinel sites perform ELISA for the diagnosis of rotavirus infection without any further analysis for viral genotype identification. This study is in continuation of our previous work, in which we identified an emerging rotavirus genotype, G12, in two children admitted to a hospital in Rawalpindi [13]. To further explore the epidemiology and genotypic diversity of rotavirus in Pakistan, we report here the findings on rotavirus subtypes detected in children hospitalized, due to severe dehydrating diarrhea, at Children's Hospital Lahore.
Materials and Methods
Samples from hospitalized children suspected of rotavirus gastroenteritis were collected as per the World Health Organization's standard case definitions, which describe a suspected case as a child <5 years of age admitted to a designated sentinel hospital for treatment of gastroenteritis, while a confirmed case is a suspected case in whose stool the presence of rotavirus is demonstrated by means of an enzyme immunoassay. The study concept and design were approved by Pakistan's National Institute of Health Internal Review Board. The samples were collected after informed and written consent from the patients' parents/guardians.
A total of 1306 stool samples were collected from hospitalized patients at The Children's Hospital, Lahore, during January 2008 to December 2009. The majority of these samples were collected from children below 5 years of age who were hospitalized with suspected rotavirus gastroenteritis. The collected stool samples were processed for the detection and confirmation of rotavirus antigen. The ELISA test was performed using the ProSpecT™ Rotavirus Microplate Assay (Oxoid Ltd., Basingstoke, Hants, UK) as per the World Health Organization's recommendations (http://whqlibdoc.who.int/publications/2011/9789241502641_eng.pdf).
A subset (20%) of EIA-positive samples was transported to the Department of Virology, National Institute of Health, Islamabad, for rotavirus RNA detection through RT-PCR and further genotype determination on the basis of the VP7 and VP4 gene segments, using the protocols described by Gouvea et al. (1990) [14] and Gentsch et al. (1992) [15], respectively. Amplified products from first-round PCR reactions were purified using the QIAquick PCR purification kit (Qiagen, Germany) and were directly sequenced for the VP7 and VP4 genes using the BigDye Terminator sequencing kit v3.0 on an automated ABI 3130xl genetic analyzer (Applied Biosystems).
Phylogenetic analyses of the VP7 and VP4 sequences were performed in comparison with strains from different geographical regions retrieved from GenBank. Evolutionary trees and distances (number of base substitutions per site) were generated by the Neighbor-Joining method with the Kimura two-parameter model using MEGA 4.0 (http://megasoftware.net/). The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches. GenBank accession numbers, country, year of sample collection and respective genotype information are given where available. The VP7 sequences obtained in this study have been submitted to GenBank under accession numbers KC896141-KC896157 and the VP4 sequences under accession numbers KC896127-KC896140.
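The distances underlying these trees were computed in MEGA, but the Kimura two-parameter model itself is simple enough to illustrate. Below is a minimal Python sketch of the K2P distance between a pair of aligned DNA sequences; the toy fragments are hypothetical placeholders, not actual VP7 or VP4 data.

import math

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences."""
    # Keep only unambiguous, ungapped site pairs
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    p = sum((a, b) in TRANSITIONS for a, b in pairs) / n   # transition fraction
    q = sum(a != b and (a, b) not in TRANSITIONS
            for a, b in pairs) / n                         # transversion fraction
    # d = -1/2 * ln((1 - 2P - Q) * sqrt(1 - 2Q))
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# Toy example: two short aligned fragments differing by two transitions
print(round(k2p_distance("ACGTACGTAC", "ACGTGCGTAT"), 4))  # 0.2554

A full reconstruction would feed a matrix of such pairwise distances into a Neighbor-Joining implementation, which is what MEGA does internally.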
Results
Prevalence, Epidemiology and RVA-Genotypes
Screening of the 1306 samples collected during the two-year period (2008-2009) for the presence of rotavirus antigen yielded 447 (34.22%) samples positive for the group A major inner capsid protein (VP6 antigen) common to all known rotavirus genotypes. The prevalence rates during 2008 and 2009 were 34.12% (243 out of 712) and 34.34% (204 out of 594), respectively. No significant difference (p > 0.05) was found between the different age groups positive for rotavirus by ELISA; however, the highest infection rates were observed among children aged 12-17 months during 2008 and 24-59 months during 2009. Rotavirus-positive cases occurred throughout the year, with peaks during January to April and July to September (Figure 1a and 1b).
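The near-identical prevalence across the two years is easy to verify from the reported counts (up to rounding). The sketch below recomputes the rates and runs a chi-square test on the resulting 2x2 table; the paper does not state which test was used for this comparison, so the chi-square test here is an assumption for illustration.

from scipy.stats import chi2_contingency

pos_2008, n_2008 = 243, 712
pos_2009, n_2009 = 204, 594
print(f"2008: {pos_2008 / n_2008:.2%}, 2009: {pos_2009 / n_2009:.2%}")

# 2x2 table: positives vs. negatives per year
table = [[pos_2008, n_2008 - pos_2008],
         [pos_2009, n_2009 - pos_2009]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")  # p well above 0.05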
A subset of EIA-positive samples, based on availability, was further screened for the presence of rotavirus RNA through RT-PCR. Out of 89 EIA-positive samples available from archived stock kept at −80 °C, 44 (49.43%) were found positive for rotavirus by PCR. G and P types were determined for these samples; their distribution and prevalence are described in the following sections.
Phylogenetic Analysis Based on RVA-VP7 and -VP4 Gene Segments
The genetic sequence of 18 viruses included in this study was determined for the VP7 gene. These samples were randomly selected on the basis of sample quantity and resource limitations. Sequencing showed that all viruses were highly identical (99-100%) to each other within their respective genotypes: G1, G2 and G9. The G1 strains shared close sequence similarity with previously reported viruses from Pakistan (JN001862) as well as with those from Belgium (HQ392261, HQ392204) and the USA (JN258346). The VP7 sequences of the G9 strains were 100% identical among themselves, and a similarly high level of homology was found with viruses from South Africa (GQ338887), Russia (FJ447573) and Australia (AY307090). These viruses also shared 98% similarity with previously reported G9 strains (JN001865, JN001866) from the city of Faisalabad, Pakistan (unpublished data). Likewise, the G2 viruses showed 99-100% identity among themselves, and a similar genetic closeness was found with G2 strains from Bangladesh (EF690778, EF690782) as well as the previously identified G2 strains from Pakistan (JN001883; unpublished data).
The genetic relationships of the rotavirus strains from this study were also determined on the basis of the VP4 gene. Fourteen viruses were grouped into three genotypes, P[4], P[6] and P[8], with P[8] most prevalent in combination with G1, G2 and G9. Genotype P[6] was found with G1 and G9 counterparts, while P[4] was found with G2, which was the most common circulating strain during the study period, with 48% prevalence. In a pattern similar to the G types, the P[6] and P[8] strains detected in Lahore during 2008 and 2009 revealed the closest identity to previously reported viruses from Pakistan (JN001876-79; unpublished data).
Phylogenetic reconstruction revealed clustering of all rotavirus strains from the Children's Hospital, Lahore, with viruses (JN001862 (G1P[8]); JN001865 (G9P[8]); JN001883 (G2P[6])) previously detected in the Faisalabad district, located 118 km south-west of Lahore (Figure 2a and 2b). The viruses classified as G1 fell into two separate lineages: Lineage A contains rotavirus strains from both Lahore and Faisalabad, while two of the strains (NIHPAK-210 and NIHPAK-417) formed Lineage B, clustering with viruses from the USA (JN258346) and Belgium (HQ392204).
Comparison of RVA-VP7 Antigenic Regions between Pakistani and Vaccine Strains
The effectiveness of rotavirus vaccines can be assessed by analyzing the amino acid (aa) differences in the neutralizing epitopes of the VP7 and VP4 proteins between vaccine and circulating strains. We compared the VP7 aa sequences of the Pakistani viruses constituting the antigenic epitopes with those of the two available vaccine strains (RotaTeq™ and Rotarix™). The VP7 protein contains three antigenic epitopes, defined as 7-1a, 7-1b and 7-2, which comprise 29 amino acid residues from positions 87 to 291 based on rhesus rotavirus strain numbering (GenBank accession No. AF295303) [16]. The comparison of all Pakistani rotavirus strains found in Lahore with the vaccine strains showed that, of the 29 amino acids, only five are conserved across all strains (Figure 3a).
Analysis of the Pakistani G1 strains showed that the 29 amino acid residues across the three antigenic sites are highly conserved when compared to the Rotarix™ G1P[8] strain, while two differences were found, at positions 97 (D→E; aspartic acid to glutamic acid) and 147 (S→N; serine to asparagine) within the 7-1a and 7-2 regions, when compared to the RotaTeq™ G1P[5] strain.
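This kind of residue-by-residue epitope comparison is mechanical once the sequences are aligned to a common numbering. A minimal sketch is shown below; the position list and residue maps are illustrative placeholders rather than the full 29-residue epitope set.

def epitope_differences(field, vaccine, positions):
    """Return (position, vaccine_aa, field_aa) for each mismatching residue."""
    return [(pos, vaccine[pos], field[pos])
            for pos in positions if field.get(pos) != vaccine.get(pos)]

# Toy residue maps keyed by VP7 position (rhesus strain numbering)
vaccine_g1 = {94: "N", 97: "D", 147: "S", 291: "K"}
field_g1 = {94: "N", 97: "E", 147: "N", 291: "K"}
print(epitope_differences(field_g1, vaccine_g1, [94, 97, 147, 291]))
# -> [(97, 'D', 'E'), (147, 'S', 'N')], matching the D97E and S147N changes above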
Among the G9 viruses in this study, the VP7 protein was compared with the Rotarix™ G1 strain as well as all constituent strains of the RotaTeq™ vaccine (G1-4 with P[5], and G6P[8]). Thirteen aa changes were found across all three epitopes when compared to the Rotarix™ strain. In comparison with the RotaTeq™ G1 strain, 14 variations were found in the Pakistani viruses, and 19 amino acid changes were noticed in comparison to the G2 RotaTeq™ strain. Similarly, 11 variations were observed between the G3 RotaTeq™ strain and the Pakistani G9 viruses, with the majority of differences found in the 7-1b epitope, in which all aa residues were changed except glutamine at position 201. For the G4 RotaTeq™ strain, 10 aa changes were found in the study strains across the three epitope regions. When compared to the G6P[8] RotaTeq™ strain, 12 dissimilarities were noticed across the three antigenic regions, mainly within the 7-1b region.

(Figure 2. Phylogenetic analysis of the VP7 (a) and VP4 (b) gene segments of the study viruses isolated from hospitalized children in Lahore. Each of the G1, G2 and G9 (VP7) and P[4], P[6] and P[8] (VP4) genotype viruses from this study is indicated with a filled circle, while previously available GenBank sequences from Pakistan are marked with an arrowhead. Closely matching sequences retrieved from NCBI GenBank were included in the tree reconstruction. doi:10.1371/journal.pone.0067998.g001)
Comparison of RVA-VP4 Antigenic Regions between Pakistani and Vaccine Strains
The VP4 protein of rotavirus contains four antigenic regions, defined as 8-1 to 8-4, spanning amino acid positions 88 to 196 based on the rhesus rotavirus strain (GenBank accession No. AY033150) [17]. The VP4 sequences of the Pakistani viruses were compared to those of the vaccine strains, and significant variations were found across the four regions, except for amino acid residues 180 and 132, which were well conserved among all Pakistani and vaccine strains (Figure 3b).
When compared to the Rotarix™ P[8] strain, the Pakistani P[8] strains with G1 and G2 counterparts showed a higher degree of disparity than those with G9. The Pakistani strains contained 3 to 12 residues that differed from the RotaTeq™ P[8] strain, while comparison with the Rotarix™ P[8] strain was more variable, with 7 to 13 amino acid differences. Interestingly, all three amino acid residues (87-89; NTN) of the 8-4 epitope were well conserved among all P[8] Pakistani viruses.
A high degree of variation was also observed for the P[4] and P[6] strains. For the P[4] strains, there were 13 amino acid differences compared to Rotarix™, while 22 to 23 residues differed from the RotaTeq™ strains. Comparison of the P[6] strains showed 17 differences with Rotarix™, whereas 20 amino acid differences were found when compared to the RotaTeq™ strains.
Discussion
Diarrhea accounts for approximately 5% of all deaths among children below five years of age. Viral pathogens are the most common cause of gastroenteritis in both developing and developed countries [18-20], especially rotaviruses, noroviruses, astroviruses and adenoviruses [21]. Three rotavirus serogroups (A, B and C) are known to cause gastroenteritis in humans [22]. The VP7 genotypes of group A rotavirus most commonly associated with different combinations of VP4 genetic counterparts are G1, G2, G3, G4 and G9 [23]. However, unusual genotypes with different G and P specificities have recently been reported from various parts of the world [1].
The seasonality of rotavirus gastroenteritis in Pakistan has never been determined, although cases have been detected year-round [24]. In our study, rotavirus was found during all seasons, especially in the early and late calendar months. This may be attributed to the rainy and post-monsoon seasons in our country and supports previous findings that low air temperature and a dry environment increase rotavirus incidence [25]. The same pattern is typical of tropical areas, where rotavirus infections appear throughout the year without any significant epidemic peaks [26]. Similar findings have been reported from Bangladesh, other South Asian regions, Bahrain and Costa Rica, where rotavirus is detected all year round with less obvious seasonality [27-30]. However, our findings contrast with those from the USA, Japan, the Northern Asian region, Australia and Europe, where rotavirus diarrhea peaks in winter and is rarely identified in summer [31-35].
The highest percentage of samples positive for rotavirus was found among children between 12 and 17 months of age. These findings differ slightly from a previous study conducted in Karachi, Pakistan [36], in which most rotavirus infections were found among children less than 12 months of age. This variation might reflect a different study design and target population: our study included samples from patients hospitalized with severe gastroenteritis, whereas Qazi et al. [36] studied the incidence of rotavirus-associated infections in low-income communities of Karachi.
During the last 15 years, very few reports on circulating rotavirus genotypes have been available from Pakistan [36-38], describing the circulation of G1, G2, G4 and G9 in combination with P[4], P[6], P[8] and P[11]. The current study focused on the detection of circulating rotavirus genotypes causing gastroenteritis in Lahore, Punjab province, and provides relatively comprehensive epidemiological as well as genetic information. Although G1 is the most common genotype found in diarrhea or gastroenteritis cases worldwide, we found G2P[4] to be the most prevalent genotype, in 48% of study samples. Genotype G9 was found in 20% of our samples, in combination with P[4], P[6] and P[8]. A similarly increased prevalence of G9 strains has been found in neighboring countries, including India and Bangladesh [39]. This genotype has recently gained considerable epidemiological attention worldwide due to its variable vaccine response and infectivity rate [9,40]. We found G9P[8] to be the third most prevalent genotype, which is consistent with recent findings from India [41]. Twenty percent of our samples were positive for G1P[6], which has not been reported previously from Pakistan. We did not detect any G3 or G4 genotypes in our samples, which is consistent with their significant global decline in recent years [41]. G1P[8] was found in only one sample, even though it has remained the most commonly detected strain globally and forms the basis of one of the currently recommended vaccines, Rotarix™. Surprisingly, we did not find any G12 genotype in this study, even though G12 has recently emerged with numerous reports of global spread [42,43]. We cannot conclude that this genotype is absent from our population, given the limited time period and number of samples selected for this study. Such variation and the vast genetic differences among rotavirus strains underscore the need for continued surveillance to assess the role of strain variability in the design of new vaccine candidates, as well as to measure the impact of vaccine introduction in the community. Continued surveillance programs and characterization of rotavirus strains during the post-vaccination era are needed to monitor the emergence of 'escape' strains due to long-term pressure exerted by homotypic immunity [44].
In the present study, we compared the amino acid motifs constituting the neutralizing epitopes of the VP7 and VP4 proteins between circulating Pakistani rotaviruses and the available vaccine strains. Multiple antigenic variations were noticed in both the VP7 and VP4 epitopes, highlighting the need for detailed antigenic mapping of the prevalent rotavirus genotypes in the country. Complete genome sequencing will therefore be required to generate more comprehensive information on the molecular epidemiology and evolutionary dynamics of rotaviruses, on account of the segmented genome and the possible role of internal genes in immunity [45]. Although the amino acid residues within the VP7 protein of the G1 strains showed higher similarity with the Rotarix™ strain, significant differences were found across all four epitopes of the VP4 protein. In addition, a comparable degree of disparity was noticed between the RotaTeq™ and Pakistani strains in both the VP7 and VP4 proteins. Although such differences do not change the genotype specificity of a rotavirus strain, they may significantly influence the binding of neutralizing antibodies and hence viral fitness through selective pressure [45]. Similar findings have been reported from countries such as Belgium, where a rotavirus immunization program was initiated in 2006 and rapidly achieved massive coverage of up to 88% [46].
Vaccine introduction has significantly reduced the rotavirus disease burden in many countries, including Australia, Austria, Belgium, Brazil, El Salvador, Nicaragua, Panama and the United States, and both vaccines have been found to carry equivalent efficacy against G1, G3, G4 and G9 strains with P[8] specificity [47]. The efficacy of the Rotarix™ vaccine against G2 strains is somewhat lower than against other genotypes, while data pertaining to G12 strains are not yet available [47]. In addition, vaccine introduction has modified the epidemiological patterns of circulating strains in certain regions, such as the upsurge of G3 strains in the United States and Australia after the introduction of RotaTeq™ [48-50]. Therefore, further detailed epidemiological studies must be planned to monitor the vaccine response during the pre- and post-vaccination periods in the Pakistani population.
Mutations in the antigenic regions of rotavirus play an important role in the outcome of the vaccine response. When compared to the respective prototype strains, our data substantiate previous findings [51,52] that the currently circulating G9 genotypes differ from their prototypes isolated during the 1980s. There are about 15 known mutations in the antigenic regions of the G9 genotype that can modify the antigenicity of the respective region [53]. Similarly, for G1 genotypes, substitutions at positions 94, 97, 147 and 291 of the VP7 protein have been found to play a significant role in antigenic recognition [54]. With respect to G2 rotavirus genotypes, the substitution D96N (aspartic acid to asparagine) within antigenic region A was found responsible for the failure of G2-specific monoclonal antibodies to react [55]. None of these mutations was found in the Pakistani strains, except for D96N, which was present in all Pakistani G2 viruses (data not shown). Jin et al. reported that most currently circulating G1 strains and the Rotarix™ vaccine RRV-S1 strain differ in their antigenic properties, and mutations in these antigenic sites may ultimately cause vaccine failure [56].
Despite these variable epidemiological reports, it has now been established that rotavirus vaccines are equally effective against the vast diversity of rotavirus genotypes by generating a heterotypic immune response. In recent vaccine trials, both Rotarix™ and RotaTeq™ have proven equally effective in regions with high child and adult mortality [57]. Multiple reviews from high- and middle-income countries have reported a substantial reduction in disease burden within a few years of vaccine implementation, through a decreased magnitude of rotavirus-associated diarrhea and deaths [58,59]. For instance, studies in Mexico and Brazil reported a reduction in diarrhea-related deaths in infants and young children after the introduction of rotavirus vaccine [60,61]. Rotarix™ efficacy has been evaluated in a large clinical trial of more than 63,000 infants from 11 Latin American countries and Finland, and the vaccine was found to be safe and highly immunogenic [62,63]. Similarly, in a randomized, double-blind, placebo-controlled study conducted in six European countries, Rotarix™ was observed to be highly immunogenic [64]. RotaTeq™ efficacy has also been evaluated in two phase-III trials among healthy infants, including a large clinical trial of more than 70,000 infants enrolled primarily in the United States and Finland, and was found to be highly immunogenic [65,66]. Finally, in April 2009, the World Health Organization's (WHO) Strategic Advisory Group of Experts (SAGE) on immunization recommended the inclusion of rotavirus vaccination of infants in all national immunization programs, with a stronger recommendation for countries where diarrheal deaths account for ≥10% of mortality among children aged <5 years [67]. In addition, it has been emphasized that the introduction of rotavirus vaccine should be accompanied by measures to ensure high vaccination coverage and timely administration of each vaccine dose.
Conclusion
Our findings indicate that the rotavirus genotypes circulating in different geographical areas of Pakistan are quite variable, and large-scale studies must be conducted to determine the disease burden as well as to improve the epidemiological understanding of the contributing viral genotypes. A similar diversity of prevalent rotavirus strains within the Indian subcontinent, including Pakistan, has been reported by Miles et al. [41]. The annual birth rate in Pakistan is approximately five million; the right rotavirus vaccine would greatly help to protect newborns from this serious disease and its associated mortality. Although our report does not provide a thorough representation of the prevailing genotypes throughout the country, it offers significant information for health policy makers to review and implement an informed immunization policy.
Examination of Physical Activity Patterns of Children, Reliability and Structural Validity Testing of the Hungarian Version of the PAQ-C Questionnaire
Introduction: Several studies report on the importance of physical activity (PA) in childhood, which influences attitudes towards health in adulthood. Trustworthy measurement tools are needed for monitoring PA. This study aimed to adapt the Physical Activity Questionnaire for Children (PAQ-C) to the Hungarian language and assess its validity, reliability, and factor structure. Methods: A total of 620 children (mean age 10.62 years, SD 2.36) participated in the cross-sectional study. The PAQ-C questionnaire was used to assess physical activity. The collected data were analysed using IBM SPSS version 28.0 and IBM SPSS AMOS 29.0 software. Results: The internal consistency was acceptable (alpha = 0.729), and the test-retest reliability showed acceptable agreement (ICC = 0.772). Confirmatory factor analysis favoured a one-factor structure of the questionnaire. The average PAQ-C score for girls was 2.87 (SD 1.07) and for boys 3.00 (SD 1.05), a significant difference (p = 0.005). Discussion: Our study confirmed the one-factor PAQ-C questionnaire as a valid and reliable measurement tool for testing the physical activity patterns of primary school children in a Hungarian sample. Further research is needed to develop the physical activity monitoring of Hungarian children.
Introduction
Physical inactivity is associated with a significant risk of cardiovascular and chronic diseases such as obesity, hypertension, and diabetes mellitus, musculoskeletal conditions such as osteoporosis and osteoarthritis, and mental health problems [1]. According to the World Health Organization (WHO), in 2003, about 60% of the world's population led inactive lifestyles, the consequences of which placed a heavy burden on healthcare systems worldwide [2].
Several research findings report on the importance of physical activity in childhood, which influences attitudes towards health in adulthood [3-5]. Physical activity (PA) in youth has several benefits, including improved cardiorespiratory fitness, normal muscle strength, and better mental status, and it reduces the risk of obesity and certain metabolic diseases [6]. In addition, physical activity at a young age has been shown to have an impact on the development of cognitive function and, thus, academic performance [7].
The recommendations for physical activity levels in children vary slightly between the WHO and the European and American guidelines. The WHO recommends that children and adolescents aged 5-17 engage in at least 60 min of moderate-to-vigorous-intensity physical activity daily. This can include active play, sports, recreational activities, and structured exercise. Additionally, the WHO suggests incorporating activities that strengthen muscles and bones at least three times a week. The European guidelines align closely with the WHO recommendations. They recommend that children and adolescents aged 5-17 get a minimum of 60 min of moderate-to-vigorous-intensity physical activity daily. The guidelines emphasize various activities, including aerobic exercise, muscle-strengthening activities, and activities promoting bone health. However, the American guidelines provide more specific recommendations based on age groups; for children and adolescents (6-17 years), the recommendation is at least one hour of moderate-to-vigorous-intensity physical activity daily. This should include a combination of aerobic exercises and muscle-strengthening and bone-strengthening activities [8,9].
Several studies have shown that a significant proportion of children in Europe do not meet the WHO's recommended physical activity levels or are inactive. The European Union-funded Healthy Lifestyle in Europe by Nutrition in Adolescence (HELENA) study examined adolescents' physical activity levels across several European countries. The study found that many adolescents fell short of the recommended 60 min of daily moderate-to-vigorous-intensity physical activity and were sedentary for more than 9 h/day [10].
Similarly, the Global Matrix 3.0, which assesses physical activity levels and behaviours in children and youth across multiple countries, including several European nations, reported low levels of physical activity among children and highlighted the need for common monitoring tools to enable better comparison across countries [11].
The Health Behaviour in School-aged Children (HBSC) international survey conducted in 2017/18 provided valuable insights into the health behaviours of school-aged children across various countries. The HBSC survey is conducted every four years. It involves gathering data on multiple aspects of young people's health and well-being, including physical activity, sedentary behaviours, nutrition, mental health, and social relationships. The survey typically covers various countries, providing a comparative analysis of health behaviours among school-aged children. The study found that higher levels of social media use were associated with an increased likelihood of experiencing mental health problems among school-aged children. Specifically, excessive social media use was linked to higher levels of psychological distress, depressive symptoms, and poor self-esteem [12].
Adequate tools are needed to monitor physical activity levels accurately, but this can be challenging in many cases, given that physical activity is a complex, multidimensional human behaviour [13,14]. There are many objective measures of physical activity levels, but these have yet to be shown to be applicable to large study samples [15]. For all these reasons, self-completion questionnaires and physical activity questionnaires (PAQs) are becoming increasingly common, as they are inexpensive, versatile, and easy-to-use measurement tools. However, PAQs have limitations: children may have difficulty understanding the questions, and calculating the time spent physically active within one week can be difficult for them, which may lead to overestimation of physical activity levels [16,17].
There are several questionnaires for assessing physical activity in children, such as the Health Behaviour in School-Aged Children Questionnaire and the Youth Activity Profile [18-21]. However, the most frequently used PAQ for children is the Physical Activity Questionnaire for Older Children (PAQ-C), ranked as one of the few self-report instruments with acceptable validity, reliability, and practicality for assessing physical activity levels among children. The PAQ-C questionnaire was developed in Canada in 1997 by Kowalski et al. and was used in the ALPHA study; it can therefore provide a high standard for comparisons beyond European standards [22-24]. The questionnaire measures the average weekly physical activity patterns of children aged 8-14 years during the school year (excluding summer holidays), covering sports activities, leisure-time activities, physical education classes, and activity during the school day.
In previous studies, the PAQ-C questionnaire showed acceptable validity and reliability. The measurement tool has been adapted and validated in several languages, including German, Dutch, Greek, Italian, Croatian, Chinese, Czech, Spanish, Japanese, Turkish, and Saudi Arabian. These studies investigated the internal consistency of the questionnaire, and several also assessed test-retest reliability. The convergent validity of the PAQ-C compared with accelerometer results (generally measured by the Actigraph GT3X) showed moderate correlation and acceptable reliability between PAQ-C scores and weekly minutes of vigorous, moderate, or moderate-to-vigorous physical activity (MVPA), metabolic equivalent (MET) minutes, or daily steps. Furthermore, the measurement tool's structural validity was examined using confirmatory factor analysis (CFA) in the Saudi Arabian and Turkish validation studies, the first of which confirmed a one-factor structure and the second a two-factor structure of the PAQ-C [25-33].
Before the current study, there was no validated subjective measurement to examine the PA patterns of children in Hungary. The HBSC study also measured Hungarian children's physical activity patterns and found that less than 20% of them met the recommended level of PA [34]. However, the last decade has seen significant changes in the monitoring of children's activity in Hungary. The National Standardized Student Fitness Test (NETFIT) system examines nine measurements across four fitness profiles to characterize students' physical fitness, strength, flexibility, and body composition. NETFIT is used by more than 3700 schools, 800,000 children, and 13,000 teachers. The current research complements the existing monitoring of children's physical fitness with the tracking of physical activity patterns [35,36].
This study aimed to adapt the PAQ-C to the Hungarian language for assessing physical activity among children and to determine the measurement tool's psychometric properties by testing its validity, reliability, and factor structure.
Materials and Methods
A cross-sectional survey was conducted with the participation of primary school students. The data were collected from January 2020 to March 2023 in the south-Danubian region of Hungary.
Convenience sampling was applied. The calculation of the minimum sample size was based on previous studies [30,37], a ratio of ten participants per item for CFA [33], and a recommendation of 300-500 participants [32]. We recruited male and female school-going children aged 7-14 years from 7 public primary schools. Regarding exclusion criteria, children with disabilities affecting physical activity were excluded from the sample.
In total, 650 children were recruited into the study, and the final sample consisted of 620 participants involved in the final statistical analysis in Phase 1. Thirty participants were excluded because of age requirements, sick leave, or missing data. From the recruited sample, 20 children were involved in the test-retest measurement in Phase 2 of the research; all of them filled out the questionnaire again after 7 days and were included in the final analysis. The sample and recruitment procedure are summarised in a flow chart (Figure 1).
Ethical Approval and Consent to Participate
Participation in the research was voluntary. All participants and their parents were informed about the details of our study on a written informed consent form. Children were able to enter the study after written parental consent.
The study was approved by the ETT TUKEB, Hungary (15117-9/2018/EÜIG). All methods were carried out under the relevant guidelines (Beaton's guidelines and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guidelines and regulations) [38,39]. The data were processed anonymously and confidentially based on the Data Protection Act of Hungary. The research was conducted in accordance with the principles of the Declaration of Helsinki.
Assessment Tools
PAQ-C
The PAQ-C questionnaire is a ten-item, self-administered, 7-day recall questionnaire for children aged 8-14: nine activity items plus one question about the health status of the child. The first question presents a list of activities, each rated from 1 to 5 according to how often the child participated in it, and an average score is calculated from these ratings. The following eight questions are also rated from 1 to 5. The total score of the measurement tool is the mean of the nine activity items, where a higher value indicates a higher level of moderate- and vigorous-intensity physical activity [23].
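A minimal scoring sketch following this description is given below; the respondent's ratings are made-up illustrations, and the helper name is our own.

from statistics import mean

def paq_c_total(activity_checklist, items_2_to_9):
    """Composite PAQ-C score on a 1-5 scale; higher means more active."""
    assert len(items_2_to_9) == 8, "items 2-9 expected"
    item1 = mean(activity_checklist)           # question 1: checklist average
    return mean([item1] + list(items_2_to_9))  # mean of the nine activity items

# Example respondent: a sparse activity checklist and moderate item ratings
print(round(paq_c_total([1, 1, 2, 1, 1, 3], [4, 3, 3, 2, 3, 3, 2, 3]), 2))  # 2.72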
Demographic Questions and Body Composition
The study recorded the gender, age, parents' education, and general health status of the participants. Trained physiotherapists measured the body composition of the students using an InBody 770 (InBodyUSA, Cerritos, CA, USA); weight was measured in kg and height in cm. Body mass index (BMI, kg/m²), body fat, body fat %, and skeletal muscle index were calculated.
Adaptation and Validation Procedure
The Hungarian version of the PAQ-C questionnaire was obtained from the original author of the questionnaire via email [23]. First, the PAQ-C questionnaire was adapted to the Hungarian language using Beaton's guidelines. After pilot testing (N = 20) of the new measurement tool, the psychometric properties of the questionnaire were measured. The language adaptation was developed by independent professional translators from the fields of health science and education, who then prepared a back translation. Based on the developed versions of the PAQ-C, and with the agreement of the translators, the research committee finalized the questionnaire. The expert committee consisted of experts in physical activity monitoring, physical education (PE) teachers, physiotherapists, English and Hungarian language teachers, and statisticians.
The activity list of the questionnaire was modified: a few activities were deleted (street hockey), and cross-country skiing was changed to skiing/snowboarding. In the pilot phase of the validation procedure, a group of students (aged 8-14 years, N = 20) filled out the PAQ-C, and the measurement tool was refined based on the results of the pilot testing.
The Flesch reading index was applied to test the readability of the PAQ-C. Scores range from 0 to 100, and a value of more than 60 means participants can readily understand the tool [40]. Based on the reading score, the PAQ-C questionnaire was considered an easily readable measurement tool (Flesch Reading Ease Score = 97.8).
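For reference, the Flesch Reading Ease score for English text is computed as shown below; language-specific adaptations (such as one for Hungarian) typically recalibrate the two weights, so this should be read as the standard English version.

\[ \mathrm{FRE} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}} \]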
To examine the test-retest reliability of the measurement tool, one primary school was selected, and 20 participants were recruited from several school classes in Phase 2 of our study. The participants filled in the questionnaire twice, repeating the PAQ-C 7 days after the first measurement.
Statistical Analysis
For descriptive statistics, we utilized various statistical calculations, including minimum, maximum, mean (± standard deviation (SD)), and median (interquartile range). The descriptive analysis of the measured items was carried out separately for the total sample and for girls and boys. CFA was used to test the factor structure and structural validity of the PAQ-C questionnaire. The applied fit indexes were the chi-square test, the chi-square/degrees of freedom (df) ratio, the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA). Based on previous research and recommendations, the criteria were as follows: a χ²/df ratio of less than 3 indicated a good fit; RMSEA < 0.05 was excellent and 0.05-0.08 acceptable; TLI and CFI above 0.95 were excellent and above 0.90 acceptable [31,41].
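These cut-offs can be captured in a small helper like the one below; the index values passed in would come from the AMOS output, and the numbers in the example call are made up.

def rate_fit(chi2_df, cfi, tli, rmsea):
    """Classify model fit against the thresholds used in this study."""
    grade = lambda v, exc, acc: ("excellent" if exc(v)
                                 else "acceptable" if acc(v) else "poor")
    return {
        "chi2/df < 3": chi2_df < 3,
        "CFI": grade(cfi, lambda v: v > 0.95, lambda v: v > 0.90),
        "TLI": grade(tli, lambda v: v > 0.95, lambda v: v > 0.90),
        "RMSEA": grade(rmsea, lambda v: v < 0.05, lambda v: v <= 0.08),
    }

print(rate_fit(chi2_df=2.4, cfi=0.93, tli=0.91, rmsea=0.06))
# {'chi2/df < 3': True, 'CFI': 'acceptable', 'TLI': 'acceptable', 'RMSEA': 'acceptable'}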
To measure the internal consistency of the questionnaire, Cronbach's alpha was calculated; values above 0.70 were considered acceptable [42,43].
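Cronbach's alpha follows directly from the item variances; below is a standard implementation run on a fabricated respondents-by-items matrix (the data and seed are illustrative only).

import numpy as np

def cronbach_alpha(scores):
    """scores: array of shape (n_respondents, k_items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))                        # shared "trait"
toy = np.clip(base + rng.integers(-1, 2, size=(50, 9)), 1, 5)  # 9 related items
print(round(cronbach_alpha(toy.astype(float)), 3))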
To examine test-retest reliability, the intraclass correlation coefficient (ICC) for absolute agreement was calculated with a 95% confidence interval (95% CI) using a two-way mixed-effects model. ICC results above 0.80 were considered excellent [44,45].
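One readily available implementation of this ICC is pingouin's intraclass_corr, sketched below on simulated test-retest data; the data-generating step and column names are our own, and the absolute-agreement, single-rater coefficient corresponds to the ICC2 row of the output.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
true_score = rng.uniform(1.5, 4.5, 20)   # each child's latent PAQ-C level
df = pd.DataFrame({
    "child": np.tile(np.arange(20), 2),
    "time": np.repeat(["test", "retest"], 20),
    "paqc": np.concatenate([true_score, true_score]) + rng.normal(0, 0.3, 40),
})
icc = pg.intraclass_corr(data=df, targets="child", raters="time", ratings="paqc")
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])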
The discriminant validity of the questionnaire between the male and female subgroups was tested using the Mann-Whitney U test on the difference in the PAQ-C total score by gender [47,48].
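A sketch of this gender comparison with scipy is given below; since the raw scores are not available here, the arrays are simulated from the group means, SDs, and sizes reported in the Results.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
girls = np.clip(rng.normal(2.87, 1.07, 282), 1, 5)  # ~45.48% of N = 620
boys = np.clip(rng.normal(3.00, 1.05, 338), 1, 5)   # ~54.52% of N = 620
u_stat, p_value = mannwhitneyu(girls, boys, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")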
The statistical analysis of the study was performed using IBM SPSS version 28.0 and IBM SPSS AMOS 29.0 (SPSS Inc., Chicago, IL, USA) software. The significance level was set at p < 0.05.
Results
The average age of the respondents was 10.62 (SD 2.36) years, and the average BMI was 18.46 (SD 3.71) kg/m². A total of 45.48% of the students were girls and 54.52% were boys. The parents' education level was relatively high: 60.12% of mothers and 50.81% of fathers had college or university degrees. In addition, 84.19% of the children lived in cities.
The main characteristics of the sample are summarized in Table 1.
In our study, we also measured the body composition of the sample; Table 2 presents the results separately for male and female participants and for the total sample. Significant differences were found in body fat (p < 0.001), skeletal muscle index (p = 0.005), and body fat % (p < 0.001) by gender.
Physical Activity Patterns
The average PAQ-C score was 3.00 (SD 1.05). The descriptive analysis of the PAQ-C questionnaire's items for male and female participants and for the total sample is presented in Table 3.
Internal Consistency and Test-Retest Reliability
The Cronbach alpha value of the total sample was at an acceptable level, 0.729. The test-retest analysis showed acceptable reliability: the ICC value of the total PAQ-C score was 0.772 (95% CI 0.373-0.841). Together, the two reliability measures indicated good reliability for the Hungarian version of the PAQ-C (Table 4).
Structural Validity of PAQ-C
To measure the structural validity of the questionnaire, a CFA was conducted. The examined model specifications resulted in high and significant χ² values due to the relatively large sample size. Based on the previous literature, a bifactorial structure (Model 1) was also tested, which showed an only partly acceptable factor structure. After comparing the goodness-of-fit indicators of the models, the one-factor model was justified, showing a good fit for the Hungarian sample (Table 5).
Concurrent Validity of PAQ-C Questionnaire
The concurrent validity of the PAQ-C was measured using Spearman's rank correlation (Spearman's r) between the PAQ-C total score and the body composition indexes. A significant weak correlation was found between body fat, body fat %, and skeletal muscle index and the PAQ-C score in the total sample and among male participants, while there was no significant association between body fat, body fat %, and the PAQ-C score among female participants (Table 6).
Discriminant Validity
The Hungarian version of the PAQ-C questionnaire showed a significant difference between girls and boys (p = 0.005): the average PAQ-C score for girls was 2.87 (SD 1.07), and for boys it was 3.00 (SD 1.05).
Discussion
The current study provided the first valid measurement tool to examine the physical activity patterns of Hungarian children. The examination of the psychometric properties showed a good-fit factor structure for the unifactorial PAQ-C questionnaire and acceptable internal consistency and test-retest reliability.
The translation procedure was performed based on the guidelines and previous studies. The questionnaire was successfully adapted to the Hungarian language with minor modifications to the original questionnaire, affecting only the first question's activity list.
The descriptive analysis of the questionnaire showed a mean value of 3.00 (SD 1.05) for the Hungarian total sample. The relatively high mean values compared with other studies were due to the children's increased physical education mean value, which was 4.17 (SD 0.99). The Chinese (4.04, SD 0.80) and Turkish (4.52, SD 1.00) studies found similarly high average values of more than 4 points [28,30]. The second-highest activity score was 3.23 (SD 1.26), for the after-school activities. The lowest value, 1.01 (SD 0.12), was measured for the spare-time activity list, similarly to previous studies; the Hungarian children selected only a few types of activities from the list on the questionnaire [28,30].
The internal consistency of the questionnaire, as measured by Cronbach's alpha, was acceptable, similar to the results of the other studies. According to the criteria applied by George and Mallery, Cronbach values between 0.70 and 0.80 are acceptable [47]. In previous studies, the test-retest reliability of the questionnaire was measured at between 0.70 and 0.85 (Table 7) [22,30-32].
Examining the factor structure of the questionnaire based on previous studies, the uni- and bifactorial structures were tested by confirmatory factor analysis. Similar to Sirajudeen et al.'s study, our findings proved the unifactorial model of the measurement tool to have an acceptable fit [33].
The concurrent validity of the PAQ-C questionnaire was tested in association with the body composition data, where weak correlations were found between body composition and the questionnaire. The findings showed that more active children had significantly higher skeletal muscle indexes and lower body fat and body fat % indexes. We found results similar to those of Isa et al., where body mass index (r = 0.09) and body fat (r = 0.19) were significantly correlated with the PAQ-C score [32].
The discriminant validity analysis proved gender differences, with boys showing significantly higher activity scores than girls (p < 0.001); the average PAQ-C score for girls was 2.87 (SD 1.07), and for boys it was 3.00 (SD 1.05). Our results showed patterns similar to the study by Gobbi et al. (Table 7) [26]. The Global Matrix 3.0 highlighted the need for increased efforts to promote physical activity in European children and identified areas for improvement in policies and initiatives related to physical activity promotion [11]. Based on the Global Matrix 4 results, 50% of Hungarian children spent enough time doing outdoor activities, while the screen-time recommendation of no more than 2 h a day was fulfilled by only 28.9% of children on weekdays and 32.7% at weekends [48].
Lang et al. explained in their study the importance of physical fitness measurement in the monitoring of children. Longitudinal monitoring and surveillance of children's physical fitness are essential, especially because this information provides a baseline for decision-making procedures, and valid and reliable measurement tools are needed for such monitoring [49]. Ács et al.'s studies also aimed at developing the monitoring of PA through validation of the most important subjective PA measurement tools for the Hungarian adult population [16,17]. The Hungarian NETFIT system was developed a decade ago to monitor the physical fitness of children using different fitness tests. Furthermore, an important factor in students' activity patterns is that, since 2012, everyday physical education has been applied in all primary and secondary schools in Hungary: all children must participate in physical education classes daily [50].
Measuring children's physical activity is essential for several reasons. It allows for the monitoring of overall health and well-being by assessing whether children meet recommended activity guidelines and maintain an active lifestyle. Additionally, measuring physical activity helps inform the development of interventions and policies aimed at promoting physical activity among children. It provides valuable data for evaluating the effectiveness of programs and interventions, guiding improvements for future initiatives.
In the cross-sectional study by Pogrmilovic et al., the researchers aimed to assess the availability, comprehensiveness, implementation, and effectiveness of national physical activity (PA) and sedentary behaviour (SB) policies worldwide. Data were collected from 76 countries, with a response rate of 44%. The findings revealed that 92% of the countries had formal written policies for PA, while 62% had policies for SB. Additionally, 62% of countries had national PA guidelines, while 40% had SB guidelines. Only 52% of countries had quantifiable national targets for PA, and 11% had targets for SB [51]. The ministries/departments most involved in promoting PA and reducing SB were those related to sports, health, education, and recreation/leisure. The comprehensiveness of PA policies received a median score of four out of ten, while SB policies scored two. The implementation score for PA and SB policies was six, while the effectiveness score was four for PA and three for SB. Overall, PA policies were better developed and implemented in high-income countries, European countries, and countries in the Western Pacific region compared to low- and lower-middle-income countries. The study concludes that there is a need for increased investment in developing and implementing comprehensive and effective PA and SB policies, particularly in low- and lower-middle-income countries.
The measurement also helps us understand the patterns, contexts, and influences of children's activity behaviours, identifying factors that promote or hinder their participation in physical activity. Furthermore, accurate measurement contributes to scientific knowledge and research in the field, enabling comparisons across studies and a deeper understanding of the relationship between physical activity and various health outcomes in children. The PAQ-C questionnaire could be a useful part of research examining the health behaviour of children, assessing the association between health status and health determinants such as physical activity, and understanding the physical activity patterns of schoolchildren, not only for researchers but also for physical education teachers seeking to improve physical fitness and motivate students to follow an active lifestyle. Overall, measuring children's physical activity is crucial to promoting their health, informing interventions, and advancing scientific knowledge in the field [52].
Limitations of the Study
The study has several limitations. The PAQ-C questionnaire measures physical activity patterns but not the time spent physically active. Furthermore, our study did not compare PA patterns with objective measurement tools. An objective measurement of PA was also planned, but due to the COVID-19 lockdowns and their consequences, the accelerometer measurements could not be completed. Consequently, only Phases 1 and 2 of our study were conducted, measuring internal consistency, test-retest reliability, and structural, concurrent, and discriminant validity. The questionnaires were filled out by the children using self-report methods, with the support of a group of experts. The sample was not based on random selection. Furthermore, seasonal changes in PA patterns were not tested.
Conclusions
Our study provides the Hungarian version of the PAQ-C questionnaire, which is appropriate for measuring the physical activity patterns of 7-14-year-old children. The questionnaire could be a useful measurement tool for extensive population studies. The findings showed acceptable internal consistency and test-retest reliability, and the confirmatory factor analysis showed the best fit for the one-factor model of the PAQ-C questionnaire. The newly adapted and tested measurement tool could support the development of new self-report questionnaires to measure children's physical activity levels and provide more relevant information about the health behaviours of the target group.
Figure 1. Flow chart of sample and recruitment.
Table 1. Summary of the main characteristics of the sample (N = 620).

Table 3. Descriptive analysis of the PAQ-C questionnaire.

Table 4. Internal consistency and test-retest reliability of the PAQ-C questionnaire.

Table 5. Structural validity fit indexes of the uni- and bifactorial structures of the PAQ-C.

Table 6. Concurrent validity of the PAQ-C total score and body composition indexes by gender (N = 620).

Table 7. Summary of previous studies' findings on the PAQ-C questionnaire validation procedure.
Effects of 10% and 15% Carbamide Peroxide on Extrinsic Enamel Discoloration caused by Black Tea
To determine the effects of 10% and 15% carbamide peroxide on extrinsic enamel discoloration caused by black tea, thirty-two extracted human premolars, randomly divided into two groups, were soaked in a black tea solution for eight days and then mounted in microwax. The specimens were subjected to 14 applications of bleach, each of seven hours' duration. Color changes after the 7th and 14th applications were measured using the CIE L*a*b* method with VITA Easyshade®. Independent t-tests showed no significant difference in the color change value between the groups. The data showed no difference in the effectiveness of bleaching with 10% or 15% carbamide peroxide on extrinsic enamel discoloration caused by black tea.
Introduction
Tooth discoloration can affect the enamel surface alone or extend to other tooth structures such as the pulp. Many causes of tooth discoloration have been identified, such as consumption of the antibiotic tetracycline during tooth development, excessive fluoride intake, or the physiological aging process. The main cause of discoloration of the enamel surface is the habitual consumption of foods or drinks that can stain the enamel surface [1]. Extrinsic discoloration can be due to agents present in drinks such as coffee, tea, wine, and soft drinks [2].
Tea is a beverage obtained by processing the leaves of the tea plant, Camellia sinensis, belonging to the family Theaceae. Tea is the second most frequently consumed drink after water [3]. In 2013, the domestic consumption of tea in Indonesia reached 6,153 kg per capita per year [4]. Therefore, the high consumption of tea in Indonesia could be one of the predisposing factors in tooth discoloration.
Several methods of tooth whitening have been developed and are currently available, including in-office bleaching performed by dentists and home bleaching performed by patients under the supervision of dentists [5]. Home bleaching is preferred by patients because of its ease of use and affordability compared to in-office bleaching, and most commonly uses carbamide peroxide [6]. Carbamide peroxide is a strong oxidizing agent that can break down stain molecules attached to the teeth [7], resulting in a whiter tooth color. However, carbamide peroxide can also affect the roughness of the enamel surface, decreasing enamel hardness and increasing tooth sensitivity to temperature, which may cause pain [8].
The concentration of carbamide peroxide considered safe and effective as a bleaching agent is 10-20%. Some studies have shown that factors such as the concentration, thickness of application, or viscosity of carbamide peroxide may affect the outcome of tooth whitening [9]. However, the effects of different concentrations of carbamide peroxide used in home bleaching on extrinsic tooth discoloration caused by black tea are still unknown. To address this question, we studied the effects of 10% and 15% carbamide peroxide on extrinsic tooth discoloration caused by black tea.
Materials and Methods
This study was conducted from August to October 2014 at the Dental Material Laboratory, Faculty of Dentistry, Universitas Indonesia. Thirty-two extracted, caries-free human premolars were used. First, the specimens were cleaned and immersed in saline solution; colorless nail polish was then applied to the root, from the cementoenamel junction to the apex. Two bags of Sosro® brand black tea (four grams) were dipped in a glass beaker containing 320 ml of boiling water, with a three-minute brewing time. The pH of the solution was measured by immersing pH indicator strips into it. Next, the black tea was poured into pots, making the volume up to 10 ml in each. The tooth specimens were immersed in the black tea solution for 24 h and then incubated at 37 °C. The black tea solution was replaced daily with an identical tea solution for eight consecutive days. The tooth specimens were randomly divided into two groups: one group was treated with 10% carbamide peroxide and the other with 15% carbamide peroxide.
The initial color was measured using the VITA Easyshade® device. For each measurement, teeth were mounted in microwax with the buccal surface facing up, and the specimens were placed on white HVS paper layered with a rag. The color was measured once for each specimen using the tip of the VITA Easyshade® probe covered in plastic wrap, placed perpendicular to and in contact with the central one-third of the buccal surface to obtain the L*0, a*0, and b*0 data.
Carbamide peroxide was applied to the buccal surface of the teeth and flattened with a brush to a layer approximately 0.5 mm thick. All specimens were covered with a drug strip attached to the tooth surface, placed in a tray, and incubated at 37 °C for seven hours. Next, the specimens were removed from the incubator, cleaned with tissue, and rinsed with distilled water (aquadest) until no carbamide peroxide remained. All specimens were then placed in a dry pot. The color was measured after the 7th application to obtain the L*1, a*1, and b*1 data, and after the 14th application to obtain the L*2, a*2, and b*2 data, following the same procedure used in the initial color measurement.
All L*, a*, and b* data were processed to obtain the values of ΔE*, ΔL*, Δa*, and Δb* in each group. Statistical analyses were performed using independent sample t-tests on normally distributed and homogeneous data to determine the differences in color changes between the groups. The ΔE*, ΔL*, Δa*, and Δb* values in each group at different applications were analyzed using paired sample t-tests on normally distributed data to determine the differences in color changes between the 7th and 14th applications. Because the color change value (ΔE*) cannot indicate the direction of color change, the mean values of L*, a*, and b* were analyzed using repeated measures ANOVA to determine the direction of the color changes.
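The study does not spell out the ΔE* formula; presumably it is the standard CIE76 color difference, i.e. the Euclidean distance between two points in L*a*b* space. A minimal sketch is given below; the input triplets are made-up illustrations, not measured values from this study.

import math

def delta_e(lab_before, lab_after):
    """CIE76 Delta-E: Euclidean distance in CIE L*a*b* space."""
    return math.dist(lab_before, lab_after)

# e.g. a stained tooth (L*0, a*0, b*0) vs. after bleaching (L*1, a*1, b*1)
before = (62.0, 4.5, 24.0)
after = (70.5, 2.1, 19.8)
print(round(delta_e(before, after), 2))  # larger Delta-E = bigger color change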
Results
Visual observations showed that after immersion in the black tea solution, the color of all teeth became dark and brownish. After the 7th application of bleach, the tooth color in both groups became lighter and whiter. Further, the tooth color in both groups after the 14th application of bleach was lighter and whiter than that after the 7th application. The average values of ΔE after the 7th and 14th applications of bleach are shown in Table 1. Both groups showed increases in ΔE after the 7th and 14th applications of bleach, with paired sample t-tests confirming that both 10% and 15% carbamide peroxide resulted in a significant increase.
The mean ΔE values in the 15% carbamide peroxide group after the 7th day of application were higher than those in the 10% carbamide peroxide group. However, after the 14th day of application, the ΔE values in the 15% carbamide peroxide group were lower than those in the 10% carbamide peroxide group. These values were then analyzed using independent sample t-tests and no significant difference was found between them.
The mean values of L* after the 7th and 14th applications of bleach are shown in Table 2. Both groups showed an increase in the mean value of L* after the 7th and 14th applications of bleach compared to that immediately after discoloration, with repeated measures ANOVA confirming a significant difference.
The values of ΔL* after the 7th and 14th applications of bleach are shown in Table 3. Independent sample t-tests showed no significant difference between the two bleaching groups. Paired sample t-tests showed significant differences in L* value change after the 7th and 14th applications in both groups. The mean values of a* after the 7th and 14th applications of bleach are shown in Table 4. When the a* value was initially measured, immediately after the teeth were immersed in the black tea solution, the mean value of a* was high. After the 7th and 14th applications, the mean values of a* in each group were at the +a* coordinates, which lie in the red region. Repeated measures ANOVA showed a significant difference in the mean value of red-green chroma (a*) between the groups. A decrease in the mean value of red-green chroma indicates a color closer to neutral.
The values of Δa* after the 7th and 14th applications of bleach are shown in Table 5. Table 5 shows the differences in the values of red-green chroma changes (Δa*) in the groups. Independent sample t-tests showed no significant difference between the two bleaching groups. Paired sample t-tests showed significant differences (decreases) in red-green chroma values after the 7th and 14th applications in both groups.
The mean values of b* after the 7th and 14th applications of bleach are shown in Table 6. In each bleaching group, the mean initial value of b* was at the + b* coordinates, which indicates that all specimens were in the yellow chroma. After the 7th and 14th applications, there were decreases in the mean value, but these were still in the yellowish range. Repeated measures ANOVA showed significant differences in the yellow-blue chroma values (b*) between the groups, but pairwise comparisons showed no significant decrease between the initial measurement and the measurement after the 7th application. The values of Δb* after the 7th and 14th applications of bleach are shown in Table 7. Table 7 shows the differences in the values of yellow-blue chroma change (Δb*) in the groups. Independent sample t-tests showed no significant difference between the two bleaching groups. Paired sample t-tests showed significant differences (decreases) in yellow-blue chroma values after the 7th and 14th applications in both groups.
Discussion
After the immersion of the tooth specimens in the black tea solution and the formation of stains on their buccal surfaces, both groups showed a low L* value, indicating a low brightness level (darker color) of the specimens. The a* and b* values lay in the positive and high ranges in both bleaching groups, indicating that the color of the teeth became more reddish and yellowish [10]. These reddish and yellowish changes are related to pigments in black tea: thearubigins (red), theaflavins (yellow) [11], and tannins (dark brown) [12]. The attachment of pigments to the tooth surface leads to low brightness (L*) due to light being absorbed by the pigments [13].
After bleach application, all color components (L*, a*, b*) changed in both groups (L* increased, and a* and b* decreased). Carbamide peroxide thus succeeded in reducing the black tea stains on the tooth surface by oxidation. H₂O₂ (in carbamide peroxide) breaks down into HO₂ and O; HO₂ binds to black tea stain pigment molecules (tannins), interfering with electron binding and altering energy absorption by the organic molecules in enamel, and forming simple molecules that do not reflect light. Consequently, the brightness of the teeth increases in clinical measurements [1,14].
The present study found that after the 7th application of bleach, significant color changes (ΔE) occurred in both bleaching groups. The mean values of ΔE in the 15% carbamide peroxide group were higher than those in the 10% carbamide peroxide group. However, statistical analysis showed no significant differences between these values, indicating that both concentrations had the same effectiveness in teeth whitening. After the 14th bleach application, the mean ΔE values in the 15% carbamide peroxide group were lower than those in the 10% carbamide peroxide group, but statistical analysis showed no significant differences.
Other studies have shown that 15% carbamide peroxide yielded the same final color change as did a concentration of 10% after 14 days of application [15]. However, 15% carbamide peroxide has been shown to provide a faster color change than 10% [15,16].
Both concentrations of carbamide peroxide resulted in a change in the ΔE value after the 7th and 14th days of application. The ΔE value was the difference in each color component (L*, a*, b*) between the initial status and after bleach application. The mean ΔE value after the 14th day of bleaching was higher than that after the 7th day of bleaching at both peroxide concentrations, with statistically significant differences, indicating that both concentrations were more effective after the 14th application than after the 7th application. The continued application of carbamide peroxide further reduced the black tea stain pigments attached to the tooth surface, although the incremental change in ΔE diminished [14]. This diminishing change in ΔE is due to the depletion of pigmented organic compounds [1]. During the initial bleaching of pigmented compounds, carbon ring bonds are broken and transformed into simpler and less light-reflecting chemical bonds [1]. As the bleaching process continues, the compounds will attain saturation and become ineffective [1]. Thus, timing is an important factor in tooth whitening. As more bleaching material is applied to teeth, tooth color becomes brighter, but as the bleaching process continues, the bleaching material will become saturated and ineffective.
Both bleaching groups showed significant changes in color in terms of all three components (L*, a*, b*). The value component (brightness level) was higher in both the 10% and 15% carbamide peroxide groups after the 7th and 14th days of application, indicating that both concentrations were equally effective at stain removal by oxidation [1]. This value is expected to increase as the amount of light reflected increases [17]. If color pigments were accumulated on the tooth surface, the amount of light reflected would be less because the light would be absorbed by the pigments, causing a darker tooth color [13]. As oxidation by peroxide continues, more color pigments are oxidized and removed from the tooth surface, causing more light to be reflected and a brighter tooth color. However, the L* values did not differ significantly between the 10% and 15% concentration groups, indicating no significant increases in the tooth brightness level.
The chroma components (a* and b*) decreased in both the 10% and 15% carbamide peroxide groups after the 7th and 14th applications. The ability of bleaching materials to oxidize color pigments causes this decrease in chroma values [1]. The chroma value increases when the number of reflected wavelengths increases [17]. In the present study, the chroma value decreased, indicating that the number of reflected wavelengths decreased, causing a reduction of reddish and yellowish chroma of the teeth to a more neutral color.
The mean a* value decreased to a range of 4.81-5.88 after the 7th application, and to a range of 3.53-3.71 after the 14th application, which lie in the neutral color range [18]. The mean a* values in each group were at the +a* coordinates, which are in the reddish region, although more toward a neutral color. Thus, the process of tooth whitening causes teeth to become less reddish.
The mean b* value decreased after the 7th application, but this was not statistically significant. However, at the end of the 14th application, the b* value decreased significantly until it reached a range of 34.56-36.96, which still lies in the yellowish range because it lies within the +b* coordinates. This is due to red stain pigments (thearubigins) being more abundant than yellow stain pigments (theaflavins) in black tea [12]. Tooth whitening causes teeth to become less yellowish. However, there was no significant difference between 10% and 15% carbamide peroxide in chroma degradation, because the measured values of a* and b* did not differ between the groups.
Based on the results of this study, it can be concluded that there is no difference in effectiveness between 10% and 15% carbamide peroxide treatment of one or two weeks in whitening extrinsic tooth discoloration caused by black tea.
Conclusion
In conclusion, the present study evaluated carbamide peroxide bleaching of black tea-induced extrinsic enamel discoloration and found that the color changed toward a lighter level after 7 and 14 days of 10% or 15% carbamide peroxide application. Further, there was no difference between the effectiveness of 10% and 15% carbamide peroxide.
|
2019-04-10T13:11:58.236Z
|
2018-08-01T00:00:00.000
|
{
"year": 2018,
"sha1": "ab7d388b1a044e38c7afc8cc6af7689f8f5cee1b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1073/6/062005",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "81000165d310a5774878e8d4518ae31dd72d34c4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
}
|
247057560
|
pes2o/s2orc
|
v3-fos-license
|
Deep learning forecasting using time-varying parameters of the SIRD model for Covid-19
Accurate epidemiological models are necessary for governments, organizations, and individuals to respond appropriately to the ongoing novel coronavirus pandemic. One informative metric epidemiological models provide is the basic reproduction number (R₀), which can describe if the infected population is growing (R₀ > 1) or shrinking (R₀ < 1). We introduce a novel algorithm that incorporates the susceptible-infected-recovered-dead model (SIRD model) with the long short-term memory (LSTM) neural network that allows for real-time forecasting and time-dependent parameter estimates, including the contact rate, β, and deceased rate, µ. With an accurate prediction of β and µ, we can directly derive R₀, and find a numerical solution of compartmental models, such as the SIR-type models. Incorporating the epidemiological model dynamics of the SIRD model into the LSTM network, the new algorithm improves forecasting accuracy. Furthermore, we utilize mobility data from cellphones and positive test rate in our prediction model, and we also present a vaccination model. Leveraging mobility and vaccination schedule is important for capturing behavioral changes by individuals in response to the pandemic as well as policymakers.
In this work, we combine a compartmental model with a recurrent neural network that incorporates mobility data as well as the positive test rate. We (1) predict the time-dependent parameters β and µ using a neural network; (2) forecast the infection rates when mobility decreases or increases; and (3) forecast the change in infection rate based on different vaccination schedules. The goal of this paper is to provide a method to predict the time-varying parameters β and µ (and hence R₀) as well as to solve the SIRD equations.
The method under consideration in our paper combines the two aforementioned approaches. We first introduce a version of recurrent neural networks to predict the time-varying parameters β and µ. Since γ is assumed to be constant, one can easily find R₀ = β/γ from the neural network. We then obtain the compartments, S, I, R, and D, by numerically solving the SIRD equation over a certain time period (e.g. 7 days). To test the performance of our approach, we used publicly available data for different countries, France, the United Kingdom, Germany, and South Korea, provided by Johns Hopkins University. For more detail, we provide an illustration of the algorithm in Fig. 9. We also include two additional datasets: mobility data from cellphones and the positive test rate. Both mobility and positive test rate have been shown to influence the spread of Covid-19 considerably [22][23][24][25].
In this paper, we present an accurate computational scheme to predict the reproduction number, which enables Covid-19 forecasting. We use this scheme to forecast different scenarios by increasing or decreasing the mobility parameter. In doing so, our model can help study the effect of government-imposed lockdowns on R₀. Furthermore, we make use of a SIRD model with vaccination to see how vaccination affects the spread of the virus. Among many other vaccination models 26,27, our study focuses on the model introduced in 28,29 as it is sufficient to capture important dynamics in the experiments. By leveraging parameters relative to the vaccination rates, our simulations show how the vaccination rate affects the number of infectious cases. Such experiments can show how different public health interventions may affect the outcome of the epidemic.
Results
In this section, we describe a sequence of numerical experiments of our algorithm, further detailed in the Methodology section below. First, we present the estimated values of our time-dependent parameters β and µ using the Levenberg-Marquardt algorithm. Then, the accuracy of the algorithm is demonstrated using in-sample data and out-of-sample predictions for the next 10 weeks. Lastly, forecasting depending on mobility and vaccination rate is examined. In summary, our main contributions consist of three key findings: (i) our SIRD-LSTM combined network outperforms classical prediction models; (ii) we incorporate mobility and vaccination as inputs of our neural network to increase the accuracy of our parameter predictions; (iii) we forecast Covid-19 trends when mobility decreases or increases.
Parameter Estimates. A significant finding of our paper is that treating the parameters β and µ as time-dependent increases model accuracy. Figure 1 shows (β, µ) for four countries (France, the United Kingdom, Germany, and South Korea) generated by the Levenberg-Marquardt algorithm. From this, we can find the basic reproduction number, R₀ = β/γ, with γ = 1/14, which is useful to study the dynamics of the infectious class 30. We compare real infection data from France, the United Kingdom, Germany, and South Korea with a SIRD model using constant β or time-dependent β. Figure 2 shows the difference between constant β and µ, estimated using the Levenberg-Marquardt algorithm over one year, and β and µ estimated over just 1 week. The time-dependent model more accurately forecasts the infection rate over seven days in each country, regardless of the time period. Therefore, it is necessary to consider β and µ as time-dependent variables.

Accuracy of our model. To test the forecasting capability of the SIRD-LSTM combined network, we compare the number of predicted confirmed Covid-19 cases under various measures for within-sample scenarios. The in-sample fit of the model is an essential indicator for the validity of the model's prediction of the parameters, whereas the out-of-sample forecasts can provide an important guideline for decision/policymakers. Figure 3 depicts the prediction of the time-varying parameters (β, µ) compared with (β, µ) from the dataset. We randomly choose N_T test data amongst 365 days, and make use of them as a test set. To measure the accuracy, we use the relative L2 errors of β, µ, S, I, R, and D,

$$\mathrm{Err}(Y) = \frac{\left(\sum_{i=1}^{N_T} \left| Y^{i}_{\mathrm{true}} - Y^{i}_{\mathrm{pred}} \right|^{2}\right)^{1/2}}{\left(\sum_{i=1}^{N_T} \left| Y^{i}_{\mathrm{true}} \right|^{2}\right)^{1/2}},$$

where Y^i_true is the ith true value of β, µ, S, I, R, or D, and Y^i_pred is the ith predicted value from our algorithm. We observe that the predicted and true parameters are close to each other. Table 1 demonstrates quantitative results on the accuracy of our computation: the relative L2 error of β is between 3.13 × 10⁻³ and 6.29 × 10⁻², and the relative L2 error of µ is between 9.26 × 10⁻² and 1.73 × 10⁻¹. The relative L2 error with N_T = 14 (2 weeks) of the compartments S, I, R, and D is also displayed in Table 1. Figure 4 depicts mobility, positive test rate, cumulative infectious individuals, and contact ratio β against time. The positive test rate and cumulative infectious individuals follow similar trends, in contrast to mobility. The countries under consideration enforced lockdowns as cumulative infectious individuals increased. Hence, the trend plots reveal that greater mobility leads to an increase in infectious individuals.

Out-of-sample forecast. We next conduct an out-of-sample forecast analysis of our SIRD-LSTM combined model. Figure 5 demonstrates a prediction of R₀ for each country using β generated by the LSTM networks. By forecasting β, in Fig. 6, we show a short-term prediction of the SIRD model up to 10 weeks. In the simulation, we assume that the positive test rate and mobility are the same as the final observation from the dataset. Both the SIRD and vaccinated SIRD models are computed and demonstrated in Fig. 6. In France, Germany, and South Korea, the depicted infection curves for the next 10 weeks are increasing, while the infection curve for the next 10 weeks tends to slightly decrease in the United Kingdom.
In fact, it has been reported from various sources in May 2021 that the vaccination strategy and lockdowns in the United Kingdom were successful 31 .
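As a concrete rendering of the relative L2 metric defined above, the following sketch scores a predicted β series against a true one; the arrays are illustrative values, not the paper's data.

```python
# Minimal sketch of the relative L2 error used for beta, mu, S, I, R, and D.
import numpy as np

def relative_l2_error(y_true, y_pred):
    """||y_true - y_pred||_2 / ||y_true||_2 over the test set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.linalg.norm(y_true - y_pred) / np.linalg.norm(y_true)

beta_true = np.array([0.12, 0.10, 0.09, 0.11])   # placeholder test values
beta_pred = np.array([0.12, 0.11, 0.09, 0.10])
print(relative_l2_error(beta_true, beta_pred))
```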
Forecasting depending on mobility. Policymakers have sought to decrease the rate of infection in their populations by decreasing population mobility through lockdowns and, more recently, by increasing vaccinations. Here, we model the effect of decreasing mobility and increasing vaccination rate on the infection rate. If mobility is increased by 30% of the normal (baseline) mobility, the model shows that the peak of infectious individuals increases drastically, see Fig. 7. The data show how visits to places are changing compared to the baseline. A baseline day represents a normal value for that day of the week; the baseline day is the median value from the 5 weeks Jan 3-Feb 6, 2020; for more information, see e.g. 32. Figure 7 shows that in France, South Korea, and Germany, increased mobility results in a drastic change in the number of new Covid-19 cases. On the other hand, if mobility is decreased by 30% of normal mobility, the model predicts that the peak of infectious individuals decreases compared to the baseline mobility.
Forecasting depending on the vaccination rate. In addition, with vaccination, the Covid-19 cases are noticeably decreasing for all of the countries under study in our work. The countries whose reproduction number (R₀ = β/γ) is close to 1, such as the United Kingdom and South Korea, show a stronger vaccination effect than the other countries. Figure 8 displays forecasts of infectious cases under various vaccination schedules within 70 days. In the experiment, we assume that the vaccine is evenly distributed with respect to time. The plots reveal that high vaccination rates are important in reducing the number of infectious cases. Figure 7 shows the models' forecast for infections with different mobility levels in each country. Given mobility information, the combined SIRD-LSTM model can predict the time-varying parameters (β, µ). With those predicted parameters, the number of infectious individuals is computed with or without vaccination. Based on the projected forecasts, we observe that a continuation of quarantine-level mobility will result in low case counts.
Discussion
We introduced a novel algorithm that incorporates deep learning and compartmental models, allowing for forecasts and evaluation of the current Covid-19 outbreak worldwide. We combined the SIRD model with the LSTM network and observed advantages of real-time forecasting and parameter estimation. The new algorithm integrates the forecasting accuracy of LSTM networks with the epidemiological model dynamics of the SIR-type model. Compared to the classical SIRD model in the literature, we forecast time-varying parameters predicted by the LSTM neural network. To forecast the parameters, mobility and positive test rate data are used in the architecture. We find that these inputs are important in improving the model's ability to fit the data. In addition, incorporating these data is essential for capturing behavioral changes by individuals in response to the pandemic, as well as for observing the effect of policy decisions to increase vaccination and decrease mobility. As in other approaches, we conduct our research on publicly available datasets. We demonstrate how a new algorithm can be developed to better exploit quantitative measures in the fight against Covid-19. By utilizing reliable metrics and infection dynamics, we provide an approach that is deeply data-driven and computer-based. The proposed simulations can provide a tool for forecasting the effects of different mobility scenarios. Furthermore, as the proposed algorithm is compatible and generalizable, it allows for additional compartments in the SIR model or additional input datasets in the network, which makes the method accessible to policymakers. Our developments point towards several extensions of great importance. In particular, we evaluated the impact of the imposition and relaxation of lockdown measures by inputting these changes into the LSTM neural network. We found that employing lockdown rules for each country can help to capture interesting regional dynamics of Covid-19, and may give specific information to the policymakers. Another direction is to study how the highly nonlinear capabilities of the neural network can be used to conduct inference on latent parameters of the SIR model.
Methodology
In this section, we explore our numerical method and prediction algorithm considered in this research. To begin, we describe the compartmental models, the SIRD equations, and the Runge-Kutta method. Then, we present the Levenberg-Marquardt algorithm. Lastly, we illustrate the combined SIRD-LSTM architecture which is the heart of our approach. We confirm that all methods were performed in accordance with the relevant guidelines and regulations.
Compartmental model: SIRD model. In this study, we represent the spread of Covid-19 using the susceptible-infected-recovered-dead (SIRD) model. Compartmental models have been used to simplify the mathematical modeling of infectious diseases 34,35. One of the well-known (and simplest) models is the SIR model, and many models including SIRD are derivatives of this basic form 36-38. The SIRD model predicts how a disease spreads, the total number infected, or the duration of an epidemic, and estimates important epidemiological parameters such as the reproduction number. Regarding the compartmental model, the population is assigned to compartments with labels S (susceptible), I (infected), R (recovered) and D (dead). In addition, N is the total number of people in the area at time t, with N = S(t) + I(t) + R(t). The SIRD model is given by the following expressions 15:

$$\frac{dS}{dt} = -\beta\,\frac{S I}{N}, \qquad \frac{dI}{dt} = \beta\,\frac{S I}{N} - \gamma I - \mu I, \qquad \frac{dR}{dt} = \gamma I, \qquad \frac{dD}{dt} = \mu I, \tag{2}$$

where the parameter β, called the contact ratio, represents the effective contact rate, i.e. the expected number of people infected by an infectious person, and γ is the recovery rate, i.e. the expected rate at which people are removed from the infected state. The ratio of β and γ is called the reproduction number, i.e. R₀ = β/γ. The reproduction number (R₀) gives the average number of secondary infections coming from an infected person. The parameter µ is the deceased rate. We assume that recovered subjects are no longer susceptible to infection; the number of deaths due to other reasons is neglected. Further, the region under consideration is assumed to be isolated from other regions. This is a reasonable assumption as containment measures such as travel restrictions have been enforced in most countries. By introducing the vaccination rate, the S(t) and R(t) terms can be modified for the vaccination model. We add the vaccination rate, ν, and the vaccine efficacy factor, ε, into our SIRD model to study an extended SIRD model with vaccination. For instance, ε = 0.95 for the Moderna and Pfizer vaccines 39. More precisely, we introduce a multiplier factor δ = (1 − ε).
The extended SIRD model incorporating vaccination follows 28,29. With the SIRD model, we generate a deep neural network to predict β and µ. Subsequently, the SIRD with vaccination model provides the dynamics of the vaccination with the predicted parameters β and µ. The contact rate, β = β(t), and death rate, µ = µ(t), of many acute infectious diseases vary significantly in time and frequently exhibit significant seasonal dependence 40,41. Epidemiological models can be used to predict contact and death rates, which are important for measuring the spread of disease. A substantial body of research predicts the contact and death rates, β and µ, of infectious diseases via the discrete compartmental model 42-44. The rest of this section introduces an algorithm to compute the time-dependent parameters directly from our data and the discrete SIRD model.
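As a minimal illustration of how the baseline system (2) can be integrated numerically, the sketch below solves the SIRD equations with SciPy's general-purpose ODE solver (the paper itself uses RK4, described next); the parameter values, initial condition and population size are placeholders, and the vaccination variant of refs. 28,29 is not reproduced here.

```python
# Minimal sketch: SIRD right-hand side and a one-week numerical solve.
# gamma = 1/14 as stated in the paper; beta, mu and y0 are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def sird_rhs(t, y, beta, gamma, mu, N):
    S, I, R, D = y
    new_infections = beta * S * I / N
    return [-new_infections,
            new_infections - (gamma + mu) * I,
            gamma * I,
            mu * I]

N = 67e6                           # rough population size (placeholder)
y0 = [N - 1e4, 1e4, 0.0, 0.0]      # hypothetical initial condition
sol = solve_ivp(sird_rhs, (0.0, 7.0), y0,
                args=(0.12, 1 / 14, 0.001, N),
                t_eval=np.arange(0, 8))
S, I, R, D = sol.y                 # compartment trajectories, day by day
```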
Levenberg-Marquardt algorithm.
To estimate the contact rate, β, and the death rate, µ, we use the Levenberg-Marquardt algorithm. To apply the algorithm, we solve the SIRD equations using a numerical approximation. In the present study, we use the fourth-order Runge-Kutta method (RK4), which gives the following discrete version of the SIRD model. For simplicity, we set y = (S, I, R, D); then (2) can be recast as dy/dt = f(t, y; β, γ, µ). The RK4 scheme for (2) can be written as

$$k_1 = f(t_n, y_n), \qquad k_2 = f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_1\right), \qquad k_3 = f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_2\right), \qquad k_4 = f(t_n + h,\, y_n + h k_3),$$

$$y_{n+1} = y_n + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right).$$

Given a dataset y(t), using the Levenberg-Marquardt algorithm, we aim to find the parameters (β_n, µ_n) := (β(t_n), µ(t_n)) of the model curve with the least-squares curve-fitting 45

$$(\hat{\beta}, \hat{\mu}) = \arg\min_{\beta,\mu} \sum_{i=1}^{m} \left( y_{n+1,i} - y_{n,i} - \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) \right)^{2}. \tag{5}$$

We note that the Covid-19 dataset for each country is obtained from the Google mobility report 32.

Figure 8. Forecasting of the number of Covid-19 infections for France, the United Kingdom, Germany, and South Korea under various vaccination schedules. Here, "12%", "20%", and "38%" mean that 12%, 20%, and 38% of the population is vaccinated, respectively.
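A hedged sketch of this weekly estimation step follows: it fits (β, µ) so that one RK4 step reproduces consecutive observations, using SciPy's Levenberg-Marquardt least-squares solver. The data arrays and starting guess are placeholders, not the paper's inputs.

```python
# Fit (beta, mu) over a short window via Levenberg-Marquardt (method="lm").
import numpy as np
from scipy.optimize import least_squares

def rk4_step(f, t, y, h, *p):
    k1 = f(t, y, *p)
    k2 = f(t + h / 2, y + h / 2 * k1, *p)
    k3 = f(t + h / 2, y + h / 2 * k2, *p)
    k4 = f(t + h, y + h * k3, *p)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def f(t, y, beta, gamma, mu, N):
    # SIRD right-hand side as an array, matching equation (2).
    S, I, R, D = y
    new = beta * S * I / N
    return np.array([-new, new - (gamma + mu) * I, gamma * I, mu * I])

def residuals(params, ts, ys, gamma, N):
    beta, mu = params
    h = ts[1] - ts[0]
    preds = np.array([rk4_step(f, t, y, h, beta, gamma, mu, N)
                      for t, y in zip(ts[:-1], ys[:-1])])
    return (preds - ys[1:]).ravel()

ts = np.arange(7.0)                           # one week of daily data
ys = np.tile([66e6, 1e5, 5e4, 1e3], (7, 1))   # placeholder observations
fit = least_squares(residuals, x0=[0.1, 0.01],
                    args=(ts, ys, 1 / 14, 67e6), method="lm")
beta_hat, mu_hat = fit.x
```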
Neural network architecture. Long short-term memory (LSTM) networks are variants of recurrent neural networks (RNN) capable of learning long-term dependencies. They were introduced by Hochreiter and Schmidhuber 46, and are widely used in many fields such as time series prediction 47, speech recognition 48, and robot control 49, among many other applications.
Classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. However, standard RNNs have a computational drawback: their repeating module has a very simple structure, such as a single layer. When training a classical RNN with back-propagation, the gradients that are back-propagated may tend to zero (the vanishing gradient problem), because the RNN remembers data for just a small duration of time. In other words, information needed after a short time may be reproducible, but once a lot of information has been fed in, some of it may get lost. This issue can be resolved by applying a variant of RNNs such as the LSTM network. LSTMs are explicitly designed to avoid the long-term dependency problem, as remembering information for long periods is practically their default behavior.
The compact form of the LSTM with a forget gate can be described by the following system of equations:

$$i_t = \sigma_g(W_i x_t + U_i h_{t-1} + b_i), \qquad f_t = \sigma_g(W_f x_t + U_f h_{t-1} + b_f), \qquad o_t = \sigma_g(W_o x_t + U_o h_{t-1} + b_o),$$

$$\tilde{c}_t = \sigma_c(W_c x_t + U_c h_{t-1} + b_c), \qquad c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t, \qquad h_t = o_t \circ \sigma_h(c_t), \tag{6}$$

where x_t is the input vector, c_t is a memory cell, and {i_t, f_t, o_t} denote the input, forget, and output gates, respectively; for more details, see for instance 46,50,51. Here, the operator ∘ denotes the Hadamard product (element-wise product), and the subscript t indexes the time step.
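A direct NumPy rendering of one step of these gate equations is sketched below; the dimensions and random initialization are illustrative only.

```python
# One LSTM step with a forget gate, following equation (6).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b hold parameters for the input (i), forget (f), output (o)
    # gates and the candidate cell (c), keyed by those letters.
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    c_t = f * c_prev + i * c_tilde        # Hadamard products
    h_t = o * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16                        # 8 inputs: S, I, R, D, p, m, beta, mu
W = {k: 0.1 * rng.standard_normal((n_hid, n_in)) for k in "ifoc"}
U = {k: 0.1 * rng.standard_normal((n_hid, n_hid)) for k in "ifoc"}
b = {k: np.zeros(n_hid) for k in "ifoc"}
h, c = lstm_step(rng.standard_normal(n_in),
                 np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```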
In the proposed neural network, we couple the SIRD model (2) and the LSTM network. By the Levenberg-Marquardt algorithm, predictions on β and µ are made by curve-fitting methods. With this, the input data consist of x_t = {S_t, I_t, R_t, D_t, p_t, m_t, β_t, µ_t}, where p_t is the positive rate (the percentage of all coronavirus tests performed that are actually positive) and m_t is the mobility trend at time t obtained from Google's mobility report. The reports chart movement trends over time by geography, across different categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential. The parameters β_t and µ_t are predicted by the Levenberg-Marquardt algorithm. The output of the LSTM network is (β_{t+1}, µ_{t+1}). When implementing cost functions, we apply a mean-squared forecasting error metric as well as mean-absolute percentage errors.
The network structure and activation of each hidden unit in the hidden layers are determined by the neurons in the previous layers. The activity of each layer is given by a nonlinear activation function σ, such as a sigmoid function or the ReLU function. The final output of the coupled model is obtained by combining the network output of confirmed cases with the SIR model forecast. More precisely, the collective dataset generated from the SIRD model is used as input for the LSTM, whose outputs provide the parameters β and µ for the next time period. By predicting the parameters, we are able to solve the SIRD model, which gives {S, I, R, D} for the next time period. The coupled models given in Fig. 9 illustrate the neural LSTM-SIRD architecture. The network architecture we use is an LSTM with ReLU activation functions, and it is trained using the Adam optimizer with a mean-squared error loss function. The model is not constrained to a particular setup, and we could search over various hyperparameters to manipulate the number of neurons, with similar results.
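The paper does not publish its exact layer sizes or training loop; the following is a hedged PyTorch sketch of such a network, mapping a window of the 8-feature inputs x_t to (β_{t+1}, µ_{t+1}) with an MSE loss and the Adam optimizer, as described above. All sizes, class names and data are illustrative assumptions.

```python
# Hedged sketch of an LSTM forecaster for (beta, mu); not the authors' code.
import torch
import torch.nn as nn

class BetaMuLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))   # (beta, mu)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from the last time step

model = BetaMuLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # mean-squared forecasting error

x = torch.randn(4, 14, 8)                  # placeholder: 4 windows of 14 days
y = torch.rand(4, 2)                       # placeholder targets (beta, mu)
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```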
Figure 9. A description of the combined SIRD-LSTM model structure with Covid-19 community mobility (mobility) and positive test rate (Pos. Test Rate) to generate forecasts of the time-varying parameters (β, µ). The ODE solver, based on the fourth-order Runge-Kutta method, makes use of the predicted parameters in the numerical discretization.
|
2022-02-24T06:23:09.910Z
|
2022-02-22T00:00:00.000
|
{
"year": 2022,
"sha1": "b2c79f7060f355c2604c7378a21319b475c3a171",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-06992-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2746ad1d21de1ac487204042221f39603e6f721a",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247437646
|
pes2o/s2orc
|
v3-fos-license
|
Succession of the wheat seed-associated microbiome as affected by soil fertility level and introduction of Penicillium and Bacillus inoculants in the field
Abstract During germination, the seed releases nutrient-rich exudates into the spermosphere, thereby fostering competition between resident microorganisms. However, insight into the composition and temporal dynamics of seed-associated bacterial communities under field conditions is currently lacking. This field study determined the temporal changes from 11 to 31 days after sowing in the composition of seed-associated bacterial communities of winter wheat as affected by long-term soil fertilization history, and by introduction of the plant growth-promoting microbial inoculants Penicillium bilaiae and Bacillus simplex. The temporal dynamics were the most important factor affecting the composition of the seed-associated communities. An increase in the relative abundance of genes involved in organic nitrogen metabolism (ureC and gdhA), and in ammonium oxidation (amoA), suggested increased mineralization of plant-derived nitrogen compounds over time. Dynamics of the phosphorus cycling genes ppt, ppx and cphy indicated inorganic phosphorus and polyphosphate cycling, as well as phytate hydrolysis by the seed-associated bacteria early after germination. Later, an increase in genes for utilization of organic phosphorus sources (phoD, phoX and phnK) indicated phosphorus limitation. The results indicate that community temporal dynamics are partly driven by changed availability of major nutrients, and reveal no functional consequences of the added inoculants during seed germination.
Introduction
Substantial efforts have been devoted to the characterization of plant-associated microbiomes from the rhizosphere and phyllosphere (Mendes et al. 2013, Compant et al. 2019). In comparison, bacterial communities in or on the seed and in the surrounding spermosphere have only received limited attention even though they represent the starting point for assembly of other plant microbiomes (Torres-Cortés et al. 2018). Interactions between the plant and its associated microbiota are crucial during the dynamic phase of seed germination and seedling development (Nelson et al. 2018, Eyre et al. 2019). Hence, seed application of plant growth promoting microbial inoculants is becoming increasingly used to improve plant health and development at these important early stages (Berninger et al. 2018).
Moreover, the seed contains phosphorus (P) reserves (predominantly as phytate) contributing to the P nutrition of the young seedling (White and Veneklaas 2012). These P reserves may also become available for seed-associated microbiota, i.e. the microorganisms in the seed, on the seed and in the surrounding spermosphere. Thus, the seed habitat represents a nutrient-rich battlefield, where competition takes place between microorganisms coming with the seed, native soil microorganisms and, when applied, introduced microbial inoculants (Nelson 2004).
Studies of seed-associated bacterial communities have primarily been carried out in axenic or well-controlled soil conditions (Barret et al. 2015, Yang et al. 2017, Torres-Cortés et al. 2018) revealing a dominance of Proteobacteria, Actinobacteria and Firmicutes across several plant species (Nelson et al. 2018). Several taxa including Paenibacillus, Pantoea, Pseudomonas and Xanthomonas can be transmitted with the seed as endophytes (Links et al. 2014, Yang et al. 2017, Nelson et al. 2018) and some of these can be beneficial to plant growth due to their antagonism against seed-transmitted fungal pathogens (Links et al. 2014). Seed-associated bacteria may also be recruited from the soil and later be part of the rhizosphere communities (Johnston-Monje et al. 2016). However, insight into the composition and temporal dynamics of seed-associated bacterial communities under realistic field conditions is currently lacking.
In agricultural systems, mineral and organic fertilizers are applied to ensure and improve nutrient availability to crops. Organic fertilizers, such as animal manure, can increase both nutrient and organic carbon content in the soil, enhance microbial activity and typically raise soil pH (Zhong et al. 2010, Li et al. 2015, Blanchet et al. 2016). In contrast, amendment with inorganic fertilizers increases nutrient availability, while it typically decreases soil pH either due to oxidation of ammonium-based compounds or due to hydrolysis of orthophosphoric acids (Zhao et al. 2014). It is well described that long-term fertilizer amendments have large effects on the diversity and composition of soil bacterial communities (Francioli et al. 2016, van der Bom et al. 2018), while it remains unknown whether they are equally important in shaping seed-associated communities and their functional potential in relation to nutrient cycling. Wheat (Triticum aestivum) is a staple crop for the human diet with an increasing world-wide demand (Shewry and Hey 2015). However, grain yield is stagnating in many regions of the world and the application of chemical fertilizers is not always sustainable. Consequently, there is currently a considerable focus on developing sustainable, plant beneficial microbiological solutions for this crop so the gap between demand and production can be filled.
Although the inoculation of free-living microorganisms into agricultural soils often seems to cause only minor impact on rhizosphere community structure (Ambrosini et al. 2016, Silva et al. 2021), their impact on the native seed microbiota composition and functional potential in the field remains largely unknown (Ambrosini et al. 2016). Using the fungal phosphate-solubilizing biofertilizer Penicillium bilaiae (Asea et al. 1988, Kucey 1988, Kucey and Leggett 1989, Wakelin et al. 2004, Leggett et al. 2015) and two recently isolated P. bilaiae hyphae-associated Bacillus simplex strains, shown to stimulate fungal growth and P solubilization in vitro (Ghodsalavi 2016, Ghodsalavi et al. 2017), as model inoculants, the aims of the current field study were to (i) determine the composition of winter wheat seed-associated bacterial communities as affected by time and by long-term soil amendments with mineral or mineral plus organic fertilizers, (ii) analyze the N and P cycling potential of the bacterial communities as affected by time and soil fertility level, (iii) determine the establishment and persistence of P. bilaiae alone and together with B. simplex, and (iv) assess the impact of the added inoculants on the indigenous bacterial communities. We hypothesized that (i) time affected the composition and functionality of the seed-associated microbiota more than the legacy effect of long-term soil fertilizer amendments, as the exudation from the seed during germination changes substantially over time, and (ii) the added inoculants persisted in the seed-associated community over the entire sampling period.
Field site description
The current field study was conducted at the Long-Term Nutrient Depletion Trial (LTNDT) field established in 1964 at the Experimental Research Farm of the University of Copenhagen in Taastrup, Denmark (55°40′N, 12°17′E). The experimental history, design and management practices have been described in detail by van der Bom et al. (2018). Briefly, the entire field received no P or potassium (K), but moderate N fertilizer during 1964-1995, and then the current LTNDT design consisting of seven different fertilizer treatments was established in 1996 (and expanded to 14 treatments in 2010 as described in van der Bom et al. 2018), each represented by four replicate plots (50 m × 20 m) in a block design (Fig. S1, Supporting Information). The current study involved three soil fertility levels: two fully mineral fertilizer (as calcium-ammonium nitrate, triple-superphosphate and potassium chloride, all relatively soluble fertilizers) amendments: (i) N1K1 (120 kg N, 0 kg P, 120 kg K ha⁻¹ y⁻¹), corresponding to a very low P fertility level; (ii) N1P2K2 (120 kg N, 40 kg P, 240 kg K ha⁻¹ y⁻¹), resembling a medium P fertility level, a common Danish agricultural practice in medium P and K soils; and one mixed mineral plus organic fertilizer amendment: (iii) M1P1 (equivalent to an average of 120 kg NH₄-N, 20 kg P and 100 kg K in animal slurry + 20 kg mineral P ha⁻¹ y⁻¹), resembling a medium to high P fertility level with organic amendment, corresponding to the common application rate for animal slurry in Danish agriculture with supplemental mineral P fertilization. Animal slurry and mineral fertilizer are applied every year during spring. In our experimental period in September-October 2016, no fertilizers were applied right before sowing the winter wheat crop studied, but the preceding crop received animal slurry and the first dose of the mineral fertilizer (half the N, all P and K) in April 2016 and the second dose of mineral fertilizer (the other half of the N) in May 2016; however, soluble nutrients from these were completely depleted in the soil by the time of harvest of the preceding crop and the subsequent sowing of the winter wheat. The soil was a sandy loam with 164 g kg⁻¹ clay, 173 g kg⁻¹ silt, 333 g kg⁻¹ fine sand, 312 g kg⁻¹ coarse sand and 17 g kg⁻¹ organic matter. The chemical properties of the soils with the selected long-term fertilizer treatments are shown in Table 1.
Microbial inoculants and experimental design
The microbial inoculants tested in this experiment were (i) the fungus Penicillium bilaiae strain DBS5, provided by Novozymes A/S (Denmark) and selected for its ability to solubilize poorly soluble P sources and (ii) the Bacillus simplex strains 313 and 371, originally isolated from hyphae of P. bilaiae strain ATCC 20851, and shown to stimulate growth and P solubilization of P. bilaiae under laboratory conditions (Ghodsalavi 2016) and improve P uptake of wheat in a pot experiment (Hansen et al. 2020).
In each of the four 50 m × 20 m replicate plots representing the selected soil fertility levels (N1K1, N1P2K2 and M1P1), we established four mini-plots (10 m × 3 m) for testing the different microbial inoculants individually and in combination in the following treatments: (i) noninoculated control (C), (ii) single inoculation with P. bilaiae (PB), (iii) single inoculation with B. simplex strains 313 and 371 (BS) and (iv) combined inoculation with P. bilaiae and B. simplex strains 313 and 371 (PB+BS). All mini-plots were randomized within each soil fertility level. This resulted in 48 mini-plots (3 soil fertility levels, 4 replicate plots per soil fertility level, and 4 mini-plots per replicate plot, one for each of the inoculum treatments).
Introduction of microbial inoculants
Microbial inoculation was performed by coating the winter wheat seeds (var. Benchmark) with P. bilaiae spores and/or liquid fermentations of B. simplex prior to sowing. Briefly, seeds were coated with the dry spores of P. bilaiae in a formulation containing 20 g of P. bilaiae spores (3.9 × 10¹⁰ spores g⁻¹) and 180 mL of a carrier solution consisting of 66.54% (w/w) sterile water, 0.10% K₂HPO₄, 0.02% KH₂PO₄, 21.67% maltodextrin and 11.67% maltose monohydrate. The formulation was added to 20 kg of wheat seeds in a compulsory mixer (Soroto maskiner Aps, Glostrup, Denmark) and mixed for 15 min. For the B. simplex treatment, 200 mL of a liquid solution consisting of B. simplex strain 313 and 371 spores resuspended in carrier solution (total concentration of 1.4 × 10⁷ spores mL⁻¹; 0.7 × 10⁷ spores mL⁻¹ of each strain) was added to 20 kg of wheat seeds in a compulsory mixer as above. For the combined treatment, the inoculation doses were the same as for the single treatments, and the inoculation combined the procedures described above for the individual strains. For the noninoculated control treatment, 200 mL of carrier solution was added to 20 kg of wheat seeds and coated in the same manner as above. After seed treatment, colony-forming units per seed (CFU seed⁻¹) were determined by recovering the seed organisms by shaking the seeds in deionized water with 0.1% Tween 80 for 20 min at 250 rpm. Subsequently, dilution series were plated on tryptone yeast agar with 10 mg L⁻¹ nystatin (for treatments with B. simplex) and potato dextrose agar with 10 mg L⁻¹ of streptomycin and penicillin (for treatments with P. bilaiae). The control treatment was plated on both media to identify the naturally occurring seed community. Determined values were as follows: control, 1.68 × 10⁴ bacterial CFU seed⁻¹ + 3.30 × 10² fungal CFU seed⁻¹; P. bilaiae treatment, 1.83 × 10⁵ CFU seed⁻¹; B. simplex treatment, 8.13 × 10⁴ CFU seed⁻¹; and combined treatment, 1.02 × 10⁵ P. bilaiae CFU seed⁻¹ + 7.96 × 10⁴ B. simplex CFU seed⁻¹. The presented values reflect the naturally occurring community on the seed for the control samples and the actual size of the inocula on the seeds for the treatments with B. simplex and P. bilaiae, as no other microbial colonies were detected on the plates for those samples (plate pictures in Fig. S2, Supporting Information). The inoculated as well as noninoculated seeds were sown with an experimental sower at a seeding rate of 170 kg ha⁻¹ in mid-September 2016, and subsequently managed according to conventional practices.
Sampling and DNA extraction
Samples of bulk soil were collected 3 days before sowing (DBS). Briefly, from each of the 48 mini-plots, 25-30 soil cores were collected from the plough layer (0-20 cm) following a 'W'-shaped sampling pattern, and pooled and mixed to one composite sample for each mini-plot. The 48 bulk soil samples were processed for DNA extraction as described later for the seed samples. Furthermore, samples of coated seeds (n = 5 per inoculant treatment; total 20 samples) were collected prior to planting and processed as described later.
After planting, seed samples were collected for as long as the seed could be clearly discerned in the field, i.e. 11, 14, 17 and 31 days after sowing (DAS). At each sampling day, 48 samples were recovered (corresponding to three soil fertility levels, four inoculation treatments and four replicates), resulting in 192 samples in total. Briefly, for each sample, 10-12 seedlings were randomly selected and gently removed from the soil with the help of a shovel. Loosely adhered soil was carefully removed from the seedlings by shaking and manually disrupting bigger attached soil aggregates until only closely adhered soil was left. Shoots and roots were cut off and discarded while the seeds, turning into seed remains during the time of sampling, were collected into a composite sample. Hence, each sample consisted of 10-12 seeds/seed remains with closely adhered soil, comprising the microorganisms in the seed/seed remains, on the seed/seed remains and in the surrounding spermosphere, i.e. soil closely adhered to the seed/seed remains.
The collected samples were freeze-dried for 24 h and finely crushed using zirconium oxide grinding beads or a mortar and pestle prior to DNA extraction. Samples of 0.5 g were used for DNA extraction using the NucleoSpin® 96 Soil kit (Macherey-Nagel, Düren, Germany) adapted to a Biomek® FCP Laboratory Automation Workstation (Beckman Coulter, Brea, CA, USA). Negative controls (500 μL of PCR-grade water) were included in the extraction rounds and used for 16S rRNA gene amplicon library preparation together with the seed samples. All extracts were quantified using a Qubit 3.0 fluorometer (Invitrogen, Life Technologies, Naerum, Denmark) with a Qubit® dsDNA HS Assay Kit (range 0.2-100 ng; Invitrogen) and stored at −20°C. DNA concentrations of soil and seed extracts ranged from 35 to 111 ng μL⁻¹.
Shoot and root biomass samples were collected once during the study period (17 DAS) to determine if inoculants had any effect on early development of winter wheat. Whole plants with shoots and roots were excavated using a spade from two 0.5-m long rows at different places in each block and combined to obtain one composite sample per plot. The shoots were separated from the roots at the crown, and the roots gently washed under running water. Shoot and root samples were dried in an oven for 48 h at 60°C to determine dry biomass.
16S rRNA gene sequencing and bioinformatics
16S rRNA amplicon libraries were prepared from the extracted DNA using the primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 805R (5′-GACTACHVGGGTATCTAATCC-3′) with adapters, targeting the V3-V4 regions of the 16S rRNA gene (Yu et al. 2005). A detailed library preparation protocol can be found in the Supporting Information. The extracted DNA from the 20 samples of coated seeds was sequenced in an independent run. Paired-end library sequencing was performed using the MiSeq reagent kit v3 (600 cycles) and a MiSeq sequencer (Illumina Inc., San Diego, CA, USA).
The obtained 16S rRNA gene sequences were processed using the UPARSE bioinformatics pipeline version 10.0.240_i86linux64 (Edgar 2013). The paired-end forward and reverse reads were merged using -fastq_mergepairs, followed by trimming of 16 base pairs from the left and 21 base pairs from the right end, corresponding to the PCR primer sequences, using -fastq_truncate (-stripleft 16 -stripright 21). Quality filtering was performed using -fastq_filter with a maximum expected error of 1 (-fastq_maxee 1). All amplicons were reduced to unique sequences using -fastx_uniques and counted (-sizeout). To recover correct biological sequences, zero-radius OTUs (zOTUs) were generated with -unoise3, removing chimeras and reads with sequencing and PCR errors. Reads with sequencing and PCR errors were remapped to the zOTUs using -usearch_global with a minimum sequence similarity of 97% (-id 0.97), mapping to the plus strand only (-strand plus). The taxonomic classification of the zOTUs was conducted using QIIME 2 v. 2019.10 (Bolyen et al. 2019) with classify-sklearn and the SILVA database v. 3 (Quast et al. 2012). zOTUs classified as Mitochondria or Chloroplast were removed from the dataset. Furthermore, zOTUs classified as Unknown at kingdom level or unclassified Bacteria at phylum level also represented plant DNA contamination based on BLAST searches and were removed. Archaeal sequences were also removed from the dataset. The raw sequences were uploaded to the NCBI Sequence Read Archive (SRA) under the bioproject number PRJNA649549. The number of sequences before and after cleanup, the final number of zOTUs, and the individual SRA sample accession numbers are presented in Table S1 (Supporting Information).
Functional gene qPCR array
A panel of 30 bacterial and archaeal functional genes involved in cycling of N or P, as well as the 16S rRNA gene, were quantified in parallel by a high-throughput qPCR (HT-qPCR) quantitative microbial element cycling (QMEC) chip employing a SmartChip real-time PCR system (WaferGen Biosystems, Fremont, CA, USA) (Zheng et al. 2018) (primer sequences can be found in Table S2, Supporting Information). The thermal program was as follows: 10 min at 95°C, followed by 40 cycles of 30 s at 95°C, 30 s at 58°C and 30 s at 72°C. The WaferGen software automatically generated melting curves. Three technical replicates were included in the HT-qPCR, and only samples showing positive amplification in all three replicates were considered for further data analysis. Gene copy number was calculated using the equation: gene copy number = 10^((31 − Ct)/(10/3)), where Ct is the threshold cycle (Looft et al. 2012). Functional genes were normalized to the amount of 16S rRNA genes to obtain relative abundances (copy numbers of functional genes per 16S rRNA gene). Water samples were included as negative controls.
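The stated Ct-to-copy-number conversion and 16S normalization are simple enough to show directly; the Ct values below are hypothetical, and the gene names are only examples from the panel.

```python
# Sketch of the stated HT-qPCR quantification: copies from Ct, then
# normalization to the 16S rRNA gene (illustrative values).
def gene_copy_number(ct: float) -> float:
    return 10 ** ((31 - ct) / (10 / 3))

ct_phoD, ct_16S = 24.5, 15.2   # hypothetical threshold cycles
relative = gene_copy_number(ct_phoD) / gene_copy_number(ct_16S)
print(f"phoD copies per 16S rRNA gene copy: {relative:.5f}")
```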
Twenty microliters of the reaction mixture were loaded into a sample well of a DG8 cartridge (Bio-Rad Laboratories Inc., CA, USA), while 70 μL of Droplet Generation Oil for EvaGreen® (Bio-Rad Laboratories) were loaded into the corresponding oil well. The cartridge was transferred to a QX200 Droplet Generator (Bio-Rad Laboratories) to generate up to 20 000 droplets, according to the manufacturer's instructions. The emulsion was then gently transferred to a twin.tec semi-skirted 96-well PCR plate (Eppendorf, Hamburg, Germany), which was sealed with Pierceable Foil Heat Seal (Bio-Rad Laboratories) using a PX1 PCR Plate Sealer (Bio-Rad Laboratories). The plate was immediately put in a T100 Thermal Cycler (Bio-Rad Laboratories) where end-point amplification was performed under the following conditions: initial denaturation at 95°C for 3 min, followed by a 5-cycle touchdown starting at 64.6°C and decreasing by 0.4°C per cycle, after which 25 cycles of 30 s at 95°C, 30 s at 63°C and 30 s at 72°C were carried out before a final extension for 7 min at 72°C. After completion of the PCR, the sealed plates were moved into the QX200 Droplet Reader (Bio-Rad Laboratories) where droplets and respective signals were analyzed according to the manufacturer's recommendations.
For B. simplex strains 313 plus 371, their joint persistence on the seeds was assessed using the 16S rRNA gene amplicon data by comparing the relative abundance of zOTU 22 in control and inoculant treatments. The relative abundance of the Bacillus zOTU 22 was used as a proxy for the persistence of these two strains in the experiment, as the two Bacillus simplex strains used in this experiment are too similar in the V3-V4 regions of the 16S rRNA gene to be distinguished from each other.
Data analysis and statistics
All statistical analyses were performed with R version 3.6.1 (R Core Team 2019), making substantial use of the phyloseq (McMurdie and Holmes 2013), vegan (Oksanen et al. 2018) and ggplot2 (Wickham 2016) packages. For the sequence data, rarefaction curves at zOTU level were computed using the vegan package (Oksanen et al. 2018). The rarefaction curves indicated that the number of zOTUs increased with the number of sequences, although they did not reach a true plateau (Fig. S3, Supporting Information). Hence, a deeper sequencing effort would be needed to fully cover the diversity of the bacterial communities in the current environment. The rrarefy function was used to rarefy the generated annotation tables to 2000 reads per sample, and diversity indices (Richness, S = number of different zOTUs, and Shannon's diversity, H = −∑ p_i ln p_i summed over the R zOTUs) at zOTU level were calculated using the phyloseq package (McMurdie and Holmes 2013). Differences in the diversity indices of the seed-associated samples (collected at 11, 14, 17 and 31 DAS) were determined using three-way ANOVA. Post-hoc tests were then done using the emmeans package v. 1.7.2 (Lenth 2022) on adjusted linear models using the identified significant factors. P values were adjusted using the Tukey honest significant difference (HSD) test and considered significant when <0.05. Diversity of bulk soil samples collected at 3 DBS was plotted side by side with the seed-associated samples for reference. Downstream analyses were performed using a non-rarefied dataset where samples with <1000 reads were removed. The exception to this were the 20 seed samples collected prior to sowing, which harbored a very limited microbial community and a high plant DNA content, leading to <1000 bacterial reads per sample, corresponding to 1-27 zOTUs per sample (Table S1, Supporting Information). Samples with <10 reads were removed from the downstream analysis. The analysis of those samples was therefore done using non-rarefied data without the >1000 reads threshold. The low number of reads does not allow for a complete characterization of the initial seed microbiome, as several taxa will not have been detected. However, by sequencing the coated seed samples independently of the rest of the samples, we avoided the risk of cross-contamination and index-hopping. Hence, we argue that the detection of specific zOTUs in these samples highlights their presence in the initial seed community, although we cannot make any inferences on relative abundance. Beta-diversity at zOTU level was represented by non-metric multidimensional scaling (NMDS) ordination using Bray-Curtis dissimilarities on relative abundances. In parallel, PERMANOVA (1000 permutations; Bray-Curtis dissimilarity index) was used to evaluate the effects of soil fertility level, time and inoculation treatment on the seed-associated samples (collected at 11, 14, 17 and 31 DAS).
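The analyses themselves were run in R with phyloseq and vegan; purely to illustrate the two diversity indices defined above, a minimal computation on hypothetical zOTU counts could look as follows.

```python
# Illustrative computation of Richness (S) and Shannon's diversity (H)
# for one rarefied sample of zOTU counts (hypothetical values).
import numpy as np

def richness(counts):
    counts = np.asarray(counts)
    return int((counts > 0).sum())             # S: number of distinct zOTUs

def shannon(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()      # proportions of observed zOTUs
    return float(-(p * np.log(p)).sum())       # H = -sum(p_i * ln p_i)

sample = [120, 300, 5, 0, 75, 1500]            # hypothetical zOTU counts
print(richness(sample), round(shannon(sample), 3))
```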
The differential abundance of genera between inoculant treatments was analyzed, while controlling for soil fertility level, at the four sampling days. The differential abundance of classes and phyla between sampling times was analyzed for each soil fertility level, while controlling for inoculation treatment. Differential abundance was determined using beta-binomial regression with the corncob package v.1.0 (Martin et al. 2020). Only genera with an estimated differential abundance of <−1 or >1 and with P-values adjusted for multiple testing <0.05 (FDR < 0.05) were considered significant.
For the analysis of the functional genes, only genes that were above the detection limit in three out of four biological replicates were admitted into the analysis; this excluded the genes hzo, bpp and hzsA. The relative abundances of functional genes under different conditions were compared using Bray-Curtis dissimilarities. PERMANOVA (1000 permutations, Bray-Curtis dissimilarity index) was used to test for the effects of soil fertility level, inoculation treatment and time on the composition of the functional potential. Differences in relative abundance at 11 and 31 DAS were tested using a Mann-Whitney test, as the data did not fit a normal distribution. The obtained P-values were corrected using the Benjamini-Hochberg procedure to account for multiple testing, and differences with adjusted P-values <0.01 were considered significant.
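As an illustration of this gene-wise testing scheme, the sketch below applies a Mann-Whitney test per gene followed by Benjamini-Hochberg correction. The study used R; this Python equivalent and its random data are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
genes_11das = rng.random((10, 12))   # rows = genes, cols = replicates
genes_31das = rng.random((10, 12))

# One two-sided Mann-Whitney test per gene (data are not assumed normal).
pvals = [mannwhitneyu(a, b, alternative="two-sided").pvalue
         for a, b in zip(genes_11das, genes_31das)]

# Benjamini-Hochberg correction; keep differences with adjusted P < 0.01.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
print(np.round(p_adj, 3), reject)
```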
For the analysis of microbial inoculant persistence, two-way ANOVA was used to determine the effect of soil fertility level and inoculation treatment within each time point, for both the P. bilaiae ddPCR data and the relative abundance data of the two B. simplex strains (both included in zOTU 22). For the analysis of shoot and root biomass, two-way ANOVA was used to determine the effect of inoculation treatments and soil fertility level; a linear mixed model was used to account for both fixed and random field block effects. Differences with P-values <0.05 were considered significant.
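A hedged Python sketch of this model structure is shown below using statsmodels; the paper's analyses were done in R, and the data frame, column names and values here are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({  # toy data: 2 fertility levels x 2 treatments x 2 blocks
    "abundance": [0.10, 0.40, 0.20, 0.50, 0.15, 0.45, 0.25, 0.55],
    "fertility": ["N1K1", "N1K1", "M1P1", "M1P1"] * 2,
    "inoculant": ["ctrl", "Bs", "ctrl", "Bs"] * 2,
    "block": ["b1"] * 4 + ["b2"] * 4,
})

# Two-way ANOVA: soil fertility level x inoculation treatment.
fit = smf.ols("abundance ~ C(fertility) * C(inoculant)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))

# Linear mixed model with field block as a random effect.
mixed = smf.mixedlm("abundance ~ C(fertility) + C(inoculant)",
                    data=df, groups=df["block"]).fit()
print(mixed.summary())
```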
Seed development and climatic data
Winter wheat with different inoculant treatments was sown in plots with different soil fertility levels. Independent of treatment or soil fertility, seeds germinated 8 DAS. Seeds with closely adhering soil were subsequently sampled from 3 to 23 days after germination, i.e. 11, 14, 17 and 31 DAS. During the 31 days of sampling, the seeds slowly decomposed. We hereafter refer to bacteria recovered from these samples as seed-associated. During this time period, the plants developed to reach Zadoks growth stage 12 (the two-leaf stage) across all soil fertility levels. The average temperature dropped from ∼15°C to ∼10°C, and the site received ∼85 mm of precipitation during this period (Fig. S4, Supporting Information). The microbial inoculation had no effect on early shoot or root biomass of winter wheat as measured at 17 DAS (Fig. S5, Supporting Information). The highest shoot biomass was achieved at the N1P2K2 fertility level, while the root biomass was equally high at the N1K1 and N1P2K2 fertility levels. While the plant parameters are of high importance for evaluating inoculant performance, the main focus of the present study was the development of the seed-associated bacterial community as affected by inoculants and soil fertility levels.
Diversity and community structure are affected more by time than by soil fertility level and inoculation treatments
At 3 DBS, the Richness and the Shannon diversity for the soil bacterial communities were significantly higher than for the seed-associated bacterial communities at each time point (Fig. 1; P < 0.001). However, the Richness and the Shannon diversity increased significantly with time for the seed-associated communities (Fig. 1). Moreover, a significant interaction between time and soil fertility level was noted for both the Richness and the Shannon diversity (Fig. 1). For M1P1 and N1P2K2, the Richness was significantly lower at 11 DAS compared with the following sampling time points (Tukey HSD; N1P2K2: P < 0.01; M1P1: P < 0.05), whereas for N1K1, 11, 14 and 17 DAS were significantly lower than 31 DAS (Tukey HSD; P < 0.001). The Shannon diversity was significantly higher at 31 DAS compared with 14 and 17 DAS for N1K1 (Tukey HSD; P < 0.05). No differences in Shannon diversity were observed between sampling time points for M1P1. Among fertility levels, Richness and Shannon diversity were significantly higher for M1P1 and N1P2K2 than for N1K1 at 14 DAS (Tukey HSD; Richness: P < 0.05; Shannon diversity: P < 0.05). At 17 DAS, the Richness differed significantly among the three fertility levels (M1P1 > N1P2K2 > N1K1; Tukey HSD; P < 0.05), and the Shannon diversity differed between two fertility levels (M1P1 > N1K1; Tukey HSD; P = 0.0121). There were no significant effects of the introduced inoculants P. bilaiae, B. simplex or the combination of P. bilaiae and B. simplex on the bacterial community alpha diversity.
The compositions of the soil and seed-associated bacterial communities were significantly different (Fig. 2). For the seed-associated communities, the composition was significantly affected by the tested factors time, soil fertility level and inoculum (Fig. 2) (PERMANOVA; P = 0.001 for time and fertility level; P = 0.002 for inoculum; Table S3, Supporting Information). Time was the most prominent predictor of the community composition (R² = 0.27). Accordingly, the sample clustering indicated that the bacterial community changed gradually from 11 until 31 DAS and became more similar to the soil community (Fig. 2A). While soil fertility level was also significant, it explained less of the variation (Fig. 2B) (R² = 0.036). Moreover, the interaction of these two factors also significantly influenced the community composition, showing that soil fertility level shaped the community within each time point (R² = 0.065). The effect of the introduced inoculants was also significant, but this factor was the poorest predictor of community composition (Fig. 2C) (R² = 0.025). Thus, the introduction of inoculants had low impact on the seed-associated community structure compared with time and soil fertility level.
The communities at 11 DAS were dominated by Actinobacteria and Gammaproteobacteria for all soil fertility levels (Fig. 3). In contrast, the soil at 3 DBS was dominated by Actinobacteria, Acidobacteria, Alphaproteobacteria, Chloroflexi and Verrucomicrobia. At the N1K1 soil fertility level, the Gammaproteobacteria significantly increased in relative abundance between 11 and 17 DAS (FDR < 0.01), after which the relative abundance decreased again to the level observed at 11 DAS. Their temporal dynamics coincided with a significant decrease in the relative abundance of Actinobacteria from 11 DAS to 14 and 17 DAS (FDR < 0.05), after which the Actinobacteria again increased in relative abundance at 31 DAS. For the N1P2K2 soil fertility level, the dynamics in Actinobacteria and Gammaproteobacteria resembled those in N1K1, although the changes were smaller and not statistically significant. Considering the M1P1 fertility level, the Actinobacteria showed relative abundances over 50% at 11, 14 and 17 DAS, followed by a significant decline to ∼35% (FDR = 0.01). This again coincided with an increase in Gammaproteobacteria between 17 and 31 DAS. Among the less abundant taxa, the Alphaproteobacteria showed a significant increase in relative abundance after 11 DAS for the N1K1 (FDR < 0.001) and N1P2K2 fertility levels (FDR < 0.005). Furthermore, the relative abundance of Bacteroidetes was significantly higher than at 11 DAS at 14, 17 and 31 DAS for the N1P2K2 fertility level and at 14 DAS for the M1P1 fertility level, while the Betaproteobacteria showed their significantly highest relative abundances at 14 and 17 DAS across soil fertility levels (FDR < 0.05) (Fig. 3).
When the impact of the inoculation treatments was determined at the genus level for individual days, the genus Bacillus had significantly higher relative abundance in treatments with B. simplex and P. bilaiae plus B. simplex (Fig. S6, Supporting Information). No other genera were consistently significantly enriched or reduced at all sampling times (Fig. S6, Supporting Information). At the family level, the seed-associated microbiomes were dominated by Enterobacteriaceae, Micrococcaceae, Pseudomonadaceae, Streptomycetaceae, and Intrasporangiaceae across time, soil fertility level and inoculation (Fig. S7, Supporting Information).

Figure 1. Diversity indices calculated from 16S rRNA gene sequence data at zOTU level for soil at 3 days before sowing, and for seed-associated samples at 11, 14, 17 and 31 days after sowing and for each soil fertility level (5 < n < 16, except for N1P2K2 at 11 DAS where n = 3): (A) Richness index (number of zOTUs) and (B) Shannon diversity index. Samples were rarefied to 2000 reads prior to analysis. Each sample is represented by a point, where color identifies the applied inoculum and shape the fertility level it was collected from. For the seed-associated samples, factors showing a significant impact on each diversity index (ANOVA followed by post-hoc tests using the Tukey method; P < 0.05) are depicted in the respective panel with the corresponding P values. DBS: days before sowing; DAS: days after sowing.
Furthermore, the 15 most abundant zOTUs across seed-associated communities at 11-31 DAS and the soil community at 3 DBS were determined (Fig. 4; Fig. S8, Supporting Information). The most abundant zOTU belonged to the genus Erwinia, and in general we found high relative abundances of zOTUs belonging to the genera Arthrobacter, Erwinia, Pseudomonas, Sphingomonas and Streptomyces. Several zOTUs, including some within the genera Erwinia, Pseudomonas, Salinibacterium and Sphingomonas, were also detected in seeds prior to germination (Fig. 4). The Bacillus zOTU 22 was only detected in seeds that were coated with the Bacillus simplex strains (Fig. 4). Other zOTUs within Arthrobacter, unclassified Kouleothrixaceae, Streptomyces and Terracoccus were common to soil and seed-associated communities but not detected on the original seeds (Fig. 4).
Penicillium bilaiae and B. simplex persist in the seed-associated community until 31 DAS
The inoculation treatments that had received P. bilaiae, alone or in combination with B. simplex, had population sizes of P. bilaiae of ∼1 × 10⁸ ITS copies g sample⁻¹ throughout the sampling period (Fig. 5A). There were no consistent effects of the soil fertility levels on the population size for any sampling day. A significantly lower indigenous population of P. bilaiae, below 1 × 10⁶ ITS copies g sample⁻¹, was found for the samples that had not been inoculated with P. bilaiae (P < 0.022, except at 11 DAS). The relative abundance of the Bacillus zOTU 22 was significantly higher for treatments that had received B. simplex inoculants than for control treatments throughout the sampling period (Fig. 5B; P < 3 × 10⁻⁵), indicating persistence of the inoculum. Moreover, a small effect of the soil fertility level was noted for the relative abundance of the Bacillus zOTU 22 at 17 DAS (P = 0.001), where values were higher for the N1K1 fertility level (Fig. 4) compared with N1P2K2 and M1P1 (Fig. S8, Supporting Information). In summary, regardless of the soil fertility level, the P. bilaiae and B. simplex strains were still found in or on the seeds at 31 DAS.
The potential of seed-associated communities for N and P cycling changes with time
A panel of functional genes involved in the cycling of N and P was quantified using high-throughput qPCR, and relative abundances were calculated after normalization to 16S rRNA gene copy numbers. For the relative abundances of these genes, time (R² = 0.40) and soil fertility level (R² = 0.022) were significant predictors (PERMANOVA, P < 0.001 for both factors), while the addition of inoculants had no significant effect (Table S4, Supporting Information). The temporal changes in the relative abundances of the panel of genes occurred gradually between 11 and 31 DAS (Fig. S9, Supporting Information), and we therefore describe the changes between these two time points below. Furthermore, the soil fertility level affected the relative abundance of only very few functional genes, and we consequently describe changes collectively across soil fertility levels, making specific references to the genes affected by the soil fertility level. Several genes involved in N cycling decreased significantly over time (P < 0.01, Mann-Whitney test), showing the highest relative abundance at 11 DAS across all soil fertility levels (Fig. 6). This included genes involved in anaerobic ammonium oxidation (anammox), such as hydroxylamine oxidoreductase (hao) and hydrazine synthetase (hzsB), as well as in aerobic ammonia oxidation (amoB). In addition, the nitrate reductases (nasA and narG) (Fig. 6A) decreased from 11 to 31 DAS in all soil fertility levels, while the nitrous oxide reductase (nosZ2) decreased from 11 to 31 DAS only in the N1P2K2 soil fertility level. For the nitrite reductases nirS and nirK, some variants showed decreased abundance with time, while others showed increased abundance with time (Fig. 6B). The amoB gene was significantly lower in the N1K1 soil fertility level at 31 DAS compared with 11 DAS.
Other genes increased in relative abundance over time, showing the highest relative abundance at 31 DAS (P < 0.01, Mann-Whitney test) (Fig. 6B). Among those were the genes encoding enzymes involved in aerobic ammonia oxidation (amoA, both bacterial and archaeal), as well as genes involved in nitrite oxidation (nxrA). In addition, the ureC and gdhA genes involved in organic nitrogen mineralization increased in relative abundance with time. Some of these genes only differed in the N1K1 and M1P1 (nirS2, napA), N1K1 (nirK3), or N1K1 and N1P2K2 (nxrA) soil fertility levels. Only the relative abundances of nifH and nirS1 did not change with time.
Among the selected genes involved in P cycling, ppk and ppx, involved in the formation and degradation of polyphosphate, respectively, were found in significantly higher abundance at 11 DAS (P < 0.01, Mann-Whitney test) (Fig. 7). Moreover, the gene coding for cysteine phytase (cphy), which initiates the hydrolysis of phytate to release phosphate, was only detected in the samples at 11 DAS but not at 31 DAS (Fig. 7B). In contrast, the phoD and phoX genes encoding alkaline phosphatases involved in the mineralization of organic P sources, the phnK gene involved in the utilization of phosphonate, and the pqqC gene involved in the solubilization of inorganic phosphorus had higher relative abundances at 31 DAS (Fig. 7B). The phoX gene was only significantly different between 11 and 31 DAS in the N1P2K2 soil.
The development of the seed-associated bacterial communities
The seed-associated microbiome is thought to be important for the establishment of the rhizosphere microbiome, which plays a pivotal role in plant health and yield (Shade et al. 2017). However, the development of the seed-associated microbiome under field conditions has rarely been studied. Here, we provide the first field study addressing the development of wheat seed-associated bacterial communities during seedling emergence and how they are affected by soil fertility levels and inoculants.
The seed-associated communities had lower alpha diversity than the bulk soil samples, in line with findings from maize and Brassica napus (Rochefort et al. 2021, Shao et al. 2021). The seed-associated communities were dominated by Actinobacteria, Gammaproteobacteria and Alphaproteobacteria, while the corresponding bulk soil samples were dominated by Actinobacteria, Acidobacteria, Alphaproteobacteria, Chloroflexi and Verrucomicrobia, in agreement with previous reports for these soils (van der Bom et al. 2018, Zhang et al. 2020). Some of the dominating seed-associated taxa (Erwinia, Pseudomonas, Salinibacterium and Sphingomonas) might come with the seeds as endophytes (Özkurt et al. 2019, Kuźniar et al. 2020) or as epiphytes (Links et al. 2014). Several zOTUs were found both in the original seeds and in the seed-associated samples even 31 DAS, in accordance with previous studies. As the seed and seed-associated samples were sequenced in different runs, we can exclude the possibility of cross-contamination. Hence, we propose that some seed-associated taxa come from the seeds, although the low number of sequences from the original seeds prevents us from drawing definite conclusions on their relative abundance in the original seeds. Studies that addressed the epiphytic versus the endophytic bacterial communities of soil-free or surface-sterilized wheat seeds found that dominating epiphytic taxa included Proteobacteria (the genera Massilia, Sphingobium, Sphingomonas and Xanthomonas; Links et al. 2014), while Proteobacteria and Firmicutes (Acinetobacter, Paenibacillus and Paracoccus) dominated in the endosperm, and members of the Enterobacteriaceae and Pseudomonadaceae (Pantoea and Pseudomonas) were abundant in both compartments (Links et al. 2014, Kuźniar et al. 2020). However, the current seed-associated communities also contained abundant genera, such as Arthrobacter, Kaistobacter, Streptomyces, and Terracoccus, that were shared with the soil. In addition, the alpha diversity of seed-associated communities increased with time, and the communities increasingly resembled soil communities. Thus, our findings support the notion that seed-associated bacterial communities are assembled from both seed- and soil-borne taxa. Previous studies have shown that rhizosphere communities include both seed- and soil-derived taxa (Johnston-Monje et al. 2016, Kavamura et al. 2019). Interestingly, Walsh et al. (2021) recently demonstrated that while seed microbiota contribute significantly to the wheat seedling bacterial community, the influence of soil-derived communities on the seedling microbiome is predictable, yet variable between soils. In our current study, several seed-borne taxa persisted in the seed-associated communities even 31 DAS, while several soil-borne taxa emerged during germination, with the potential to influence the health of the emerging seedling (Nelson 2017).
The composition and N/P cycling potential of the seed-associated communities are highly dynamic during germination

High concentrations of resources are available at the seed during germination and emergence (Nelson 2004, Schiltz et al. 2015, Nelson et al. 2018), and the high abundance of copiotrophic genera such as Erwinia, Pseudomonas, Salinibacterium and Sphingomonas suggests an enrichment in and on the seed of taxa able to degrade seed components and seed exudates (Lemanceau et al. 2017). Moreover, seed germination and early seedling development constitute a dynamic phase of a plant's life cycle (Eyre et al. 2019). In accordance, time was the most important factor shaping the composition of the seed-associated bacterial community in this study. The seeds germinated at 8 DAS, and the current sampling period hence covered the period of seed exudation that, under laboratory conditions, lasts a few days after germination (Schiltz et al. 2015), but is probably extended at the current ambient temperatures between 10 and 15°C. At the N1K1 soil fertility level, the Gammaproteobacteria increased in relative abundance with time up to 17 DAS, while the relative abundance of Actinobacteria decreased. For comparison, an increase in Gammaproteobacteria has been recorded during emergence for seeds of Brassica species, and a comparable increase concomitant with a decrease in Actinobacteria was seen for bean and radish seeds in laboratory systems (Barret et al. 2015, Torres-Cortés et al. 2018).

Figure 6. Genes in red decreased in relative abundance and genes in blue increased; genes in black did not change with time. (B) Relative abundances of gene copies normalized to copies of 16S rRNA genes at 11 and 31 days after sowing (DAS) (n = 48). Only genes with significantly different relative abundance between the two sampling times are shown (P < 0.01, Mann-Whitney test). Genes in the left panel had a significantly higher relative abundance at 11 DAS, while genes in the right panel had a significantly higher relative abundance at 31 DAS. The box plots show the median, the two hinges (25th and 75th percentiles), and whiskers extending from the hinges to the largest and smallest values no further than 1.5 times the interquartile range.
In contrast to time, the soil fertility level had a much smaller effect on the alpha diversity and composition of seed-associated communities. Further, soil fertility level did not have an effect on the alpha diversity of the bulk soil communities (3 DBS), in agreement with previous findings from the same fields (van der Bom et al. 2018). Overall, the current data support our first hypothesis that time has a larger effect on the community composition than the long-term history of soil fertilizer amendment.
The relative abundances of genes involved in aerobic and anaerobic N cycling, as well as in P cycling, were used as proxies for the processes catalyzed by their predicted proteins. Although the 16S rRNA gene copy number varies between bacterial taxa (Větrovský and Baldrian 2013), normalization to 16S rRNA gene copies is used as a general proxy for bacterial abundance, and was hence used in the present study as a means of comparing across samples. Genes related to anaerobic ammonia oxidation (hao and hzsB), carried out by strictly anaerobic Planctomycetes (Lage and Bondoso 2014), significantly decreased from 11 to 31 DAS. Anammox bacteria are able to convert organic compounds to sustain their metabolism, most notably formate, acetate and propionate (Kartal et al. 2013), which could explain the abundance of these genes in or around the seeds in the beginning of the period, as seed exudates containing organic acids are released during germination and emergence (Nelson 2004, Schiltz et al. 2015). In contrast, genes involved in aerobic ammonia oxidation (amoA) increased with time. Furthermore, the findings suggest that ammonium was available for the seed-associated communities throughout the sampling period, as seen from an increase of genes involved in ammonium-generating organic N transformation (gdhA and ureC). This increase could reflect ammonification of amino acids and urea from seed exudates (urea has been found in root exudates of other cereals; Naveed et al. 2017), or of amino acids coming from the degradation of proteins found in cells of the outer layers of the wheat grain (Šramková et al. 2009). These findings indicate higher microbial activity and oxygen consumption early after germination, supporting a transition from an initial anaerobic environment to aerobic conditions following seed germination. This is in accordance with previous reports of high oxygen consumption and at times anaerobic conditions in both the rhizosphere and spermosphere (Højberg and Sørensen 1993, Bewley et al. 2013).

Figure 7. Genes in red decreased in relative abundance and genes in blue increased. (B) Relative abundances of gene copies normalized to copies of 16S rRNA genes at 11 and 31 days after sowing (DAS) (n = 48). At 31 DAS, cphy was below the detection limit for most of the samples. Only genes with significantly different relative abundance between the two sampling times are shown (P < 0.01, Mann-Whitney test). Genes in the left panel had a significantly higher relative abundance at 11 DAS, while genes in the right panel had a significantly higher relative abundance at 31 DAS. The box plots show the median, the two hinges (25th and 75th percentiles), and whiskers extending from the hinges to the largest and smallest values no further than 1.5 times the interquartile range.
For the genes involved in nitrate reduction, nasA and narG decreased, whereas napA increased in abundance. A reduction in genes encoding nitrate reductases would be expected during a period with increasing oxygen levels. The contrasting increase of napA genes, encoding a periplasmic nitrate reductase, could be explained by the proliferation of bacteria performing aerobic denitrification. Aerobic denitrification has been shown repeatedly in other environments, with napA being the nitrate reductase used in this reaction (Ji et al. 2015). The nitrite reductase genes (nirK and nirS) did not show a clear overall temporal pattern, as the genes targeted by the different primers (nirS1, nirS2, nirS3 and nirK1, nirK2, nirK3) displayed diverse and partly contradictory changes with time.
The relative abundance of P cycling genes showed considerable temporal dynamics, and the genes involved in phytate degradation (cphy) and polyphosphate cycling (ppk and ppx) showed a decline with time. Phytate is the major P storage compound in plants and can represent a substantial proportion of seed dry weight (Lott et al. 2000). Phytate is degraded by plant phytases to release inorganic P (and other nutrients) for the developing seeds (Lott et al. 2000). However, our detection of bacterial cphy genes at 11 DAS suggests that bacteria are able to use the phytate stored in the seeds and might compete for released P during early germination. Our inability to detect the cphy phytase genes in the majority of the samples at 31 DAS (and 17 DAS; Fig. S9, Supporting Information) indicates that the seed-associated phytate is subsequently depleted. During germination, the hydrolysis of phytate releases inorganic orthophosphate that seemingly becomes available to bacteria, as seen from the observed dynamics of the ppk and ppx genes involved in polyphosphate cycling. For comparison, a high occurrence of ppx and ppk genes was found in the maize rhizosphere, suggesting a possible enhancement of polyphosphate transformation and the availability of inorganic P in this environment (Li et al. 2014). According to the study by Li and co-workers, these genes were mainly distributed in Proteobacteria and Actinobacteria, taxa that dominate in the wheat seed-associated microbiota. Increases with time in the relative abundances of phnK, involved in phosphonate utilization, the alkaline phosphatase genes phoD and phoX, as well as pqqC, involved in inorganic P solubilization, might indicate a decrease in available P with time, forcing the bacteria to exploit other organic P sources and increase solubilization of inorganic P. Genes involved in phosphonate utilization have previously been found in soil (Liu et al. 2018), and the phnK gene has even been found to be enriched in bacterial communities of the fungal hyphosphere, another nutrient cycling hotspot in the soil environment (Zhang et al. 2020). Along the same lines, a high abundance of alkaline phosphatase genes, correlating with low available P (Olsen P), has been recorded in soil and rhizosphere environments (Acuña et al. 2016, Schneider et al. 2019), while the relative abundance of the pqqC gene is affected by several soil factors, such as pH (Zheng et al. 2019), and hence shows a less clear relation to P availability. Taken together, the dynamics of P cycling genes indicate intense inorganic P and polyphosphate cycling by the seed-associated bacteria early after germination, driven by utilization of inorganic P released from the hydrolysis of phytate. Later, phytate depletion reduces access to easily available P, forcing the seed-associated bacteria to use other organic P sources or solubilize inorganic P. Across all N and P cycling genes, the results show a larger effect of time than of the soil fertility level, supporting our first hypothesis.
The microbial inoculants persisted throughout the experiment but showed very minor impact on the seed-associated bacterial communities
Microorganisms with plant-beneficial traits are often added as inoculants to seeds to increase plant growth and health. Seed-coated microbial inoculants need to compete with indigenous communities in order to successfully colonize the seeds and benefit the plant. Therefore, we determined the persistence of the two inoculants, P. bilaiae and B. simplex, on the seeds in soils with different soil fertility levels. Penicillium bilaiae and B. simplex persisted on the seeds for the entire sampling period, supporting our second hypothesis. The Penicillium bilaiae population size was neither affected by time nor by soil fertility level. Previous studies have demonstrated persistence of P. bilaiae for 3-4 weeks on wheat and maize seeds in laboratory pot or rhizobox experiments (Gómez-Muñoz et al. 2017, Hansen et al. 2020), and furthermore, P. bilaiae is found in as well as on wheat roots (Wakelin et al. 2004). Studies on inoculation of wheat with B. simplex are limited (Hassen and Labuschagne 2010, Hansen et al. 2020). In the present study, a transient increase in B. simplex was observed at 14 DAS. In contrast, Hansen et al. (2020) reported relatively stable populations of the current B. simplex strains on wheat seeds for up to 3 weeks. This discrepancy might be explained by the more frequent sampling in the present study, allowing transient dynamics to be revealed. This further points to the importance of temporal dynamics when studying microbial interactions in natural systems. The inoculants, alone or in combination, did not affect the bacterial community alpha diversity, had a minor impact on the community compositions, and had insignificant effects on the occurrence of N and P cycling genes. Hence, there were no detectable unintended consequences of these inoculants in this natural seed-soil system, as the small changes in community structure were primarily due to the high amount of B. simplex introduced to the system. While the impact of inoculation with P. bilaiae and B. simplex is understudied, previous studies have found transient impacts of seed-coated B. subtilis and B. amyloliquefaciens inoculants on tomato and lettuce rhizosphere microbiota (Erlacher et al. 2014, Qiao et al. 2017). These inoculated strains were biocontrol strains able to suppress soil-borne diseases and produce several antimicrobial metabolites, such as polyketides and nonribosomal lipopeptides, causing a direct impact on the rhizosphere microbiota. The genomic backgrounds of the B. simplex strains used here as part of a biofertilizer consortium are not yet known, but information for another B. simplex strain, 30N5, did not reveal genes for the production of these antimicrobial compounds (Maymon et al. 2015). This may partly explain the observed small impact of B. simplex on the indigenous seed-associated microbiota. Moreover, this study focused on the persistence of the inoculants on the seed and therefore did not explore their potential to colonize the newly formed roots and function there. The persistence of B. simplex and P. bilaiae in the current study was assessed by direct DNA-targeted methods, as differentiating the inoculants from indigenous soil species by plating was not possible. Development of methods to specifically determine the viability or metabolic activity of the inoculants under field conditions is highly needed to improve our understanding of the impact of inoculants on the seed, root and soil microbiota.
Conclusion
The seed-associated communities were dominated by taxa previously recognized as seed-associated or endophytic, highlighting the importance of the seed for shaping its associated bacterial communities, even under field conditions. This notion is supported by a low impact of the soil fertility level on seed community composition in contrast to the considerable impact reported for the same fertilizer amendments on corresponding soil microbiota (van der Bom et al. 2018). A role of the seeds for the nutrition of their associated bacteria is indicated by the increased abundance over time of genes involved in organic N metabolism and ammonium oxidation, probably reflecting increased mineralization of plant-derived amino acids. Moreover, the high abundance of phytase genes after germination indicated bacterial mineralization of this seed storage compound. Indeed, it would be relevant to study whether bacterial phytase activity provides P for the plant or whether bacteria and plants compete for P. The inoculants had very limited impacts on the composition and potential functionality of the seed-associated bacterial community but have previously shown positive effects on wheat P uptake under laboratory conditions (Hansen et al. 2020). This highlights the potential of an indirect priming effect on plant performance by inoculants rather than a direct impact on microbial functionality, which should be addressed in future studies.
MR-PIPA: An Integrated Multilevel RRAM (HfOx)-Based Processing-In-Pixel Accelerator
This work paves the way to realize a processing-in-pixel (PIP) accelerator based on a multilevel HfOx resistive random access memory (RRAM) as a flexible, energy-efficient, and high-performance solution for real-time and smart image processing at edge devices. The proposed design intrinsically implements and supports a coarse-grained convolution operation in low-bit-width neural networks (NNs) leveraging a novel compute-pixel with nonvolatile weight storage at the sensor side. Our evaluations show that such a design can remarkably reduce the power consumption of data conversion and transmission to an off-chip processor maintaining accuracy compared with the recent in-sensor computing designs. Our proposed design, namely an integrated multilevel RRAM (HfOx)-based processing-in-pixel accelerator (MR-PIPA), achieves a frame rate of 1000 and efficiency of ~1.89 TOp/s/W, while it substantially reduces data conversion and transmission energy by ~84% compared to a baseline at the cost of minor accuracy degradation.
I. INTRODUCTION
Internet-of-Things (IoT) devices are expected to reach $1100B in revenue by 2025, with a web of interconnections estimated to consist of more than 75 billion IoT devices, including wearable devices as well as smart cities and industries [1], [2]. Artificial Intelligence-of-Things (AIoT) nodes are composed of a variety of sensors, which are used to collect and process data from the environment and people. A great deal of redundant and unstructured sensory data is usually captured, and the conversion and transmission of large volumes of raw data to a back-end processor at the edge are energy-intensive and incur high latency [1], [3]. Those issues can be addressed by shifting the computing architecture from a cloud-centric way of thinking to a thing-centric (data-centric) perspective, where IoT nodes process the sensed data. Even so, artificial intelligence tasks that require hundreds of layers of convolutional neural networks (CNNs) impose severe computational and storage constraints. There have been considerable advancements in both software and hardware to improve CNN efficiency by mitigating the "power and memory wall" bottleneck.
From the software point of view, shallower but wider CNN models, parameter quantization, and network binarization [4] have been widely explored. A recent development is reducing computing complexity and model size using low-bit-width weights and activations. By converting the multiply-and-accumulate (MAC) operation into corresponding AND-bitcount operations, Zhou et al. [4] performed bit-wise convolution between the inputs and the low-bit-width weights. Binarized CNNs (BNNs), as an extreme quantization method, have achieved acceptable accuracy on both small [5] and large datasets [4] after removing some high-precision requirements. By binarizing the weight and/or input feature map, they offer a promising solution to mitigate the aforementioned bottlenecks in storage and computation.
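As a toy illustration of the AND-bitcount idea (for activations and weights binarized to {0, 1}), the dot product of two bit-vectors packed into integers reduces to an AND followed by a population count; the packing convention below is illustrative, not code from [4].

```python
def binary_dot(x_bits: int, w_bits: int) -> int:
    """Dot product of two {0,1} bit-vectors packed into Python ints."""
    return bin(x_bits & w_bits).count("1")   # AND, then popcount

# Example: x = [1, 0, 1, 1], w = [1, 1, 0, 1] (LSB-first packing).
x = 0b1101
w = 0b1011
assert binary_dot(x, w) == 2   # matches at bit positions 0 and 3
```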
From the hardware point of view, the underlying operations should be realized using efficient mechanisms. Conventional processing elements are designed around a von Neumann computing model involving separate memory and processing blocks interconnected via buses, which poses serious problems, such as long memory access latency, limited memory bandwidth, and energy-hungry data transfer, limiting an edge device's efficiency and operating time [2]. In addition, this presents several significant issues at the upper level, including bandwidth congestion and security concerns. The concept of instant image preprocessing with smart image sensors has therefore been extensively investigated [2], [6], [7], [8] as a potential remedy. By using an on-chip processor, the digital output from the pixels can be processed where the sensor is located, paving the way for enhanced sensor paradigms such as processing-near-sensor (PNS), as depicted in Fig. 1(b). Other promising alternatives are processing-in-sensor (PIS) platforms [7], [9], as shown in Fig. 1(c), which process data before the analog-to-digital converter (ADC), and hybrid PIS-PNS platforms [1] that incorporate vision sensors and eliminate redundant data output. Generally, PIS units process images before transmitting them to an on-chip processor for further processing. Typical designs rely on this type of data transfer (from CMOS image sensors to memory), which reduces the speed of feature extraction. With a PIS unit, a computation core can: 1) significantly reduce the power consumption of converting photo-currents into pixel values used for image processing; 2) accelerate data processing; and 3) alleviate the memory bottleneck problem [1], [2].
This article develops a new efficient processing-in-pixel (PIP) paradigm, as shown in Fig. 1(d), named an integrated multilevel RRAM (HfOx)-based processing-in-pixel accelerator (MR-PIPA), co-integrating always-on sensing and processing capabilities for image sensors. The main contributions of this work are as follows.
1) We experimentally demonstrate an integrated two-bit-per-cell resistive random access memory (RRAM)-based weight storage unit. As low resistance states (LRSs) of the RRAM devices can lead to high power consumption, we run extensive device-level experiments on the fabricated device to achieve multilevel high-resistance states.
2) The MR-PIPA architecture is developed based on a set of innovative microarchitectural and circuit-level schemes optimized to process the first layer of quantized neural networks (QNNs) using nonvolatile RRAM components to store weights, offering energy efficiency and speedup.
3) We present a solid bottom-up evaluation framework and a PIP assessment simulator to analyze the whole system's performance.
4) MR-PIPA's performance and energy efficiency are thoroughly evaluated and then compared with recent IoT sensory platforms.
II. BACKGROUND AND MOTIVATION
Systematic integration of computing and sensor arrays has been widely studied: eliminating off-chip data transmission and reducing ADC bandwidth, known as PNS [8]; combining sensor and processing elements in so-called PIS designs [9], [10], [11]; and integrating pixels and computation units, known as PIP [7], [8]. In [8], photo-currents are converted into pulse-width modulation signals, and a dedicated analog processor is used to perform feature extraction, reducing the amount of power consumed by the ADC. To run spatiotemporal image processing, 3-D-stacked column-parallel ADCs and processing elements are implemented and utilized in [2]. The CMOS image sensor with dual-mode delta-sigma ADCs described in [12] is designed to process the first convolutional (Conv.) layer of binarized-weight neural networks (BWNNs). Charge-sharing tunable capacitors are used by RedEye [13] to implement the convolution operation. By sacrificing accuracy in favor of energy savings, this design reduces energy consumption compared to a central processing unit (CPU)/graphics processing unit (GPU); however, for high-accuracy computation, the required energy per frame increases dramatically, by 100×. As a PIS platform, MACSen, a processing-in-sensor architecture integrating MAC operations into the image sensor [7], processes the first convolution layer of BWNNs with a correlated double sampling procedure and achieves speeds of 1000 fps in the computation mode. This method, however, suffers from a substantial area overhead and high power consumption. In this work, we are motivated mainly by three observations to develop a PIP accelerator for the first layer of QNNs. First, from the accuracy point of view, in most QNN accelerators the first and the last layers of the network remain in full precision, that is, the floating-point domain. This translates into a performance bottleneck in different hardware/software co-design accelerators and requires excessive memory and processing resources [14]. The continuous-valued inputs can be readily handled as fixed-point values with n bits of precision. To verify this, we utilized the deep neural network (NN) energy estimation tool developed by the Massachusetts Institute of Technology (MIT) [15] to assess the energy requirements. Fig. 2 depicts the breakdown of the normalized energy consumption of a three-layer multilayer perceptron (MLP). As observed, the first layer consumes considerably more energy than the other layers for computation (purple block) and data movement (the other three blocks). It is worth noting that this breakdown can vary for different NN architectures. Second, in conventional image sensors, most of the power (>96% [16]) is consumed by processing and converting pixel values. This means that the pixel circuits consume only 4% of the power to perform photovoltaic conversions, whereas signal amplification, data conversion, and data transmission consume most of the power. Third, almost all PNS/PIS/PIP systems are hardwired, so their functionalities are limited to simple preprocessing tasks such as first-layer BWNN computation.
III. PROPOSED RRAM-BASED MULTIBIT STORAGE
RRAM is a two-terminal nonvolatile memory (NVM) that stores data in varying resistive states by creating and rupturing a conductive filament within the metal-oxide insulator, as shown in Fig. 3(a). Fig. 3(b) illustrates a transmission electron micrograph (TEM) of the fabricated TiN/Ti/HfO2/TiN RRAM device integrated with a CMOS n-channel field-effect transistor (nFET) in 65-nm CMOS technology to realize a 1T1R unit cell as the primary storage element in the proposed PIP accelerator. In the set phase, the conductive filament connects the top and bottom electrodes, leading to an LRS, whereas in the reset phase, the filament breaks and the resistance of the device increases, yielding a high resistance state (HRS), as shown in Fig. 3(a). Switching between LRS and HRS allows RRAM to operate as a binary storage/memory element. Leveraging different switching schemes enables RRAM devices to store multilevel resistance states [Fig. 3(c)] for multibit-per-cell storage [17]. The most common ways to produce multilevel resistance states are modulating the compliance current to reach multiple lower resistance states and modulating the reset voltage amplitude to reach multiple HRSs [18], [19]. The first approach results in an increased cell current due to low resistance and consequently increases overall system power consumption, while the latter results in higher HRS variability. Therefore, we propose a promising device-to-system-level co-design approach to reduce overall system power consumption, aiming at multiple well-defined HRS levels. Fig. 4(a) shows the experimental results for switching voltage pulse widths across the RRAM and gate voltages on the transistor [Fig. 3(b)]. The device-level switching experiments were performed using a semiautomated Suss Microtech probe station with a high-precision semiconductor device analyzer B1500. Switching pulse widths of 100 ns to 1 ms and a range of gate voltages during switching were considered on 15 devices, with 1000 cycles for each condition. The median resistance values at the HRS range from 80 to 200 kΩ. This approach yields much higher resistances compared to the low resistance levels, which range from 3 to 30 kΩ [20]. To reduce HRS variability, we adopted a read-write-verify approach to achieve resistances within a specific window, as shown in Fig. 4(b) [17]. The selected experimental resistance states then serve as the potential memory states for MR-PIPA. We confirmed that the read-write-verify strategy requires a minimal number of programming cycles: the box plots in Fig. 4(b) show that the median number of required programming cycles is as low as 20.
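The following self-contained sketch illustrates the read-write-verify loop: pulse, read back, and stop once the resistance falls inside the target window. The toy cell model and pulse effects are simplified assumptions for illustration, not the instrument-control code used in our experiments.

```python
import random

def read_resistance(cell):
    return cell["R"]                      # low-voltage read, e.g., ~0.2 V

def apply_reset_pulse(cell, gate_v):
    cell["R"] *= 1.0 + gate_v * random.uniform(0.05, 0.25)  # push R up

def apply_set_pulse(cell, gate_v):
    cell["R"] /= 1.0 + gate_v * random.uniform(0.05, 0.25)  # pull R down

def program_to_window(cell, r_lo, r_hi, gate_v=1.0, max_cycles=200):
    """Pulse and verify until the resistance lands inside [r_lo, r_hi]."""
    for cycle in range(1, max_cycles + 1):
        r = read_resistance(cell)
        if r_lo <= r <= r_hi:             # verify step
            return cycle                  # median was ~20 cycles on-chip
        if r < r_lo:
            apply_reset_pulse(cell, gate_v)
        else:
            apply_set_pulse(cell, gate_v)
    raise RuntimeError("cell did not converge to the target window")

cell = {"R": 30e3}                        # start near a 30-kOhm LRS
print(program_to_window(cell, 80e3, 100e3))  # target one HRS level
```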
IV. MR-PIPA ARCHITECTURE
We propose an energy-efficient and high-performance solution for real-time and smart image processing for AIoT devices. MR-PIPA will integrate sensing and processing phases and can intrinsically implement a coarse-grained convolution operation required in a wide variety of image-processing tasks such as classification by processing the first layer in QNNs. Once the object is roughly detected, MR-PIPA will switch to a typical sensing mode to capture the image for a fine-grained convolution.
A. MICROARCHITECTURE
At the architecture level, the MR-PIPA array consists of an m × n compute focal plane (CFP), row and column controllers (Ctrl), a command decoder, sensor timing control, and sensor I/O operating in two modes, that is, sensing and processing, as shown in Fig. 5(a). The CFP is designed to co-integrate sensing and processing of the first layer of QNNs targeting low-power and coarse-grained classification. To enable this, the conventional pixel unit is upgraded to a compute pixel (CP). The Ri (row) signal is controlled by the row Ctrl and shared across pixels located in the same row to enable access during the row-wise sensing mode. The core part of MR-PIPA is the CP unit consisting of a pixel connected to v NVM elements, as shown in Fig. 5(b). A sense bit-line (SBL) is shared across pixels on the same column connected to the sensor I/O for the sensing mode. Moreover, CPs share v compute bit-lines (CBLs), each connected to a sense amplifier for processing, as indicated by the purple line in Fig. 5(a). The first-layer weight corresponding to each pixel is prestored into the RRAM conductance, and an efficient coarse-grained MAC operation is then accomplished in a voltage-controlled crossbar fashion. Fig. 6(a) depicts a sample MLP, wherein CP1,1-CPm,n are linked to out1 via NVM1's weight. Similarly, every pixel is connected to out2-outv. To maximize MAC computation throughput and fully leverage MR-PIPA's parallelism, we propose a hardware mapping scheme and a connection configuration between CP elements and corresponding NVM add-ons, shown in Fig. 6(b), to implement the target NN.
B. PIXEL DESIGN
1) BASIC PIXEL STRUCTURE
A basic three-transistor (3T) pixel structure is depicted in Fig. 7(a) [21]. It comprises a photodiode (PD) as the primary sensing component, a reset transistor, a source-follower transistor, and a transfer transistor. The PD is a semiconductor sensor that generates a photo-current (IPH) proportional to the brightness of the incident light, that is, the number of photons. A simplified equivalent circuit of the PD is shown in Fig. 7(a) [22]. During exposure, the PD functions as a leaky capacitance, with a leakage rate proportional to the illumination [23]. The photo-current IPH generated by the PD can be calculated from the active PD area (APD), responsivity (R), and input irradiance (Ein) as IPH = APD × R × Ein. As shown in Fig. 7(b), during the bright illumination phase, the capacitor discharges faster and decreases the voltage across the PD more quickly. During low illumination, IPH is low, which results in a low voltage drop across the PD. The source-follower (SF) operates as a voltage buffer between the sensing element (PD) and the readout, replicating the voltage for readout.
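As a quick numeric check of this relation, the sketch below evaluates IPH for assumed area, responsivity, and irradiance values; they are illustrative choices made only to land near the 10-nA (bright) and 0.1-nA (dark) photo-currents used later in the simulations.

```python
A_PD = 4e-12   # photodiode area [m^2] (assumed, ~2 um x 2 um)
R = 0.25       # responsivity [A/W] (assumed)

for label, E_in in [("bright", 1.0e4), ("dark", 1.0e2)]:  # irradiance [W/m^2]
    I_PH = A_PD * R * E_in
    print(f"{label}: I_PH = {I_PH:.1e} A")  # -> 1.0e-08 A and 1.0e-10 A
```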
2) COMPUTE ADD-ON
The compute add-on structure depicted in Fig. 5(b) consists of two functional blocks: 1) an input encoder and 2) 1T1R cells. The input encoder converts the output of the basic pixel circuit into the input of the 1T1R cell. The 1T1R cell (part of the 1T1R array) acts as an analog multiplier unit for the column-wise MAC operation. The input encoder unit consists of four transistors, of which T4 and T5 are logic transistors with an operating voltage of 1.2 V, while T6 and T7 are thick-oxide 1.8-V transistors. The 1T1R devices are integrated with thick gate-oxide transistors T8 and T9. These transistors' maximum operating voltage is 3.3 V, allowing them to withstand the high forming and programming voltages of the RRAM cells. The proposed design follows three critical considerations, as follows.
a: LOCATION OF RRAM DEVICES
The thin-oxide transistors require a smaller area and are suitable for low-power applications, as they have a low safe operating voltage, for example, 1.2 V. On the other hand, the thick-oxide transistors can withstand large operating voltages, for example, 3.3 V, but suffer from higher power and area consumption. Hence, to reduce power and area, the pixel circuit is typically designed using low-operating-voltage thin-oxide transistors. However, RRAM devices require high forming and programming voltages (∼3.3 V). If the RRAM devices were connected directly across the PD or the pixel circuit transistors, the forming/programming voltages would far exceed the transistors' operating voltages [Fig. 8(a)], which could damage the low-voltage devices. MR-PIPA separates the pixel sensing and computing modules by transferring the signal from the pixel circuit through input encoders to the gates of the thick-oxide transistors, as shown in Fig. 5(b). We then use thick-oxide transistors with the RRAM, allowing it to be formed or programmed at the required higher voltage.
b: COMPUTE ADD-ON OUTPUT
The next subtle but critical consideration concerns the input encoding for RRAM cells, that is, converting the PD voltage into an input for the RRAM cells. In standard RRAM-based matrix multiplication, for a binary input x, which can be either 0 or 1, each RRAM cell current can be expressed as I = x · (VR/R) [24]. Here, VR is the voltage applied across the device, and R is the RRAM resistance of the cell. It follows from this equation that, for a zero input, the RRAM-based compute unit should ideally produce zero current. With an improper input encoder for the PIP circuit, the RRAM cell can produce a nonzero cell current IRR even when the input is 0 [Fig. 8(b)]. In our design, we follow the conventional RRAM-based in-memory crossbar operation for NN inference, as shown in Fig. 8(b).
c: FILL-FACTOR
The pixel is fabricated on silicon for hardware deployment using the CMOS fabrication process. Typically, for imaging applications, a larger sensing area is preferred. The ratio of the PD sensing area to the total pixel area is defined as the fill factor, so it is optimal to increase the PD area, which increases the fill factor. However, depending on the application and the add-on pixel capability, such as in-pixel digital processing, a fill-factor-versus-feature tradeoff is chosen. Since the RRAM devices are fabricated in the back end of line, no large silicon area is consumed, as shown in Fig. 8(c). Although the fill factor is unaffected by the RRAM itself, its access transistors can affect the fill factor.
C. OPERATIONAL MODES
To initialize MR-PIPA, the proposed pixel circuit must undergo forming and programming of the RRAM devices for weight storage. The filament [Fig. 3(a)] required for resistive switching can be formed by applying VR = 3.3 V across the RRAM once. Forming is performed by turning on transistor T1, which results in the input encoder output being VIG. As the input encoder is followed by the RRAM cells, VIG is applied to the gates of T8 and T9, integrated in series with the RRAM [Fig. 9(b)]. For multilevel programming, different 1T1R gate voltages from 1 to 1.8 V are required [Fig. 4(a)]; this is achieved with a similar approach by applying different VIG values (1-1.8 V) [Fig. 9(c)]. As we utilize a bipolar RRAM, opposite-polarity voltages are required for the set and reset operations, as shown in Fig. 3(a). This is accomplished by applying positive voltages across opposite electrodes of the RRAM, as shown in Fig. 9(c).
In the sensing mode, by initially setting Rst = "high," the reverse-biased PD is charged to VDDL = 1.2 V [Fig. 7(a) and (b)] [21]. Turning on the access transistor T3 and the k1 switch at the shared ADC [Fig. 5(c)] then allows the C1 capacitor to fully charge through the SBL. After turning off T1, the PD generates an IPH based on the external light intensity, which leads to a voltage drop (VPD) at the gate of T2. By turning on T3 again, this time with the k2 switch, C2 is selected to record the voltage drop. Therefore, the voltage values before and after the image light exposure, that is, V1 and V2 in Fig. 5(c), are sampled. The difference between the two voltages is sensed with an amplifier and is proportional to the drop in VPD. In other words, the voltage at the cathode of the PD can be read at the pixel output.
During the object-detection mode, we leverage the efficient crossbar MAC with the 1T1R array. As RRAM cells store data as resistive states, the resultant cell current is IRR = VR/R, where VR is the voltage applied across the cell [see Fig. 5(b)]. The voltage applied across the 1T1R cell, also known as the read voltage VR, is chosen as low as 0.2 V such that it does not alter the programmed state of the device (e.g., the voltage required to set or reset the device is ≥0.7 V). Here, the T8/T9 transistor gate voltage is controlled by the output of the input encoder [Fig. 5(b)]. If the T8/T9 gate voltage is larger than the threshold voltage (0.7 V), the transistor allows the current to pass through; as a result, the cell current is IRR = VR/R = VR · G, where R is one of the four resistive states representing the stored weight, and G represents the conductance of the cell. If the T8/T9 transistor gate voltage is 0, the transistor blocks the current, resulting in no cell current (IRR = 0 A).
As discussed previously, under high illumination the voltage across the PD, VPD, is low, and vice versa [Fig. 7(b)]. The proposed input encoder converts VPD such that the output is logic "1" during low illumination (dark pixel) and logic "0" during high illumination (bright pixel). The first inverter (T4 and T5) of the input encoder operates at 1.2 V and converts VPD to a 0- or 1.2-V output for the second inverter. The second inverter consists of thick-oxide 1.8-V transistors (T6 and T7), which allow the 0-1.8-V gate voltage needed for multilevel programming [Fig. 9(c)]. As the threshold voltage of the T6 and T7 transistors is below 0.7 V, the output of the second inverter is (VIG, 0 V) [Fig. 5(b)]. The resultant output of the input encoder is thus VIG for low/dark illumination and 0 V for high/bright illumination. Accordingly, the resultant cell currents for low and high illumination are IRR = VR/R and 0, respectively. Then, to combine and quantify the currents from both the positive- and negative-weight connections, we constructed a differential amplifier [Fig. 5(d)]. The input currents into the operational amplifier in each column pair come from two columns holding the positive and the negative weights [Fig. 5(a)]. Each column current is the summation of the currents from the 1T1R cells; for example, the positive-weight current for the jth column can be described as $\sum_{i=1}^{M} V_R \cdot G^{+}_{i,j}$. The resultant output voltage of the operational amplifier will be proportional to $\sum_{i=1}^{M} V_R (G^{+}_{i,j} - G^{-}_{i,j})$, where $G^{+}_{i,j}$ and $G^{-}_{i,j}$ are the conductances of the RRAM cells indexed by i and j storing the positive and negative weights, respectively. From a programmer's standpoint, MR-PIPA is a third-party accelerator rather than a memory unit. Thus, for general-purpose parallel execution, an instruction set architecture (ISA) and a virtual machine will be needed. With these, any user-level program can be translated at install time into MR-PIPA's hardware instruction set to support MAC.
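A compact NumPy sketch of this differential crossbar MAC is given below. The four conductance levels correspond to the 80-200 kΩ HRS window reported in Section III, while the input pattern and weight assignment are random placeholders rather than trained values.

```python
import numpy as np

V_R = 0.2                                               # read voltage [V]
G_levels = 1.0 / np.array([80e3, 120e3, 160e3, 200e3])  # 4 HRS levels [S]

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=64)        # encoder outputs: 1 = dark, 0 = bright
G_pos = rng.choice(G_levels, size=64)  # conductances, positive-weight column
G_neg = rng.choice(G_levels, size=64)  # conductances, negative-weight column

# Per Ohm's law, each active row contributes I = V_R * G; the differential
# amplifier output is proportional to the column-current difference.
I_pos = np.sum(x * V_R * G_pos)
I_neg = np.sum(x * V_R * G_neg)
print(f"differential MAC current: {I_pos - I_neg:+.3e} A")
```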
A. FRAMEWORK AND METHODOLOGY
To assess the performance of the proposed design, we developed a simulation framework from scratch consisting of three main components, as shown in Fig. 10. First, at the device level, we fabricated the proposed RRAM device and extracted the switching data and resistance ranges experimentally. Second, at the circuit level, we fully implemented MR-PIPA with its peripheral circuitry in the IBM 65-nm CMOS10LPe PDK in Cadence to obtain the performance parameters. We trained a PyTorch QNN model inspired by [4], extracting the first-layer weights. MR-PIPA's RRAM elements are then programmed at the circuit level with the quantized 2-bit weights. Third, after the first-layer computation, the results are recorded and fed into a behavioral-level in-house simulator to simulate the whole network at the architecture level and extract the performance parameters and inference accuracy.
B. DEVICE-TO-CIRCUIT LEVEL RESULTS
The proposed CP was designed at a 65-nm process node. The pixel's PD was simulated as a parallel capacitor, and the photocurrent represented the illumination. The capacitance value (13 fF) was calculated from the doping concentration of the 65-nm CMOS process and the PD area (Section IV-B). As operating points, the lowest case of high illumination (bright pixel) was taken as ∼13 klux, and the highest case of low illumination (dark pixel) as ∼130 lux. The resultant photocurrents I_PH used for the simulations are 10 and 0.1 nA for high and low illumination, respectively.
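A back-of-the-envelope check of these operating points follows, assuming the PD node is reset to 1.2 V (the reset level is our assumption) and discharged linearly by I_PH until the 0.8 µs read instant quoted in the next paragraph.

```python
C_PD = 13e-15     # F, photodiode node capacitance quoted in the text
V_RESET = 1.2     # V, assumed pixel reset level
T_READ = 0.8e-6   # s, read instant used in the circuit simulations

for i_ph, label in [(10e-9, "high illumination (~13 klux)"),
                    (0.1e-9, "low illumination (~130 lux)")]:
    v_pd = max(V_RESET - i_ph * T_READ / C_PD, 0.0)  # linear discharge dV = I*t/C
    print(f"{label}: V_PD ~ {v_pd:.2f} V at read time")
# Bright pixels land near 0.58 V (below V_DDL/2 = 0.6 V) while dark pixels stay
# near 1.19 V, which is what lets the inverter-based encoder resolve them.
```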
We simulated both high and low illumination with 10% variation and observed the response at different points of the circuit. First, the voltage response across the PD shows the expected large and small voltage drops over time, respectively [Fig. 11(a) and (e)]. It also confirms that the add-on compute unit does not affect the pixel sensing operation. Fig. 11(b) and (f) shows the input encoder output. As the proposed input encoders are inverters, they switch to the rail voltages 0 and 1.2 V during MAC operation. The switching from 1.2 to 0 V occurs before the "read" operation (at 0.8 × 10⁻⁶ s). We observe that the proposed design is immune to variation in I_PH, as any V_PD during high or low illumination is converted rail-to-rail to 0 or 1.2 V [Fig. 11(f)]. As the input encoder output drives the thick-oxide transistor integrated with the RRAM, the 0 and 1.2-V levels fall below and above the transistor's threshold voltage of 0.7 V. As a result, no current flows through the RRAM cell for a 0-V input encoder output, i.e., during high illumination. On the other hand, during low illumination the input encoder output becomes 1.2 V, and the cell output current according to Ohm's law is I_RR = V_R/R. It is noteworthy that the RRAM cell output current [Fig. 11(b) and (f)] is independent of variation in I_PH. This immunity is a result of using inverters for input encoding: for an analog input above or below V_DDL/2, the output of an inverter is 0 V or V_DDL, respectively. Fig. 11(d) and (h) shows that when both I_PH and RRAM resistance variation are present, the output RRAM cell current depends only on the RRAM resistance variation. The RRAM cell current for the four different resistance levels is shown in Fig. 12. Even with variations considered, the cell currents remain distinguishable for the different stored resistances/weights.
C. CIRCUIT-TO-ARCHITECTURE LEVEL RESULTS
We limited the weight precision to four resistance levels. This can readily be used to map and accelerate binary, ternary, and quaternary NNs. Table 1 compares the structural and performance parameters of selected PIP and sensor designs from the literature. As the different designs were developed for specific domains, for an impartial comparison we estimated and normalized the power consumption when all units execute the similar task of processing the first layer of CNNs. Our cross-layer simulation results show that MR-PIPA achieves a frame rate of 1000 frames/s. This comes from the massively parallel CPs. However, the design in [6] achieves the highest frame rate, and the design in [2] achieves the smallest pixel size enabling in-sensor computing. As for the area, our simulation results reported in Table 1 show that the proposed MR-PIPA compute-pixel occupies ∼6 × 6 µm² in 65 nm. As we do not have access to the other layouts' configurations, it is almost impossible to make a fair comparison between area overheads. However, we believe that a rough assessment can be made by comparing the number of transistors in previous SRAM-based designs and MR-PIPA's lower-overhead compute add-on. We reimplemented MACSen [7] at the circuit level as the only CNN accelerator developed with the same purpose. Our evaluation showed that MR-PIPA consumes ∼74% less power than MACSen when performing the same task. Compared to [6], MR-PIPA substantially reduces data conversion and transmission energy, by ∼84%. While Table 1 focuses on various PIS architectures (close-to-pixel computation) primarily supporting CNNs in the binary domain, a recent architecture [26] presents a systolic neural CPU fusing the operation of a traditional CPU and a systolic CNN accelerator. It converts 10 CPU cores into an 8-bit systolic CNN accelerator, showing comparable performance (1.82 TOPS/W at 65 nm versus 1.89 TOPS/W at 65 nm in MR-PIPA) but providing higher flexibility and bit-width (up to 8 bit). Putting everything together, MR-PIPA offers: 1) a low-overhead, dual-mode, and reconfigurable design that keeps the sensing performance and adds a processing mode to remarkably reduce the power consumption of data conversion and transmission; 2) a single-cycle in-sensor processing mechanism to improve image-processing speed; 3) a highly parallel in-sensor processing design to achieve ultrahigh throughput; and 4) exploitation of NVM, which reduces standby power consumption during idle time and offers instant wake-up and resilience to power failure.
D. ACCURACY
An image classification task is selected to demonstrate the benefits of the MR-PIPA design. In the original BWNN topology, all the layers except the first and last were implemented with quantized weights [27]. However, in these tasks the number of input channels is much lower than the number of channels in the internal layers, so the parameters and computations required by the first layer are small, and converting the input layer to quantized weights is not a significant issue [27]. Therefore, in almost all previously developed 3T- and 4T-pixel PIP designs, the first layer is implemented with quantized weights, realizing a BWNN [7]. A conventional NN accelerator can then be used to accelerate the remaining layers after the first layer has been computed.
d: DATASETS
We conducted experiments on several datasets, including the Modified National Institute of Standards and Technology (MNIST) database [28], Fashion-MNIST [29], the MIT CBCL face database (MCFD) [30], and street view house numbers (SVHN) [31]. MNIST is a gray-scale dataset that contains 70 000 28 × 28 images of handwritten digits from 0 to 9, with 60 000 images for training and 10 000 images for testing. Similar to MNIST, Fashion-MNIST consists of 28 × 28 gray-scale images, with 60 000 training and 10 000 testing images spanning ten fashion categories. The MCFD face recognition database contains face images of ten subjects, where each image is normalized to 20 × 20 pixels. Training data consist of 6977 images, while testing data consist of 24 045 images. Finally, we also exploit SVHN, with 73 257 training digits, 26 032 testing digits, and 531 131 additional digits available as extra training data. The images are preprocessed to 20 × 20 from the original 32 × 32 cropped version and fed to the model.
e: NN ARCHITECTURE
In order to evaluate our design and perform a fair comparison, we developed two networks: a two-layer MLP and a CNN with three convolutional and three FC layers (the FC layers are equivalently implemented as convolutional layers). Herein, the first layer is computed at the device level, and its outputs are then fed into the second layer of the algorithm, which is implemented in Python. The comparison of classification accuracy is summarized in Table 2. The results show that higher accuracy can be achieved using our MR-PIPA architecture, which can handle four analog values (2-bit quantized) rather than two (1-bit).
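A hypothetical PyTorch sketch of this evaluation split follows: a 2-bit-quantized first convolution stands in for the in-pixel crossbar, and its outputs feed a host-side network. The layer sizes, thresholds, and quantizer are illustrative, not the paper's exact topology or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstLayerInPixel(nn.Module):
    """Stand-in for the in-pixel crossbar: binary inputs, 2-bit weights."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, bias=False)

    def forward(self, x):
        xb = (x < 0.5).float()  # input encoder: dark pixel -> logic 1
        w = torch.clamp(self.conv.weight, -1.0, 1.0)
        wq = torch.round((w + 1.0) * 1.5) / 1.5 - 1.0  # 4 levels in [-1, 1]
        return F.conv2d(xb, wq)

pixel_layer = FirstLayerInPixel()
host_net = nn.Sequential(nn.Flatten(), nn.Linear(16 * 26 * 26, 10))
logits = host_net(pixel_layer(torch.rand(1, 1, 28, 28)))
print(logits.shape)  # torch.Size([1, 10])
```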
VI. CONCLUSION
This work presents a PIP accelerator that intrinsically implements and supports a coarse-grained convolution operation in low-bit-width QNNs, leveraging a novel compute pixel with nonvolatile weight storage at the sensor side. We demonstrate four distinct high-resistance levels in order to decrease overall system power consumption. Our results demonstrate acceptable accuracy on various datasets, while MR-PIPA achieves a frame rate of 1000 frames/s and an efficiency of ∼1.89 TOPS/W.
Hepatosplanchnic circulation in cirrhosis and sepsis
Hepatosplanchnic circulation receives almost half of cardiac output and is essential to physiologic homeostasis. Liver cirrhosis is estimated to affect up to 1% of populations worldwide, including 1.5% to 3.3% of intensive care unit patients. Cirrhosis leads to hepatosplanchnic circulatory abnormalities and end-organ damage. Sepsis and cirrhosis result in similar circulatory changes and resultant multi-organ dysfunction. This review provides an overview of the hepatosplanchnic circulation in the healthy state and in cirrhosis, examines the signaling pathways that may play a role in the physiology of cirrhosis, discusses the physiology common to cirrhosis and sepsis, and reviews important issues in management.
INTRODUCTION
The prevalence of severe liver disease among intensive care unit (ICU) patients ranges from 1.35% to 3.3% [1-3] and is increasing worldwide. A study of 174 ICUs in the United Kingdom reported that the number of patients admitted to ICU with alcoholic liver disease tripled from 1995 to 2005 [1]. The ICU mortality for patients with liver disease is high, ranging from 36.6% to 73.6% [4-8], and the one-year mortality for ICU survivors is as high as 68% [6]. Liver disease exacerbates coexisting diseases.
Hepatic cirrhosis is associated with an increased risk for ICU-associated pneumonia, respiratory failure, and death [9,10] . In both the United Kingdom and the United States the proportion of sepsis in the setting of liver disease is increasing and patients with cirrhosis are more likely to die from sepsis [11,12] . The systemic effects of cirrhosis also increase the morbidity and mortality of surgery [13,14] . Given the high morbidity of severe liver disease in critical care, an appreciation of hepatosplanchnic physiology may help guide intensivists, particularly those who care for patients with sepsis. This review provides an overview of the hepatic and splanchnic circulatory anatomy, examines the factors that contribute to the circulatory changes of cirrhosis, reviews the pathophysiology common to cirrhosis and sepsis, and discusses clinical management in the ICU.
Epidemiology of cirrhosis in the ICU
Cirrhosis may be the result of infectious, autoimmune, vascular, hereditary, or toxic factors. In Europe and the United States it is primarily caused by either alcohol use or infection with hepatitis C virus, while in Asia and sub-Saharan Africa the most common cause is infection with hepatitis B virus [15] . Observational studies in the United Kingdom and France showed that the most common cause of cirrhosis among patients admitted to ICUs was alcoholic hepatitis (43%-78%) followed by viral hepatitis (10%-19%) [4,6,16] . Although the precise global prevalence is unknown because compensated disease can remain undetected for many years, up to 1% of populations worldwide may have histological cirrhosis [17] . In the United States the prevalence of cirrhosis is estimated at 0.15% [18] .
Hepatic circulation
The liver receives 20% of cardiac output [19]. Total liver blood flow is approximately 100 mL/min per 100 g liver tissue, or 800-1200 mL/min [20]. The liver has a dual blood supply with blood from the hepatic artery and portal vein, which together with the bile duct form the hepatic triad. The hepatic artery is a branch of the celiac artery, with a pressure similar to aortic pressure (mean 60-80 mmHg). It carries well-oxygenated blood to the liver, providing approximately 30% of hepatic blood flow. The valveless portal vein is a low-pressure/low-resistance system that provides partially deoxygenated blood from the intestinal bed to the liver, accounting for 70% of hepatic blood supply. Normal mean portal pressures range from 5-10 mmHg [21] (Figure 1). Oxygen delivery to hepatocytes does not depend on the proportion of portal versus arterial blood flow [22]. Animal models have demonstrated that normal hepatocyte oxygen supply is approximately 16 mL/min per 100 g liver tissue with an extraction ratio of 35% [22]. Oxygen extraction changes with variations in demand; as oxygen supply decreases the extraction can approach 100% [22]. The unique interaction between the hepatic artery and portal vein flow, termed the hepatic artery buffer response (HABR) [23], is essential to maintenance of hepatic blood flow. The hepatic artery flow increases in response to decreases in portal venous flow [24,25]. This relationship is unilateral; portal vein flow does not change in response to alterations of hepatic artery flow. The HABR is capable of offsetting a 25%-60% decrease in portal vein flow [26,27]. The HABR is the primary regulator of hepatic artery flow. Flow does not change in response to metabolic activity or blood oxygen content [28], and myogenic autoregulation plays a relatively small role [29]. The physiologic purpose of HABR is unclear, as hepatic oxygen supply exceeds demand and oxygen extraction can increase in response to metabolic changes or decreased blood supply. Further, the underlying physiology remains unclear. The role of nitric oxide synthase (NOS) has been investigated, but animal models have not shown a major contribution to the HABR [30]. The HABR is likely regulated by washout of adenosine, mediated through P1-purinoceptors [26,31,32].
Splanchnic circulation
The splanchnic vasculature, comprised of gastric, small intestinal, colonic, pancreatic, and splenic vessels arranged in parallel, receives approximately 25% of cardiac output at rest [33] and more during digestion. The major supplying arteries are the celiac, superior and inferior mesenteric. The capillary beds of this system form extensive anastomoses. Human studies of splanchnic blood flow are scarce because direct measurement of splanchnic vasculature is almost impossible without surgery. Most studies rely on indirect measurements and extrapolation from experimental models. Splanchnic blood flow is regulated by a combination of local and systemic factors including paracrine and endocrine signaling, vasoactive substances, and sympathetic innervation. Autonomic regulation is a weak contributor, although it is enhanced in the fed state compared to the starved state [34]. In a low-flow state, splanchnic blood flow decreases in order to maintain vital cardiac and cerebral blood supply [35]. This response occurs even after small-volume hemorrhage [36]. The splanchnic organs do not produce lactate early in low-flow states because oxygenation is preserved due to high baseline supply [35]. However, recovery of splanchnic flow is protracted even after adequate volume resuscitation [37]. The hepatosplanchnic vasculature's active response to systemic blood flow contributes to its role as a blood volume reservoir, and its anatomic position just distal to the inferior vena cava makes it a significant component of cardiovascular preload. As a capacitance vessel, it has been shown to pool 2.5% or mobilize up to 5%-6% of total blood volume in response to physiologic challenges [38-40].
Circulatory changes of cirrhosis
Liver cirrhosis is the end-stage of chronic liver disease characterized by replacement of hepatic tissue with fibrosis and regenerative nodules (structurally abnormal areas of attempted tissue repair), and impaired liver function. The altered hepatic architecture in cirrhosis leads to circulatory abnormalities, namely portal hypertension, splanchnic vasodilation, and hyperdynamic circulation.
Portal hypertension
Portal hypertension is a pathognomonic feature of liver cirrhosis, defined as an increase in the hepatic venous pressure gradient (an indirect reflection of the portocaval gradient in patients with cirrhosis) of more than 10 mmHg [41]. Portal hypertension can also be diagnosed ultrasonographically: hepatic vein pulsatility flattens from triphasic to monophasic secondary to histologic reductions in hepatic vein compliance. This is accompanied by a decrease in portal vein flow and an increase in the hepatic artery pulsatility index, due to the HABR [42]. Portal hypertension can be diagnosed clinically by the presence of esophageal varices, patency of the umbilical vein, and the presence of portocaval shunts (e.g., splenorenal shunts).
The development of portal hypertension is multifactorial. Hepatic fibrosis plays a role by disrupting hepatocyte architecture and increasing resistance to bloodflow. Hepatic sinusoidal pressure is negatively correlated with the percentage of un-fibrosed portal spaces, or "residual portal spaces" [43]. Hepatic fibrosis is largely caused by hepatic stellate cell injury. When injured by toxins (e.g., alcohol, hepatitis virus, infection, acetaminophen) or exposed to platelet activating factor (PAF), hepatic stellate cells transform into myofibroblast-like cells, releasing collagen Ⅰ and Ⅲ [44]. Other cell types implicated in fibrosis include myofibroblasts derived from portal vessels [45] and hematopoietic stem cells [46]. Fibrosis is also stimulated by inflammatory cytokines and vasoactive molecules, including chemotactic protein 1, transforming growth factor-β1, nitric oxide, endothelin-1, and angiotensin Ⅱ [47,48]. These mediators are increased in liver disease and can further upregulate their own release, thereby accelerating an inflammatory cycle.
Circulatory system changes may also contribute to the development of portal hypertension. Direct intraoperative measurements have demonstrated that, in cirrhosis, a basal HABR is continuously activated but the acute HABR is impaired [49] . While recent data suggests that angiotensin Ⅱ is a primary mediator of the progression from hepatic inflammation to fibrosis [48] , the entire renin-angiotensin-aldosterone system (RAAS) may also play a role [50] .
Splanchnic vasodilation
Splanchnic arteriolar vasodilation with hyperdynamic flow has been demonstrated in liver disease by observation of shortened albumin transit times through the splanchnic circulation [51], increased splenic and mesenteric blood flow [52], and decreased measured superior mesenteric artery impedance [53]. Decreased mesenteric artery impedance begins early in liver disease and worsens with the progression to cirrhosis [53]. Splanchnic vasodilation is multifactorial and not completely understood. The pathogenesis is partly explained by increased resistance to portal outflow, but activation of other mediators including the RAAS, nitric oxide, PAF, vasopressin, and inflammatory molecules likely plays a role.
Early in liver disease, total blood volume increases but is largely sequestered in the splanchnic vascular bed, leading to "splanchnic steal" and systemic hypovolemia [54-56]. Animal models have shown that this occurs before the development of portal hypertension or splanchnic vasodilation [57]. Splanchnic steal is likely mediated by the RAAS [58], a hormone cascade which leads to volume loading through modulation of renal sodium retention. Recently an alternate RAAS pathway, angiotensin-converting-enzyme-2 (ACE-2), has also been investigated for its role in liver disease. ACE-2 levels are upregulated in cirrhosis, and expression is directly related to hepatocyte hypoxia [59,60]. The ACE-2 system acts downstream at the Mas receptor, which vasodilates splanchnic vessels. In cirrhosis, blockade of this receptor reduces portal pressure [60]. Nitric oxide (NO) is an endothelial-derived relaxing factor. Cirrhotic patients not only have increased expression of NO, but also show increased sensitivity to NO-mediated vasodilation [61]. NO causes vasodilation by stimulating soluble guanylate cyclase to generate cyclic guanosine monophosphate in vascular smooth muscle [62]. It also decreases vascular response to vasoconstrictors [63]. In animal models this vasoplegia is completely reversed by removal of the endothelium [64]. The constitutively expressed endothelial isoform of NOS has been implicated as a major contributor to splanchnic vascular overexpression of NO, and its activity precedes splanchnic vasodilatation in rats [65]. Neuronal NOS is also upregulated in experimental models of cirrhosis [66,67]. In addition to nitric oxide, other vasodilators suggested to play a role in splanchnic dilation include carbon monoxide [68], plasma calcitonin gene-related peptide [69], eicosanoids, bile salts, adenosine, and substance P [41]. PAF is a pro-inflammatory molecule that affects platelet aggregation, vascular permeability, and vascular tone. Hepatic concentrations of PAF are increased in cirrhosis [70]. The effect of PAF on vascular tone is regional, and exogenous PAF increases portal pressure but decreases systemic arterial blood pressure [71].
PAF is also neoangiogenic, and may play a role in the development of the arteriovenous and portocaval shunts common to cirrhosis. Endothelin is a paracrine vasoconstrictor, released by vascular endothelial cells, which is increased in cirrhosis [72]. It stimulates hepatic sinusoidal macrophages to produce PAF [73], particularly in the setting of cirrhosis [74]. There are at least four endothelin receptors (ETA, ETB1, ETB2, ETC). Endothelin-1-mediated vasoconstriction occurs through activation of the ETA receptor, while the ETB1 receptor stimulates the release of nitric oxide [75]. Endothelin-1 also stimulates catecholamine release [76,77], which may contribute to the elevated levels seen in cirrhosis. In vitro models have shown a direct relationship between ETA and ETB expression and portal pressure [78]. It is unclear if endothelin is increased in cirrhosis as a consequence of, or a pathogenic response to, splanchnic dilation.
Vasopressin is a neurohypophyseal hormone that regulates plasma osmolality and increases vascular resistance in vasodilated states. Cirrhotic patients are vasopressin deficient, but respond to exogenous vasopressin (and its analogues terlipressin or ornipressin) with increased blood pressure [79]. It is unclear if vasopressin deficiency precedes or causes splanchnic dilation.
Observational echocardiographic studies of cirrhotic patients have noted normal baseline cardiac contractility but an attenuated stress response, with disturbances of left diastolic function [88,89]. This dysfunction is termed cirrhotic cardiomyopathy. Cirrhotic cardiomyopathy is distinct from alcoholic cardiomyopathy, which is characterized by reduced left ventricular contractility at baseline [90]. Overt heart failure is rare in cirrhotic cardiomyopathy. The splanchnic sequestration of blood volume reduces the cardiac workload and disguises the symptoms of heart failure; these can be unmasked by physical or pharmacologic stress (e.g., surgery). The pathophysiology of cirrhotic cardiomyopathy is not completely understood, but endocannabinoid-receptor antagonists have improved cardiac contractility in animal models, suggesting a role for endocannabinoids in the pathogenesis of cardiac dysfunction [91].
Liver disease increases the susceptibility to sepsis, but sepsis also aggravates liver disease. In animal models of sepsis, portal vein and overall hepatic flow decreases, and angiotensin Ⅱ has been implicated [107,108]. Up to one third of patients with late septic shock have depleted vasopressin levels [109], resulting in hypotension, vasoplegia, and catecholamine resistance. These circulatory changes may affect hepatic blood flow and function. In septic patients with no previous history of liver disease, postmortem histopathologic hepatic changes were found, including portal inflammation, centrilobular necrosis, and hepatocellular apoptosis [110]. Human studies of hepatosplanchnic flow in sepsis remain scarce, and it is important to note that animal studies do not always include volume-resuscitated arms, which would increase their clinical relevance.
Critical care considerations
Patients with cirrhosis may be admitted to the ICU with decompensated disease, after surgery, or with infection and sepsis. Although the Child-Turcotte-Pugh score [111,112] has traditionally been used for risk assessment, the Model for End Stage Liver Disease (MELD) score [113] is now commonly used to assess liver disease and rank-list patients for liver transplantation. While the MELD score is an excellent tool for predicting short-term mortality amongst cirrhotic patients awaiting liver transplantation [114] , data regarding its predictive power for mortality in hospitalized cirrhotic patients has been inconsistent. Teh et al [115] retrospectively demonstrated increased mortality in postoperative cirrhotic patients with MELD greater than 20, while Oberkofler et al [116] found no mortality prediction in a cohort of liver transplant recipients.
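For orientation, the classic (pre-2016) MELD formula combines three laboratory values on a logarithmic scale; the sketch below uses the commonly cited UNOS clamping conventions and is illustrative only, not a clinical tool.

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Classic MELD score (UNOS conventions, pre-2016 variant)."""
    # Lower-bound each input at 1.0 so the logarithms stay non-negative.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    # Creatinine is capped at 4.0; dialysis is scored as creatinine 4.0.
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(cr) + 6.43)
    return min(round(score), 40)   # reported as an integer, capped at 40

print(meld(2.5, 1.8, 1.3))  # e.g., a moderately decompensated patient -> 19
```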
ICU scoring systems (e.g., Sequential Organ Failure Assessment (SOFA), Simplified Acute Physiology Score Ⅱ) have demonstrated superior mortality prediction in cirrhotic patients in the ICU [117,118] . Recently, two new scores have been developed for mortality prediction: a modified SOFA score for Chronic Liver Failure (CLIF-SOFA) and the Royal Free Hospital Score [119,120] . Single biomarkers have also shown prognostic value. In developing the CLIF-SOFA score, Moreau et al [119] demonstrated that leukocyte count was independently associated with acute-on-chronic liver failure and associated 28-d mortality. Furthermore, in an effort to identify patients at risk for imminent decompensation, López-Velázquez et al [121] found that bilirubin concentration alone was an independent predictor of 7-d mortality.
Beyond scoring systems, multiorgan dysfunction in cirrhosis has been correlated with hospital mortality: a prospective study of ICU patients with cirrhosis found that coma and acute renal failure were independent predictors of mortality [8] . While organ dysfunction is reflected in scoring systems, these findings highlight the importance of assessing patients for clinical markers of dysfunction other than those included in scores. Recently a novel method of transient elastography has been used to measure liver stiffness, a metric associated with hepatic fibrosis. In a prospective study of ICU patients, liver stiffness was highest in patients with decompensated cirrhosis (compared to other critical illnesses or comorbidities), and was associated with increased ICU-and postdischarge-mortality [122] . Transient elastography may serve as a useful triage tool for critically ill patients with liver disease.
As noted, the circulatory abnormalities of cirrhosis predispose patients to multiorgan dysfunction including heart failure, renal dysfunction, and hemodynamic instability. Monitoring to predict or prevent this morbidity has not been identified, nor has the optimal treatment regimen. Notably, a prospective study of ICU patients with cirrhosis demonstrated 100% mortality for those with pulmonary artery catheters, 84% mortality for patients requiring mechanical ventilation, and 89% mortality for those requiring renal replacement therapy [8] . These mortality rates likely reflect a high severity of disease rather than adverse effects of the monitors themselves. Studies are needed to determine the most appropriate monitoring and interventions for ICU patients with cirrhosis.
Given the morbidity and mortality attributable to sepsis for cirrhotic patients in the ICU, intensivists should maintain a high index of suspicion for infection. Early prophylactic antibiotics for patients with cirrhosis may reduce the incidence of bacterial translocation, sepsis, and variceal hemorrhage [123] . Studies focused on immune system function and inflammatory mediators may clarify the pathophysiology common to cirrhosis and sepsis, and suggest novel therapeutic interventions.
CONCLUSION
The hepatosplanchnic circulatory system is the largest blood reservoir in the human body and is essential to multiple aspects of homeostasis, including nutrient absorption, endocrine function, and toxin metabolism. Pathologic splanchnic vasodilation in cirrhosis leads to hyperdynamic circulation and blunting of the HABR. These alterations contribute to systemic disease and perioperative mortality, and resemble pathophysiologic changes seen in sepsis. Cirrhosis increases the risk of developing sepsis, and sepsis may exacerbate cirrhosis. A better comprehension of circulatory changes in cirrhosis may lead to therapeutic modalities that improve intensive care management.
A novel mutation in XLRS1 gene in X-linked juvenile retinoschisis
Introduction
X-linked juvenile retinoschisis (XLRS) of hereditary type (OMIM #312700) is one of the most common causes of macular degeneration and is characterized by bilateral vitreoretinal dystrophy. XLRS is caused by a mutation in the retinoschisin 1 (RS1) gene [1]. Its incidence rate in males is 1:5,000 to 1:25,000 [1]. The RS1 gene is located at Xp22.13 and encodes retinoschisin, a protein secreted from the photoreceptor and bipolar cells of the retina that is involved in the development and maintenance of the retina [2].
Most patients with XLRS suffer from hyperopic astigmatism and decreased visual acuity from the first decade of life [2]. Diagnosis of XLRS is based on fundus abnormalities, electrophysiological findings, and a family history consistent with X-linked inheritance, together with identification of an RS1 mutation [3]. Several Korean patients carrying RS1 mutations have been reported [4-7]. In this report, we describe a patient with an exon 1 deletion of the RS1 gene. Given the X-linked inheritance pattern, validation of the carrier state of a patient's mother is important for the genetic counseling of other family members and for future reproductive planning. An exonic deletion in a carrier is not easy to identify when an appropriate commercial kit is not available. Here, we performed multiplex ligation-dependent probe amplification (MLPA) analysis using peripheral leukocytes and confirmed the carrier state of the patient's mother.
Case
The patient was the first baby of non-consanguineous Korean parents. He was born at 41 weeks of gestation, weighing 3,350 g. Pregnancy, labor, and spontaneous vaginal delivery were uneventful. His growth and development had been unremarkable. At the age of 2 years, he hit his right eyebrow on the edge of a table. One year later, at the age of 3 years, abnormal ocular focus and gaze were noted. Fundus examination at a local ophthalmologic clinic revealed vitreous hemorrhage in both eyes, which was thought to have developed due to the previous trauma. One month later, he was referred to Asan Medical Center for the persistent vitreous hemorrhage. Detailed fundus examination revealed bilateral vitreous hemorrhages, inferior ghost vessels, and retinoschises. Examination with the RetCam wide-field digital imaging system (Natus Medical Inc., Pleasanton, CA, USA) and fluoroscopy showed bilateral vitreous opacities and retinoschises (Fig. 1). Bilateral peripheral retinoschises were also noted, and the right side was more severe than the left on optical coherence tomography (Fig. 2).
Informed consent was obtained from his parents, and a blood sample was collected from the patient. Genomic DNA was extracted from peripheral blood leukocytes using a Puregene DNA isolation kit (Gentra, Minneapolis, MN, USA). All six coding exons and the exon-intron boundaries of the RS1 gene were directly sequenced on an ABI 3130 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) using a BigDye Terminator cycle sequencing kit (Applied Biosystems) in the patient's peripheral leukocytes. Polymerase chain reaction (PCR) amplification was not possible for exon 1, which suggested an exon 1 deletion in this patient. For the genetic counseling of his family members, it was important to validate the carrier state of his mother. An MLPA kit was not commercially available, but the SALSA Reference Kit, probe X519-A1 RS1, was kindly provided by MRC Holland, Amsterdam, the Netherlands. The MLPA was performed according to the manufacturer's instructions. Amplified products were separated using the ABI 3130 Genetic Analyzer and analyzed with GeneMapper Software (Applied Biosystems). A complete exon 1 deletion (c.1-?_52+?del) was identified in the patient (Fig. 3A). The patient's mother also had a heterozygous deletion of exon 1, confirming her carrier state (Fig. 3B).
Discussion
XLRS is a macular degeneration that mostly affects males early in their lives [8]. The most common characteristic features of XLRS are decreased visual acuity due to involvement of the foveal area and retinal splitting, and most patients are diagnosed before school age [9]. The hallmark of XLRS is the presence of a spoke-wheel pattern in the macula on high-magnification ophthalmoscopy [10]. Approximately 80% of patients have additional peripheral retinoschisis, which helps to differentiate XLRS from other diseases [8]. Our patient complained of bilateral poor vision at the age of 3 years. Fundus examination showed old floating vitreous hemorrhage and a flat posterior pole in both eyes. Detailed examination showed only peripheral retinoschisis in the lower part of the retina.
Mutations in RS1 are responsible for XLRS. This gene consists of 6 exons and encodes a 224-amino-acid, 24-kDa protein called retinoschisin. Retinoschisin is highly expressed in the retina, helps the inner surfaces of the retina adhere to each other, and is involved in the development and maintenance of the retina [10]. Therefore, RS1 gene mutations generate retinal tearing leading to vision problems [11]. In previous reports, the most common mutations in the RS1 gene have been missense mutations (75%). Nonsense mutations, small frameshifting insertions/deletions, and splice-site mutations account for the remaining 25%. Exonic deletions have been reported only rarely [10,12,13]. A total of 17 XLRS cases have been reported in Korean patients [8]. RS1 mutations were identified in 14 of these patients (12 missense mutations and 2 splice-site mutations), and 8 of the mutations were found in exon 6 [8]. MLPA analysis is a useful test for the detection of small exonic deletions. By its semi-quantitative nature, MLPA can reveal a heterozygous deletion, as in our patient's mother. Quantitative genomic PCR can also be considered as a test to identify a heterozygous exonic deletion, even though it gives less accurate results compared to MLPA analysis. Because knowing the genetic carrier status is important in establishing a future reproductive plan, we used the MLPA method and identified the carrier state of the patient's mother.
Fig. 3. Results of MLPA analysis using the SALSA Reference Kit, probe X519-A1 RS1 (MRC Holland, Amsterdam, the Netherlands). The peak ratio of exon 1 of the RS1 gene (bin size 111.6) on the X chromosome is 0% in the patient (arrow in A); therefore, the patient had an exon 1 deletion of RS1 on the X chromosome, and his mother was a heterozygous carrier.
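To make the semi-quantitative logic concrete, the sketch below shows how per-probe MLPA peak ratios are typically turned into dosage calls; the normalization scheme and thresholds are illustrative assumptions rather than the kit's validated analysis pipeline.

```python
def mlpa_dosage(sample_peaks, reference_peaks):
    """Normalize per-probe peak areas and classify copy-number dosage."""
    # Normalize each probe to the mean of the control probes in the same run.
    norm_s = {p: a / sample_peaks["control_mean"]
              for p, a in sample_peaks.items() if p != "control_mean"}
    norm_r = {p: a / reference_peaks["control_mean"]
              for p, a in reference_peaks.items() if p != "control_mean"}
    calls = {}
    for probe in norm_s:
        ratio = norm_s[probe] / norm_r[probe]   # dosage quotient vs reference
        if ratio < 0.25:
            calls[probe] = "deletion (hemizygous)"       # e.g., the patient
        elif ratio < 0.75:
            calls[probe] = "heterozygous deletion"       # e.g., the mother
        else:
            calls[probe] = "normal dosage"
    return calls

patient = {"RS1_exon1": 0.0, "RS1_exon2": 1.0, "control_mean": 1.0}
reference = {"RS1_exon1": 1.0, "RS1_exon2": 1.0, "control_mean": 1.0}
print(mlpa_dosage(patient, reference))
```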
Tau decays into two mesons: an overview
We review the state-of-the-art theoretical analyses of tau decays into a pair of mesons and a neutrino. The participant vector and scalar form factors, $f_{+}(s)$ and $f_{0}(s)$, are described in the frame of Chiral Perturbation Theory with resonances supplemented by dispersion relations, and the physical parameters of the intermediate resonances produced in the decay are extracted through the pole position of $f_{+,0}(s)$ in the complex plane. As a side result, we also determine the low-energy observables associated to the form factors. We hope our study to be of interest for present and future experimental analyses of these decays.
Introduction
The tau is the only lepton heavy enough ($m_\tau \sim 1.8$ GeV) to decay into hadrons. At the exclusive level, the hadronic partial width (∼65%) is the sum of the tau partial widths to strange (∼3%) and to non-strange (∼62%) hadronic final states, and provides an advantageous laboratory to investigate the non-perturbative regime of QCD under rather clean conditions; it is useful to understand the hadronization of QCD currents, to study form factors, and to extract resonance parameters. While the non-strange decays are largely dominated by the π−π0 mode which, in turn, constitutes the main decay channel of the τ with an absolute branching ratio of ∼25%, the strange hadronic final states are suppressed with respect to the non-strange ones mainly for the following two reasons: i) the mass of the strange quark is larger than the masses of the up and down quarks, yielding a phase-space suppression; ii) strange decays are Cabibbo suppressed, since the $|V_{us}|$ element of the CKM matrix enters the transition instead of $|V_{ud}|$. The dominant strangeness-changing τ decays are into Kπ meson systems, which add up to ∼42% of the strange spectral function. However, in order to increase the knowledge of the strange spectral function, the τ− → K−η(′)ντ decays are important.
In this letter, we provide a brief overview of the main results we have obtained in our series of dedicated analyses of two-meson tau decays based on the framework of Resonance Chiral Theory supplemented by dispersion relations, i.e., τ− → π−π0ντ and τ− → K−K_Sντ [1], τ− → K_Sπ−ντ and τ− → K−η(′)ντ [2,3], and τ− → π−η(′)ντ [4].
In this letter, we provide a brief overview of the main results we have obtained in our series of dedicated analyses of two meson tau decays based on the framework of Resonance Chiral Theory supplemented by dispersion relations i.e. τ − → π − π 0 ν τ and τ − → K − K S ν τ [1], τ − → K S π − ν τ and τ − → K − η ( ) ν τ [2,3], and τ − → π − η ( ) ν τ [4]. sponding amplitude can be expressed as an electroweak part times an hadronic matrix element where d = V * udd + V * uss . In Eq. (1), we have not considered the gauge boson propagator, since the explored energy region ( √ s < m τ ) is much lighter than the W ± mass (M W ± ∼ 80 GeV), but rather its expansion and used the well-known relation G F / The hadronic matrix element encodes the unknown QCD dynamics and it is given by where C P − P 0 are Clebsch-Gordon coefficients, p µ − and p µ 0 are the momenta of the charged and neutral pseudoscalars, respectively, q µ = (p − + p 0 ) µ is the momentum transfer and s = q 2 . In Eq.
In Eq. (2), $f_0^{P^-P^0}(s)$ corresponds to the S-wave projection of the state $\langle P^-P^0|$, while $f_+^{P^-P^0}(s)$ is the P-wave component; they are known as the scalar and vector form factors, respectively. Notice that the scalar contribution is suppressed by the mass-squared difference $\Delta_{P^-P^0} = m^2_{P^-} - m^2_{P^0}$. In terms of these form factors, the differential decay width reads

$\frac{d\Gamma\left(\tau^-\to P^-P^0\nu_\tau\right)}{ds} = \frac{G_F^2|V_{uD}|^2 m_\tau^3}{768\pi^3}\,S_{EW}\,C^2_{P^-P^0}\left(1-\frac{s}{m_\tau^2}\right)^{\!2}\left[\left(1+\frac{2s}{m_\tau^2}\right)\lambda^{3/2}_{P^-P^0}\,\big|f_+^{P^-P^0}(s)\big|^2 + 3\,\frac{\Delta^2_{P^-P^0}}{s^2}\,\lambda^{1/2}_{P^-P^0}\,\big|f_0^{P^-P^0}(s)\big|^2\right]\,, \qquad (3)$

where $\lambda_{P^-P^0} \equiv \lambda(s, m^2_{P^-}, m^2_{P^0})/s^2$ and $S_{EW}$ is a short-distance electroweak correction. Our initial approach to describe the required vector form factors assumes a Vector Meson Dominance form that includes both the real and imaginary parts of the unitary loop corrections, thus fulfilling analyticity and unitarity. One can then extract its phase $\phi^{P^-P^0}_{\rm input}(s)$ and insert it into a dispersion relation. The use of a thrice-subtracted dispersion relation,

$f_+^{P^-P^0}(s) = \exp\left[\alpha_1\,\frac{s}{m_\pi^2} + \frac{\alpha_2}{2}\,\frac{s^2}{m_\pi^4} + \frac{s^3}{\pi}\int_{s_{\rm th}}^{\infty} ds'\,\frac{\phi^{P^-P^0}_{\rm input}(s')}{(s')^3\,(s'-s-i\epsilon)}\right]\,, \qquad (4)$

where $\alpha_{1,2}$ are two subtraction constants that can be related to chiral low-energy observables and $s_{\rm th}$ is the corresponding two-particle production threshold, is found to be an optimal choice that makes the fit less sensitive to the higher-energy region of the dispersive integral, where the phase is less well known. In the isospin limit no scalar form factor contributes to τ− → π−π0ντ, while for the required Kπ and Kη(′) scalar form factors we use the results of Ref. [5].
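As a numerical illustration of Eq. (3), the following Python sketch integrates the differential width for the π−π0 mode with a toy Breit-Wigner standing in for the dispersive $f_+(s)$ of Refs. [1-3]; the resonance parameters and the neglect of the (Δ-suppressed) scalar term are simplifying assumptions.

```python
import numpy as np

GF, Vud, SEW, mtau = 1.16638e-5, 0.9737, 1.0201, 1.77686   # GeV-based units
mpi, mpi0 = 0.13957, 0.13498
C2 = 2.0                                                   # |C_{pi-pi0}|^2

def lam_norm(s):
    """Normalized Kallen function lambda(s, mpi^2, mpi0^2) / s^2."""
    return (1 - (mpi + mpi0) ** 2 / s) * (1 - (mpi - mpi0) ** 2 / s)

def f_plus(s):
    """Placeholder rho(770) Breit-Wigner, f_+(0) = 1 (not the dispersive FF)."""
    m, g = 0.77526, 0.1491
    return m * m / (m * m - s - 1j * m * g)

def dGamma_ds(s):
    pref = GF**2 * Vud**2 * SEW * mtau**3 * C2 / (768 * np.pi**3)
    vec = (1 + 2 * s / mtau**2) * lam_norm(s) ** 1.5 * abs(f_plus(s)) ** 2
    # Scalar piece omitted: it carries Delta^2_{pi-pi0} and vanishes for
    # pi-pi0 in the isospin limit, as stated in the text.
    return pref * (1 - s / mtau**2) ** 2 * vec

s = np.linspace((mpi + mpi0) ** 2 + 1e-6, mtau**2 - 1e-6, 5000)
print(f"Gamma ~ {np.trapz(dGamma_ds(s), s):.2e} GeV")  # few x 10^-13 GeV scale
```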
2.1 The pion vector form factor and τ− → K−K_Sντ
The pion vector form factor is a classic object in low-energy QCD, since it provides a privileged laboratory to study the effects of ππ interactions under rather clean conditions. In [1], we have exploited the synergy between Chiral Perturbation Theory and dispersion relations and provided a representation of the phase required as input in Eq. (4); schematically,

$\phi^{\pi\pi}_{\rm input}(s) = \begin{cases} \delta_1^1(s)\,, & 4m_\pi^2 \le s \lesssim 1\ {\rm GeV}^2\,,\\ \psi(s)\,, & 1\ {\rm GeV}^2 \lesssim s \le m_\tau^2\,,\\ \to\pi\,, & s \to \infty\,. \end{cases} \qquad (5)$

This phase has the following remarkable features: i) it fully exploits Watson's theorem, providing a model-independent description of the elastic region, i.e., up to ∼1 GeV², through the use of the ππ scattering phase $\delta_1^1(s)$ [6]; ii) for the region $m_\tau^2 \le s$, we guide the phase smoothly to π at high energies, thus ensuring the correct 1/s fall-off of the form factor; iii) for the intermediate region $1\ {\rm GeV}^2 \le s < m_\tau^2$, we use a parametrization that contains the physics of the inelastic regime up to $m_\tau^2$ by means of $\psi(s) = \arctan\left[{\rm Im}\,f_+^{\pi\pi}(s)\big|^{\rm expo}_{3\,{\rm res}}\,/\,{\rm Re}\,f_+^{\pi\pi}(s)\big|^{\rm expo}_{3\,{\rm res}}\right]$, where $f_+^{\pi\pi}(s)\big|^{\rm expo}_{3\,{\rm res}}$ is the three-resonance Omnès exponential representation of the form factor (see Ref. [1] for its explicit expression).
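A schematic numerical implementation of Eqs. (4)-(5) is sketched below; the elastic phase, the inelastic interpolation, and the subtraction constants are crude stand-ins for the inputs of Refs. [1,6], intended only to show how the three-region phase feeds the thrice-subtracted integral.

```python
import numpy as np

m_pi, m_tau = 0.13957, 1.77686   # GeV

def phase(s):
    """Toy three-region input phase: elastic BW, linear inelastic bridge, pi."""
    m, g = 0.775, 0.149
    if s <= 1.0:                              # elastic region (Watson's theorem)
        return np.arctan2(m * g, m * m - s)
    if s <= m_tau**2:                         # inelastic region: psi(s) stand-in
        d1 = np.arctan2(m * g, m * m - 1.0)
        return d1 + (np.pi - d1) * (s - 1.0) / (m_tau**2 - 1.0)
    return np.pi                              # asymptotic value -> 1/s fall-off

def f_plus(s, a1=0.037, a2=0.001, s_max=300.0, n=60000):
    """Thrice-subtracted dispersive form factor, Eq. (4), with toy inputs."""
    sp = np.linspace(4 * m_pi**2 + 1e-9, s_max, n)
    ph = np.array([phase(x) for x in sp])
    # A small imaginary offset crudely regulates the principal-value pole.
    integral = np.trapz(ph / (sp**3 * (sp - s - 1e-3j)), sp)
    x = s / m_pi**2
    return np.exp(a1 * x + 0.5 * a2 * x**2 + (s**3 / np.pi) * integral)

print(abs(f_plus(0.775**2)) ** 2)   # |f_+|^2 in the rho(770) region
```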
Armed with this parametrization, and variants of it, we have analyzed the high-statistics Belle data [7], focusing our effort on improving the description of the energy region where the ρ(1450) and ρ(1700) come into play. In Fig. 2 (left), we display the form factor modulus squared including the statistical fit uncertainty for our reference fit (red error band) and a conservative systematic uncertainty coming from the largest variations of central values with respect to our reference fit (gray error band). For our central results for the physical resonance pole positions (given in Ref. [1]), the first error is statistical while the second is our estimated systematic uncertainty. From our study, we conclude that the determination of the pole mass and width of the ρ(1450) and ρ(1700) is limited by theoretical errors that have usually been underestimated so far. The study of the τ− → K−K_Sντ decay is of timely interest due to the recent measurement of its spectrum released by the BaBar Collaboration [8]. The K−K_S threshold opens around 1000 MeV, which is ∼100 MeV above $M_\rho + \Gamma_\rho$, a characteristic energy scale for the ρ(770)-dominance region. This implies that this mode is not sensitive to the ρ(770) peak, and consequently not useful to study its properties, but rather enhances the sensitivity to the properties of the heavier copies ρ(1450) and ρ(1700). In [1], within a dispersive parametrization of the kaon vector form factor, we have performed different fits to the measured spectrum (see the right plot of Fig. 2) and determined the ρ(1450) mass and width. We have pointed out that higher-quality data on this channel will allow the ρ(1450) and ρ(1700) parameters to be extracted with improved precision from a combined analysis with the pion vector form factor data.
2.2 Combined analysis of the decays τ− → K_Sπ−ντ and τ− → K−ηντ
We analyze the experimental measurement of the invariant mass distribution of the decay τ− → K_Sπ−ντ together with the spectrum of the K−η mode, both released by Belle [9,10]. The former has been studied in detail in [11,12], improving the determination of the resonance parameters of both the K*(892) and its first radial excitation, the K*(1410), while the latter, with a threshold above the K*(892)-dominance region, has been studied in [3], obtaining K*(1410) properties that are competitive with those from the K_Sπ− channel. In [2], in a simultaneous study of the decay spectra of τ− → K_Sπ−ντ and τ− → K−ηντ within a dispersive representation of the required form factors, we have illustrated how the K*(1410) resonance parameters can be determined with improved precision as compared to previous studies. We have also investigated possible isospin violations in the form factor slope parameters and argued that making the K−π0 decay spectrum available [13] would be extremely useful to get further insights.
Our best fit results are compared to the measured Belle distributions in Fig. 3, where satisfactory agreement with data is seen for all data points. The K_Sπ− decay channel is dominated by the K*(892) resonance peak, followed by the contribution of the K*(1410) resonance, whose shoulder is visible in the second half of the spectrum. The scalar form factor contribution is small, although important to describe the data immediately above threshold. There is no such clear peak structure for the Kη channel, due to the interplay between both K* resonances. The scalar form factor contribution is insignificant in this case. With the current data, we succeed in improving the determination of the K*(1410) mass and width (see Ref. [2] for the numerical findings). The τ− → K−η′ντ decay, in turn, is dominated by the scalar form factor, and we have obtained a branching ratio of ∼1 × 10⁻⁶ [3], well below the experimental upper bound.
Figure 3. Belle τ− → K_Sπ−ντ (red circles) and τ− → K−ηντ (green squares) measurements compared to our best results (solid black and blue curves, respectively) obtained in combined fits to both data sets.
2.3
The second-class current τ − → π − η ( ) ν τ decays The non-strange weak hadronic currents can be divided according to their G-parity: i) firstclass currents with quantum numbers J PG = 0 ++ , 0 −− , 1 +− , 1 −+ ; ii) second-class currents Eventsêbin t -ØKhn t excluded fit points 'Unfolded' t -ØKhn t Belle data t -ØK S pn t excluded fit points Unfolded t -ØK S pn t Belle data Fit to t -ØKhn t Fit to t -ØK S pn t Scalar contributions Figure 3. Belle τ − → K S π − ν τ (red circles) and τ − → K − ην τ (green squares) measurements as compared to our best results (solid black and blue curves, respectively) obtained in combined fits to both data sets.
(SCC), which have J PG = 0 +− , 0 −+ , 1 ++ , 1 −− . The former completely dominate weak interactions since there has been no evidence of the later in Nature so far. We study the τ − → π − η ( ) ν τ decays which belong to the SCC processes i.e. parity conservation implies that these transitions must proceed through the vector current which has opposed G-parity to the π − η ( ) system. Our predictions [4] are displayed in Fig. 4, where we show the total decay rate distribution for τ − → π − ην τ (left) and τ − → π − η ν τ (right). The low-energy part of the πη spectrum is dominated by the vector contribution associated to the ρ(770) while effects of the a 0 (980) and a 0 (1450) scalar resonance contributions might show up and dominate the intermediate and high-energy part. Contrarily, the vector contribution is suppressed in τ − → π − η ν τ because the π − η threshold lies well beyond the region of influence of the ρ(770), thus being this mode dominated by the scalar form factor. Our branching ratio predictions for π − η are found to be within the window [0.36, 2.12] × 10 −5 respecting the current experimental upper limit, 7.3 × 10 −5 at 90% CL, reported by Belle [14]. Regarding the branching of the π − η mode, it might be one or two order of magnitude smaller than the π − η channel. . Decay spectrum for τ − → π − ην τ (left) and τ − → π − η ν τ (right). See Ref. [4] for details.
In this letter, we have provided an overview of all possible semileptonic two-meson decay channels of the τ lepton. These decays provide a privileged laboratory to study, under rather clean conditions, the energy region of two-meson form factors where resonances come into play. An ideal roadmap for describing them would require a model-independent approach demanding a full knowledge of QCD in both its perturbative and non-perturbative regimes, knowledge not yet unraveled. An alternative to such an enterprise is to pursue a synergy between formal theoretical calculations and experimental data. In this respect, dispersion relations are a powerful tool to steer oneself towards a model-independent description of form factors. By exploiting the synergy between dispersion relations and Chiral Perturbation Theory, we have carried out a dedicated study of the high-statistics Belle data on the pion vector form factor, assessing the role of the systematic uncertainties in the determination of the ρ(1450) and ρ(1700) parameters, and performed a first analysis of the τ− → K−K_Sντ BaBar data. We have also shown the potential of the combined analysis of τ− → K_Sπ−ντ and τ− → K−ηντ to extract the K*(1410) mass and width. Finally, while for the decay τ− → π−ηντ we find a total branching ratio in the range [0.36, 2.12] × 10⁻⁵, well within the reach of Belle-II, the π−η′ channel might be one or two orders of magnitude more suppressed.
Lead Halide Perovskite Nanocrystals in the Research Spotlight: Stability and Defect Tolerance
This Perspective outlines basic structural and optical properties of lead halide perovskite colloidal nanocrystals, highlighting differences and similarities between them and conventional II–VI and III–V semiconductor quantum dots. A detailed insight into two important issues inherent to lead halide perovskite nanocrystals then follows, namely, the advantages of defect tolerance and the necessity to improve their stability in environmental conditions. The defect tolerance of lead halide perovskites offers an impetus to search for similar attributes in other related heavy metal-free compounds. We discuss the origins of the significantly blue-shifted emission from CsPbBr3 nanocrystals and the synthetic strategies toward fabrication of stable perovskite nanocrystal materials with emission in the red and infrared parts of the optical spectrum, which are related to fabrication of mixed cation compounds guided by Goldschmidt tolerance factor considerations. We conclude with the view on perspectives of use of the colloidal perovskite nanocrystals for applications in backlighting of liquid-crystal TV displays.
In the past few years, lead halide perovskites (LHPs) in the form of colloidal nanocrystals (NCs), such as organic−inorganic CH3NH3PbX3 LHPs (often denoted as MAPbX3, with MA standing for the methylammonium cation) and all-inorganic CsPbX3 LHPs (X = Cl, Br, I), have been intensively investigated for various applications such as light-emitting devices (LEDs) and photodetectors due to their color-tunable and narrow-band emission as well as easy synthesis, convenient solution-based processing, and low fabrication cost. We refer interested readers to some recent reviews for comprehensive treatment of these topics.1−6 Most striking has been the impact of thin-film perovskites in photovoltaics,7 with extremely high power conversion efficiencies of more than 22%,8 and the reports of light-emitting diodes with external quantum efficiencies over 10%.9,10 The literature underpinning the development of bulk and thin-film perovskites is very extensive and is not covered herein; instead, we focus on nanoscale perovskite NCs and their emerging applications. In this Perspective, we first provide some short historical remarks on LHPs and then outline their basic structural and optical properties, highlighting differences and similarities between the LHP NCs and conventional II−VI and III−V semiconductor quantum dots (QDs). We then proceed with a more detailed insight into two important issues inherent to LHP NCs, namely, the innate advantage of so-called defect tolerance and the necessary steps required to improve their stability under the environmental conditions found in devices. We then discuss several issues that need to be addressed in the burgeoning field of LHP NCs, such as the origin of the significantly blue-shifted photoluminescence (PL) from CsPbBr3 NCs and the synthetic strategies toward fabrication of stable mixed-cation LHP NC materials with an optimum Goldschmidt tolerance factor (TF) that emit in the red and near-infrared part of the optical spectrum. The defect tolerance of LHP NCs offers strong inspiration to search for similar attributes in other related compounds, especially those that do not contain toxic lead or other heavy metals. We also provide our view on the perspectives on the use of colloidal LHP NCs for applications in backlighting of liquid-crystal displays for television (LCD TV displays) and other related color conversion and color enhancement applications.
Basic Properties of Lead Halide Perovskite Nanocrystals. The synthesis of bulk CsPbX3 compounds was reported as early as 1893,11 whereas their perovskite crystal structure and photoconductive, and hence semiconductive, nature was only discerned later, in the 1950s.12 Since then and until the late 1990s, CsPbX3 compounds were rather thoroughly characterized as to the details of their crystallography and phase diagrams, including direct structural characterization by X-ray and neutron scattering and by nuclear magnetic resonance.13−28 Also, a number of lead-free halide perovskites were studied at that time; CsSnX3 phase transitions were characterized,6,29−31 and specific electrical conductivity was observed.32 CsGeCl3 was reported to have dielectric constants comparable to BaTiO3 while exhibiting ferroelectric characteristics as well.33 In 1978, Weber et al. synthesized and determined the crystal structure of MAPbX3 for the first time.34 The motivation to investigate LHPs in the form of colloidal NCs has its roots in prior successes of colloidal QDs of conventional semiconductors (CdSe, CdTe, PbSe, InP, and the like).36 LHP NCs have spurred intense research efforts owing to, on the one hand, their extremely facile synthesis (Figure 1, upper part) and, on the other hand, their very bright PL covering the entire visible spectral range (Figure 1a). These highly crystalline, cubic-shaped NCs (Figure 1b) reflect the intrinsic near-cubic symmetry of the crystal lattice (Figure 1c). Just like their perovskite oxide ancestors (i.e., CaTiO3), LHPs crystallize into an ABX3-like lattice that comprises three-dimensional (3D) corner-shared [PbX6] octahedra (X being Cl, Br, I). There are commonly three cations, namely, cesium (Cs+), methylammonium (MA, CH3NH3+), and formamidinium (FA, CH(NH2)2+), which fit into a 12-coordinate A-site formed in between [PbX6] octahedra. According to the Goldschmidt TF,37 any substantially larger or smaller (for example, by 10% or more) A-site ion would destabilize the lattice and induce conversion into lower-dimensional lead halide compounds, with much larger bandgaps, as was observed experimentally.38 In contrast to other semiconductor materials (Si, GaAs, Cd chalcogenides, In pnictides), LHPs are highly ionic compounds. Hence, it is not surprising that they readily and easily form highly crystalline NCs even at room temperature. Colloidal synthesis of CsPbX3 NCs, depicted in Figure 1,35 represents just one out of numerous variations of the ionic coprecipitation method, optimized to obtain narrow size dispersions. Size control and colloidal stability are imparted by the capping ligands, typically a mixture of a carboxylic acid (such as oleic acid, OA) and alkylamines (such as oleylamine, OLA).39−42 The first colloidal synthesis of organic−inorganic MAPbBr3 NCs was reported in 2014 by Galian and Perez-Prieto, who used an alkyl ammonium bromide with a medium-sized chain to stabilize small-sized crystallites in a suspension;43 the same group further enhanced their PL quantum yield (QY) to 100%.44 Soon after that first publication, Zhong's group introduced a ligand-assisted reprecipitation (LARP) technique (as shown in Figure 2a) in a mixture of a good and a poor solvent to produce MAPbX3 (X = Cl, Br, I) NCs with a bandgap tunable by varying the halide; the same group also reported improved LARP and in situ fabrication later.39,45−47
Later, in a related report by Huang's group,40 bandgap tunability of MAPbBr3 NCs was demonstrated by controlling the LARP process through the temperature of the poor solvent (Figure 2b), and NCs with high PL QYs of up to 93% and high crystallinity (Figure 2c) were obtained. 1D and 2D perovskite NCs have also been explored, and quantum confinement has been completely verified and quantified in the 2D case.41,42,48 Simple top-down fabrication of MAPbBr3 and MAPbI3 NCs, employing a mixture of OA and OLA ligands as coordinating solvents under ultrasonication, was also demonstrated by Huang et al.49 The ultrasonication approach was likewise demonstrated by Hintermayr et al. and Tong et al.50,51 Combinations of LHP NCs can provide wide color gamuts covering the whole visible spectral range (400−700 nm with CsPbX3 and MAPbX3 NCs), and the emission can even be extended into the infrared (up to 800 nm with FAPbI3 NCs). In the visible, emission line widths are narrow, typically less than 100 meV, corresponding to a full width at half-maximum (fwhm) of 12−50 nm. The lower widths are seen at shorter wavelengths, in the blue, and a meaningful means of comparison is to take the fwhm divided by the central wavelength (i.e., the fractional bandwidth),52 as this bears some relationship to the size distribution and highlights major differences in the latter where the fractional bandwidths differ significantly. PL QYs are high, even without the benefits of core−shell passivation, and can reach peak values of up to 95−100%.35,53 Such high PL QYs are a direct consequence of the defect-tolerant nature of the LHP electronic structure, which we will consider in detail below.
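As a quick numerical check of the line-width figures quoted above, the nm-to-meV conversion follows from E(eV) ≈ 1239.84/λ(nm); the example center wavelengths and widths below are illustrative values within the quoted ranges.

```python
def fwhm_mev(center_nm, fwhm_nm):
    """Convert an emission line width from nm to meV via E = hc/lambda."""
    return 1239.84 * fwhm_nm / center_nm**2 * 1000.0

for lam, dlam in [(470, 12), (520, 20), (680, 35)]:
    print(f"{lam} nm: fwhm ~ {fwhm_mev(lam, dlam):.0f} meV, "
          f"fractional bandwidth ~ {dlam / lam:.3f}")
# All three illustrative cases come out below 100 meV, consistent with the
# typical line widths quoted in the text.
```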
These attractive optical characteristics of LHP NCs are counterbalanced by several major issues related to the stability of these materials. The key difficulty from the viewpoint of chemical stability concerns MAPbX3 NCs.43 Due to their low energy of formation, MA-based LHPs can eventually decompose into PbX2 and volatile byproducts (e.g., CH3NH2, HI, I2). This decomposition is greatly accelerated by the high surface area of LHP NCs and by moisture, oxygen, heat, light, and their combined effects.54,55 Often, MA-based LHP NCs decompose during isolation and purification procedures. Higher durability has been observed with FA- and Cs-based LHP NCs.35,53 Owing to the considerable ionicity of the bonding, yet another challenge specific to all LHP NCs is their instability in essentially all polar solvents. In addition, LHP NCs exhibit rather moderate thermal stability due to either low melting points of 400−500 °C (CsPbX3) or thermal decomposition (MAPbI3 at ca. 150−200 °C; FAPbI3 at ca. 290−300 °C). A further challenge originates from the rather labile and dynamic nature of the ligand binding in these materials,56 causing a loss of colloidal stability during the purification of LHP NC colloids. These challenges have led to intense research into alternative ligand chemistries57,58 and into protective polymeric or inorganic coatings,59,60 which we will consider in detail below.
Another form of structural instability comes from polymorphism, which is especially pronounced for iodide-based LHPs (CsPbI3 and FAPbI3). 3D polymorphs of CsPbI3 and FAPbI3 are thermodynamically metastable and undergo transitions to wide-bandgap 1D polymorphs.13−16,61−65 Thin films and NCs of CsPbI3 and FAPbI3 exhibit extended but finite stability in their 3D polymorphs (days to several months), primarily due to surface effects.35,53,66−69 The thermodynamic instability is caused by the Cs and FA ions being, respectively, slightly too small and too large for the A-site, as determined by the Goldschmidt TF and by the octahedral factor for the required dense packing in 3D perovskites.37,70−74 Combined with the chemical instability of the MAPbI3 NC system, a "red wall" exists for LHP NCs: a difficulty to obtain stable NCs with PL in the red and near-infrared spectral regions. APbX3 perovskites that feature 3D interconnection of PbX6 octahedra are of primary interest. These octahedra form either an ideal cubic lattice (typical for FAPbBr3 and FAPbI3) or a similar 3D orthorhombic one (CsPbX3). In the case of iodide LHPs (FAPbI3 and CsPbI3), the 3D α-phases are metastable at room temperature, and the instability decreases upon reduction of crystallite size from bulk to NCs.53,75 Although FAPbI3 NCs are stable for at least several months, CsPbI3 NCs are highly unstable and, at best, retain their red PL for several weeks only. The poor chemical stability of MAPbI3 and the poor phase stability of its FA and Cs cousins had previously been termed by us the "perovskite red wall".53 To illustrate a mitigation strategy for this issue, which can be based on the employment of mixed-cation perovskites, we briefly review the underlying reasons for the phase transformation illustrated in Figure 3. Perovskite structures can be viewed as a close-packing of ions, and hence the Goldschmidt TF concept, commonly used for metal-oxide perovskites,37 can also be extended to LHPs.71,72 For ideal 3D cubic close-packing, the Goldschmidt TF is calculated as

TF = (r_A + r_X) / [√2 (r_Pb + r_X)],

where r_A, r_Pb, and r_X are the ionic radii of each ion. In the ideal close-packing case, TF = 1. Although for more ionic oxides TF = 0.8−1 is known as an empirical stability range, the higher covalency in LHPs and the nonsphericity of their A-cations (both MA and FA) lead to the observation of stable 3D perovskites only for TF ≥ 0.9 (see the data from Travis et al.71). The composition control of LHP NCs is more flexible and convenient than that for many conventional semiconductor QDs. The tunability of perovskite NC compositions can be achieved after synthesis through subsequent anion exchange, which is more facile than for many conventional II−VI and III−V QDs. In chalcogenide NCs, cation exchange is quite common and easy to certain degrees;82 however, anion exchange is rarely reported in such materials. The anion sublattice bonding is rather stronger than that of the cation sublattice, while the anions themselves are often bulkier than the cations, making anion exchange difficult without extreme conditions, and usually any exchange that is observed is not topotaxial.83,84 Another outstanding feature of perovskite NCs is that they can have high PL QYs, which have even reached 100%,44 by virtue of their fortuitous band structures, as discussed further in the next section. The PL fwhm of perovskite NCs is narrower than that of most other types of QDs.1,35,36,39
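A minimal sketch of the Goldschmidt TF bookkeeping follows. The effective ionic radii are approximate literature values of the kind tabulated by Travis et al. (in angstroms) and should be treated as illustrative; different radius sets shift TF by a few percent.

from math import sqrt

# Approximate effective ionic radii in angstroms (illustrative values
# from Shannon-type tabulations extended to molecular cations).
R_A = {"Cs": 1.88, "MA": 2.17, "FA": 2.53}
R_B = {"Pb": 1.19}
R_X = {"Cl": 1.81, "Br": 1.96, "I": 2.20}

def tolerance_factor(a: str, b: str, x: str) -> float:
    """Goldschmidt TF = (r_A + r_X) / (sqrt(2) * (r_B + r_X))."""
    return (R_A[a] + R_X[x]) / (sqrt(2) * (R_B[b] + R_X[x]))

for a in ("Cs", "MA", "FA"):
    tf = tolerance_factor(a, "Pb", "I")
    print(f"{a}PbI3: TF = {tf:.3f} ({'TF >= 0.9' if tf >= 0.9 else 'TF < 0.9'})")

With these radii, CsPbI3 comes out near TF ≈ 0.85 (Cs slightly too small) and FAPbI3 near TF ≈ 0.99 (FA slightly too large), consistent with the discussion above of why mixing Cs and FA on the A-site compensates their poor individual fits.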
Narrower line width emission is more saturated, placing the fluorescence color coordinates closer to the curved edge of the CIE chromaticity space (e.g., the CIE 1931 standard).39,85 Combinations of three emitters (red, green, and blue) that lie close to the fully saturated boundary curve can then create the widest range of perceived colors, termed the color gamut by display and lighting manufacturers. The production cost of perovskite NCs is regarded as low because of their solution processing and relatively low synthesis temperatures.
The low occurrence, or complete absence, of fluorescence blinking86,87 in LHP NCs is an attractive prospect for hot-carrier/multiexciton effects, as it is probably a marker of relatively weak Auger recombination. However, the photothermal stability of these materials under high photon energies and at high fluences is a factor that needs to be addressed in order to fully realize the benefits of such effects.
Defect Tolerance of Lead Halide Perovskite Nanocrystals. One of the most striking features of LHPs is their high tolerance toward defects. The term "defect tolerance" here means that, although structural and other characterization methods point to a large density of various structural defects, the optical and electronic properties of perovskites often appear as though no electronic traps or excessive doping were present. From the electronic point of view, such behavior suggests preservation of a clean bandgap upon creation of typical defects such as vacancies or surface-related sites, because their defect energy levels reside entirely within either the valence band (VB) or the conduction band (CB) manifolds but not within the bandgap itself. In this regard, perovskite NCs are highly unusual;88 they are highly luminescent without recourse to any electronic surface passivation, whereas such passivation is mandatory to achieve a high PL QY from conventional QDs derived from metal chalcogenides (i.e., CdSe) or metal pnictides (i.e., InP).
The defect tolerance has been rationalized theoretically for a variety of perovskite compounds. For CsPbBr3, for instance, the surfaces of NCs, point defects in the bulk material,89 as well as grain boundaries90 were all shown either to form shallow trap states or to be resonant with VB and CB states. The defect tolerance is partly attributed to the high ionicity of bonding in LHPs. Furthermore, mixing of a Pb lone-pair s orbital and an iodine p orbital results in antibonding coupling in the perovskite lattice, with the bandgap opening up between two antibonding bands. Because of this band structure, structural defects that may arise from halide and MA or other A+-type vacancies tend to have energy levels that fall within the CB and VB, respectively, rather than lying within the bandgap itself. On the contrary, in conventional, defect-intolerant semiconductors such as Si, CdSe, or GaAs, the bandgap is formed between bonding and antibonding orbitals, leading to enclosure of all defect states either as shallow or as midgap states, because bonding is locally weakened at all defect sites (point defects, dislocations, planar defects, surfaces, etc.). The comparison is schematically depicted in Figure 4.
A second reason for having clean bandgaps relates to the energy of defect formation in LHPs. Halide and A-site vacancies (VX and VA) are easily formed as a pair of Schottky vacancies, thus maintaining overall charge neutrality of the lattice. Fortunately, in LHPs other point defects, such as interstitially or antisite misplaced atoms, have much higher energies of formation,91 often even above the formation energy of the parent compound. This scenario is illustrated in Figure 5 for MAPbI3. On the basis of thermodynamic calculations, ionic compensation of point defects in MAPbI3 has been suggested as a charge-carrier-concentration self-compensation mechanism.92 Defect tolerance is similarly expected to be of high relevance in 2D perovskites.93 LHP NCs can be robust light emitters even when a large number of ligands are displaced from the surface, and the influence of the consequent surface defects in trapping charge carriers is negligible.94,95
Synthetic Strategies toward Improving the Stability of Lead Halide Perovskite Nanocrystals. Employing different ligands to improve or change the properties of as-prepared materials is a very common strategy in the colloidal QD field, and this is particularly relevant to increasing the stability of LHP NCs given their innate sensitivity to water and other polar solvents. Figure 6a shows an attempt by Luo et al.97 to use ligands other than the commonly used OA or OLA. Perez-Prieto and co-workers obtained perovskite NCs with a PL QY of ∼100% by using 2-adamantylammonium bromide (ADBr) as the only capping ligand.44 The photodarkening of these nanoparticles under prolonged irradiation, attributed to moisture, can be avoided by the formation of cucurbit[7]uril−adamantylammonium host−guest complexes (AD@CB) on the NC surface. Figure 6b demonstrates the higher photostability of MAPbBr3 NCs with the latter coating in toluene dispersions, even under water with UV photoirradiation.
Besides the issue of stability in contact with moisture and under irradiation with light, it is well-known that CsPbI3 NCs suffer from a facile cubic-perovskite-to-orthorhombic phase transformation (as demonstrated in Figure 6c(i,ii)), which may be a limiting factor for their optoelectronic applications. By replacing the conventionally used OA with an alkylphosphinic acid, Wang et al. obtained phase-stable cubic perovskite CsPbI3 NCs (Figure 6c(iii,iv)).98 With the changed ligands, the as-prepared sample remained luminescent for over 20 days, while the OA comparison sample showed no emission to the naked eye.
Producing core−shell structures to increase stability is yet another widely used strategy for colloidal semiconductor QDs, and similar treatments have also been used in perovskite syntheses. Bhaumik et al.99 reported a putative mixed MA−octylammonium lead bromide perovskite core−shell-type structure (Figure 6d). With a thin shell and little contrast between core and shell in TEM images, it was difficult for the authors to show direct evidence of shell formation; however, indirect evidence from elemental analyses and improved PL stability was taken as tentative evidence of successful shell formation. The emission color was tunable in the blue-to-green range (438−521 nm) by using different MA−octylammonium ratios, while the PL QY was as high as 92%. Their solution-processed material was reported to be stable for at least 2 months under ambient conditions. Chen et al. reported a NC architecture made of CsPbX3/ZnS heterodimers synthesized via a facile solution-phase process (Figure 6e).100 Figure 6e compares the PL stabilities of pure CsPbBr3−xIx NCs and CsPbBr3−xIx/ZnS heterodimers. The CsPbBr3−xIx/ZnS heterodimers remained stable for about 12 days without any protection in air, while the pure CsPbBr3−xIx QDs became unstable and blue-shifted within 1 day under the same conditions.
Jing et al. found that the stability of mixed-halide CsPb(BrxI1−x)3 NCs could be dramatically enhanced by a selective acetone etching method.101 Partial iodine etching of iodine-rich perovskite NCs leaves a bromine-rich surface passivation layer (Figure 6f). After the treatment, the time constant for the PL to decay to 50% was around 17500 h, compared with 20 h for the untreated NCs; in other words, the PL stability was increased almost 1000-fold.
In terms of postsynthetic treatments, the employment of silica or silicone-derivative coatings on LHP NCs has proven useful. Huang et al. fabricated SiO2-encapsulated MAPbBr3 QDs by using a small amount of water in analytical-grade toluene to hydrolyze tetramethyl orthosilicate.102 Photostability tests were carried out at a relative humidity of 60%; after 7 h, the PL of the encapsulated powders remained at 94% of the initial value, higher than that of the unencapsulated sample, whose PL had declined to 38% of the original level (Figure 7a).
The first successful water-resistant coating of solid-state perovskite powders was demonstrated by Huang et al.103 through surface passivation of CsPbX3 (X = Br or I) with polyhedral oligomeric silsesquioxane (POSS) molecules, as shown in Figure 7b. In the form of aqueous suspensions, the CsPbX3/POSS composites retained their emission unchanged for several months. The POSS coating was also useful when two-color emitters were formed by mixing perovskite NCs of different compositions, as it prevented undesirable anion-exchange reactions between the constituents in the powder state. The benefits of this passivation strategy were demonstrated when green-emitting POSS-CsPbBr3 and red-emitting POSS-CsPb(Br/I)3 NC powder mixtures were used to fabricate all-perovskite solid-state luminophore down-conversion white-light LEDs.
Wang et al. used commercially available mesoporous silica mixed with green CsPbBr3 NCs104 to similarly bestow water resistance and prevent ion exchange in mixtures of different compositions; the photostability comparison is shown in Figure 7c. By infiltrating perovskite precursors into mesoporous silica and then drying, Dirin et al. showed the formation of perovskite NCs entrapped within the pores.88 Sun et al. used a similar hydrolysis approach102 with another silica source, (3-aminopropyl)triethoxysilane (APTES).106 Hai et al. reported a simple fabrication method for emissive flexible films composed of polyvinylpyrrolidone (PVP) as a matrix polymer and blue, green, and red CsPbX3 (X = I, Br, and Cl) NCs co-doped as guest fluorophores at various ratios.105 A schematic of their hydrophobic silicone resin (SR)/PVP NC composite film, SR/PVP-CsPbX3, is presented in Figure 7d. PVP-coated NCs (as single- or multiple-component mixtures) were electrospun into nanofiber films using single- or multinozzle electrospinning. To provide further protection from humidity and to facilitate handling, SR was deposited onto the surface of the composite electrospun nanofibers to obtain water-stable nanofibrous membranes.
Apart from silica coatings, polymer coatings have also proven useful for LHP NC passivation. Meyns et al. demonstrated the addition of poly(maleic anhydride-alt-1-octadecene) (PMA) into the precursor mixture during the synthesis of perovskite NCs.107 The normalized integrals of the emission peaks between 460 and 600 nm over 12 h of constant irradiation showed higher emission signals for samples with PMA compared with untreated NCs (Figure 8a). The ligand binding was tightened in the presence of PMA, reducing the ligand surface-exchange rate and the scope for the NC surface to interact with the surrounding medium, thereby improving the NC stability.
Zhang et al. formed water-resistant polystyrene microhemispheres (MHSs) embedded with CsPbX3 (X = Cl, Br, I) NCs (denoted NCs@MHSs) as hybrid multicolor and multiplexed optical coding agents.110 PVP acted as the capping ligand and, adsorbed onto the perovskite NC surface, formed a protective layer. The PVP surface also served as an interface layer for the further addition of a polystyrene matrix, allowing the CsPbX3 NCs to be embedded in polymer MHSs. The well-passivated CsPbX3 NCs@MHSs were incorporated into live cells, showed high stability and no cytotoxicity, and functioned as useful multicolor luminescent probes.
Hou et al. demonstrated stable core−shell colloidal LHP NCs using a copolymer-templated synthesis approach.108 The block copolymer served as a confined nanoreactor during perovskite crystallization and passivated the perovskite surface by forming a multidentate capping shell. The polymer nanoshell provided an additional layer for further surface modifications (useful for self-assembly and so forth) and also served to passivate the NCs and improve their photostability. Figure 8b compares the PL stability of CsPbBr3 NCs with the multidentate copolymer ligand and with small-molecule ligands (OA and OLA) upon exposure to ethanol and propan-2-ol (IPA). While the PL of OA/OLA-capped NCs quenched immediately after mixing the colloids with both solvents and disappeared totally within 3 h, the multidentate polymer/perovskite NC samples exhibited stable fluorescence for more than 25 h in ethanol and for up to 50 days after adding IPA.
Raja et al. reported enhanced water and light stability upon encapsulation of CsPbBr3 NCs into matched, presynthesized, hydrophobic macroscale polymeric matrixes.109 Their bare CsPbBr3 QDs lost all emission after 60 min of contact with water (Figure 8c(i)), while the NC/polymer composite films still functioned after more than 4 months of continuous immersion in water (Figure 8c(ii)). The authors also reported no detectable lead leaching into the water in contact with the encapsulated perovskites.
Summary and Future Outlook. There are a number of research avenues related to LHP NCs that will require attention in the forthcoming years. One puzzling question concerns the origin of the significantly blue-shifted PL from CsPbBr3 NCs. Interestingly, both the PL peak and the absorption edge of CsPbBr3 NCs never exceed 520 nm, even at NC sizes far beyond the quantum-confinement regime (>20 nm). In fact, bulk CsPbBr3 has an optical bandgap at 2.25 eV (551 nm), both in our experiments and in the literature.111 Our experience shows that the PL peak for NCs larger than 11 nm is always at exactly 520 nm, fully ruling out quantum size effects at these large sizes as the origin of the blue shift. At present, the atomistic origin of this effect remains unclear. Rather broad X-ray diffraction reflections of CsPbBr3 NCs make it difficult to differentiate between the orthorhombic (nearly cubic) lattice of the bulk material and other possible distortions of the ideal cubic lattice. A recent study suggested significant and dynamic structural disorder that involves formation and re-formation of twin planes between orthorhombic perovskite subdomains in CsPbBr3.112 It has not been easy to push the emission of LHP NCs toward the red and near-infrared spectral range while maintaining reasonable material stability. An effective strategy to overcome this so-called "red wall" is mixing larger FA+ and smaller Cs+ in one lattice, thereby compensating for the poor individual fits of these ions separately. An additional stabilizing factor in this case is provided by the high entropy of mixing.80 Formation of mixed-cation compositions in iodide-based LHPs has become a major strategy in thin-film solar cell research, yielding the highest power conversion efficiencies of up to 22%: FA/MA,79,113−115 Cs/MA,116 Cs/FA,70,77,78,80 Cs/MA/FA,117 or even Rb/Cs/MA/FA.81 Recently, this approach has been extended to LHP NCs, namely for (Cs/FA)PbI3.53 Other mixed-cation formulations have been investigated as well, including Au−CsPbBr3, Cs1−xRbxPbBr3, and so on.118−120 Further work will establish the synthesis procedures and elucidate the structures of the corresponding multinary LHP NCs.
The defect tolerance of LHP NCs offers strong inspiration to search for similar attributes in other related compounds, especially those that do not contain toxic lead or other heavy metals.6,121 Similar electronic structures and defect-tolerant behavior are to be expected from the main-group metals, which offer both s and p electrons for the formation of the VB and CB. A first example is the replacement of Pb2+ with Bi3+, an ion of similar size. Yet the resulting compounds of composition Cs3M2X9 (M = Sb, Bi) have vastly different crystal structures, dominated by 0D or 2D networks of Bi−X polyhedra, and exhibit no significant PL at ambient conditions.122 A full structural analogue of 3D perovskites can be constructed by replacing Pb2+ with a 1:1 mixture of an M+ and an M3+ cation, forming so-called double perovskites, A2M+M3+X6, such as Cs2BiAgCl6 and Cs2AgInCl6.123,124 The electronic band structure of thallium halides also shows a strong resemblance to LHPs.125 Finally, the most obvious strategy, replacement of Pb2+ with Sn2+ and Ge2+, has thus far failed due to oxidative instability, even with respect to trace quantities of oxygen; even trace amounts of Sn4+ and Ge4+ degenerately dope such semiconductors. In this regard, a somewhat surprising finding is the bright and air-stable emission, albeit with a broad fwhm in excess of 100 nm, from (C4N2H14Br)4SnX6 (X = Br, I),126 a compound comprising isolated SnX6^4− octahedra surrounded by large organic cations. One can assume that oxidative stability is enabled by these cations, which prevent diffusion of oxygen to the Sn2+ sites. This observation might open an avenue to other stable hybrid organic−inorganic lead-free perovskites.
Many of the strategies discussed in this Perspective for perovskite NC stability enhancement would leave the NCs inaccessible to charge injection, which could be detrimental for a number of optoelectronic applications. They nevertheless retain a vast range of applications, such as color-conversion and color-enhancing layers. If the stability of LHP NCs can be successfully improved, then with narrow PL fwhm of just 18−20 nm in the green at 530 nm (CsPbBr3, FAPbBr3 NCs) and 35 nm at 630 nm (CsPb(Br/I)3), together with high PL QYs of up to 95−100%, LHP NCs may become a strong competitor to traditional colloidal QDs for applications in backlit TV displays and in related color-conversion and color-enhancing applications. At present, two principal types of QD emitters in the red and green have been successfully commercialized in LCD TVs: CdSe-based QDs by Sony in 2014 and InP-based QDs by Samsung in 2015 (under the brand name SUHD TV). Perovskite NCs could replace CdSe or InP QDs in such commercialized LCD TVs, potentially exceeding their performance in terms of color saturation and brightness in the longer term. Under the pressure of increasingly stringent legislation on the use of heavy metals in consumer electronics, Cd use is being limited in such applications. Lead, on the other hand, is exempted for several applications, such as lead-acid batteries, produced globally on the scale of millions of tons. For comparison, one TV display of typical 40−60 in. dimensions requires only several mg of QDs,127 summing up to at most several kilograms at substantial TV-display market penetration. LHP NCs could offer strong competition to InP-based QDs owing to their inherently much narrower, size-independent emission, at 530 nm twice as narrow as that of the equivalent III−V-based NCs (fwhm ≈ 40 nm at 530 nm for InP-based QDs).
Estimates of invariant metrics on pseudoconvex domains near boundaries with constant Levi ranks
Estimates of the Bergman kernel and the Bergman and Kobayashi metrics on pseudoconvex domains near boundaries with constant Levi ranks are given.
Limiting asymptotic behavior on smoothly bounded, strongly pseudoconvex domains in C^n was obtained by Diederich [D] for the Bergman kernel and metric, and by Graham [G] for the Carathéodory and Kobayashi metrics. In a celebrated paper [Fe], Fefferman obtained asymptotic formulas for the Bergman kernel and metric and used them to establish smooth extension of biholomorphic maps. Fefferman-type asymptotic expansions for the Carathéodory and Kobayashi metrics on strongly pseudoconvex domains were given in [F2] (see also [Ma] for related results). Estimates for the Bergman kernel and the invariant metrics, in terms of big constants and small constants, were obtained by Catlin [C2] for smooth bounded pseudoconvex domains of finite type in C^2 and by J.-H. Chen [Ch] and McNeal [M] for convex domains of finite type in C^n. We refer the reader to the monograph of Jarnicki and Pflug [JP] for extensive treatment of the subject. Strong pseudoconvexity and Levi-flatness are in a sense at the opposite ends of pseudoconvexity. Theorem 1.1 shows that, in terms of boundary behavior of the invariant metrics, these two types of domains bear a striking resemblance. Our proof of Theorem 1.1 uses an idea of Catlin (see [C2]): we construct plurisubharmonic functions whose complex Hessians blow up at the rate of 1/δ^2(z) in the complex normal direction and at the rate of |L_δ(z, X)|/δ(z) in the complex tangential directions. To estimate the Kobayashi metric from below, we also use the Sibony metric [S].
This paper is organized as follows. In Section 2, we recall the necessary definitions and basic properties of the Bergman kernel and the invariant metrics. In Section 3, we review the local foliation of a hypersurface with constant Levi rank and choose local holomorphic coordinates under which the defining function has a desirable form. Plurisubharmonic functions with large Hessians are constructed in Section 4. Theorem 1.1 is proved in Section 5.
Throughout the paper, we will use C to denote positive constants which may be different in different appearances. We will also use f ≳ g to denote f ≥ Cg, where C is a constant independent of the relevant parameters, and use A ≈ B to denote A ≳ B and B ≳ A.
Preliminaries
Let Ω be a domain in C^n and let D be the unit disc. Let H(D, Ω) be the set of holomorphic maps from D into Ω. Let z ∈ Ω and let X = Σ_{i=1}^{n} X_i ∂/∂z_i ∈ T^{1,0}_z(Ω). (We sometimes identify T^{1,0}_z(Ω) with C^n without explicit notice.) The Kobayashi metric is given by

K_Ω(z, X) = inf{ 1/λ ; λ > 0 and there exists φ ∈ H(D, Ω) with φ(0) = z and φ′(0) = λX }.

Let A^2(Ω) be the space of square-integrable holomorphic functions on Ω. The Bergman kernel (on the diagonal) and metric can be defined via the following extremal properties:

K_Ω(z) = sup{ |f(z)|^2 ; f ∈ A^2(Ω), ‖f‖_{A^2(Ω)} ≤ 1 },

B_Ω(z, X) = sup{ |Xf(z)| ; f ∈ A^2(Ω), ‖f‖_{A^2(Ω)} ≤ 1, f(z) = 0 } / (K_Ω(z))^{1/2}.

Denote by S_z(Ω) the class of functions u defined on Ω such that: (1) u is C^2 on a neighborhood of z ∈ Ω; (2) 0 ≤ u ≤ 1 on Ω and u(z) = 0; and (3) log u is plurisubharmonic on Ω. The Sibony metric [S] is given by

S_Ω(z, X) = sup{ ( Σ_{i,j=1}^{n} (∂^2 u/∂z_i ∂z̄_j)(z) X_i X̄_j )^{1/2} ; u ∈ S_z(Ω) }.

The Bergman metric is a Kähler metric, and the Kobayashi and Sibony metrics are Finsler metrics. Note that while the Kobayashi metric is always upper semi-continuous ([R, Prop. 3 on p. 129]), the Sibony metric is not, even for a domain of holomorphy (see [JP, Example 4.2.10]). Both the Kobayashi and Sibony metrics are identical to the Poincaré metric on the unit disk and satisfy the following length-decreasing property: if Φ : Ω_1 → Ω_2 is a holomorphic map, then

F_{Ω_2}(Φ(z), Φ′(z)X) ≤ F_{Ω_1}(z, X) for F = K or S.

Catlin [C2] constructed bounded plurisubharmonic functions with large Hessians and used them to estimate the Bergman kernel and invariant metrics for smooth bounded pseudoconvex domains of finite type in C^2. The following theorem was proved by Catlin ([C2, Theorem 6.1 and pp. 461−462]), using Hörmander's L^2-estimates for the ∂̄-equation. We first fix some notation. Denote by D^{α_j}_j any mixed partial derivative in z_j and z̄_j of total order α_j. For α = (α_1, . . . , α_n), write D^α φ = D^{α_1}_1 · · · D^{α_n}_n φ. Theorem 2.1 (Catlin).
Proof. It suffices to establish the upper bound in (2.3) for the Kobayashi metric and the lower bound for the Sibony metric. The upper bound follows directly by comparing the Kobayashi metric on Ω with that on the polydisc P and using the length-decreasing property. We now prove the lower bound for the Sibony metric, following [S]. Let u be defined in terms of an auxiliary function g, where M is a large constant to be chosen. Then u(ẑ) = 0 and 0 ≤ u ≤ 1 in Ω. Evidently, log u is plurisubharmonic when g(z) < 1/2 or when g > 1. We now consider the case when 1/2 ≤ g(z) ≤ 1. A simple computation yields a lower bound on the complex Hessian of log u, where C is a positive constant. Choosing M ≥ α/C, we then obtain that log u is plurisubharmonic on Ω. From the definition of the Sibony metric, we then obtain the desired lower bound, and we thus conclude the proof of (2.3).
Levi foliations of real hypersurfaces
We first recall well-known facts about local foliations of hypersurfaces whose Levi form has constant rank, following [Fr]. Let M be a smooth real hypersurface in C^n. Let z_0 ∈ M and let r(z) be a local defining function of M on a neighborhood V of z_0. The Levi rank of M at z_0, denoted by R(M, z_0), is the number of non-zero eigenvalues of the Levi form of r at z_0, restricted to the complex tangent space; let N_{z_0} denote the null space of the Levi form. Thus R(bΩ, z_0) = n − 1 − dim_C N_{z_0}. Note that both R(bΩ, z_0) and dim_C N_{z_0} are independent of the choices of defining functions or local holomorphic coordinates. A complex foliation of (complex) codimension q of M ∩ V is a set F of complex submanifolds of V such that there exists a smooth map σ : V → R^{2q} of rank 2q on M whose level sets on M are the leaves of the foliation (see [Fr, Section 2]). The following theorem is well-known (cf. [Fr, Theorem 6.1]): Theorem 3.1 (cf. [Fr]). Let M be a real hypersurface in C^n. Suppose M has constant Levi rank n − l − 1. Then for each p ∈ M, there exist a neighborhood V of p and a unique complex foliation of codimension n − l of M ∩ V such that N_z is the complex tangent space of the leaves of the foliation.
Let Ω ⊂⊂ C^n be a pseudoconvex domain and let z_0 ∈ bΩ. Assume that bΩ is smooth in a neighborhood V of z_0 and that R(bΩ, z) = n − l − 1 for all z ∈ bΩ ∩ V. Then there exists a neighborhood U ⊂⊂ V of z_0 such that for each p ∈ bΩ ∩ U, there is a biholomorphic mapping ζ = Φ_p(z) from U onto the unit ball B(0, 1) that satisfies: (1) Φ_p(p) = 0; (2) Φ_p depends smoothly on p; (3) Φ_p(bΩ ∩ U) has, near 0, a defining function of the form given in (3.1), where λ_j, 2 ≤ j ≤ n − l, are positive constants depending smoothly on p.
A boundary point p of Ω is called a local weak peak point if there exist a neighborhood U_p of p and a function f_p holomorphic on Ω ∩ U_p and continuous on the closure of Ω ∩ U_p such that f_p(p) = 1, |f_p(z)| < 1 for z ∈ Ω ∩ U_p, and |f_p(z)| ≤ 1 for z in the closure of Ω ∩ U_p. The function f_p is called a local weak peak function of Ω at p. Corollary 3.3. Assume the same hypotheses as in Proposition 3.2. Then each p ∈ bΩ ∩ V is a local weak peak point of Ω.
Proof. It follows from Proposition 3.2 that for any p ∈ bΩ ∩ V, there exist a neighborhood U_p of p and a biholomorphic mapping Φ_p from U_p onto B(0, 1) such that the stated estimate holds for a sufficiently small ε_0 > 0. Let h be defined via the cube root, where the cube root takes the principal branch obtained by deleting the negative Re ζ_1-axis. Let Ũ_p = Φ_p^{−1}(B(0, ε_0)) ∩ U_p. Then f_p(z) = h(Φ_p(z)) is a local weak peak function at p defined on Ω ∩ Ũ_p.
Remark. Let Ω be a pseudoconvex domain with piecewise smooth boundary such that each piece has constant Levi rank. Then it follows from the localization and length-decreasing properties of the Kobayashi metric ([R, p. 136]) and Corollary 3.3 that Ω is Kobayashi complete. This also follows from Theorem 1.1 (to be proved in Section 5).
Construction of plurisubharmonic functions with large Hessians
We now turn to the construction of bounded plurisubharmonic functions with large Hessians near a piece of boundary that has constant Levi rank on a pseudoconvex domain Ω. For δ, a > 0, and X ∈ C^n, let

P_{δ,a} = { ζ ∈ C^n ; |ζ_1| < aδ, |ζ_j| < aδ^{1/2} for 2 ≤ j ≤ n − l, |ζ_j| < a for n − l + 1 ≤ j ≤ n },

and let Q_{δ,c} be defined analogously. We will follow the notation of Section 3. For p ∈ bΩ ∩ U, let Ω_p = Φ_p(Ω ∩ U). The following construction of plurisubharmonic functions with large Hessians plays a key role in this paper (compare [C2, Prop. 2.1]; also [S, Prop. 7]).
Theorem 4.1. Assume the hypothesis of Proposition 3.2. Let W ⊂⊂ U be a neighborhood of z_0. Then for any p ∈ bΩ ∩ W and any sufficiently small δ, there exist a function g_{p,δ} ∈ C^∞(Ω̄_p) and constants a, b, C and C_α, independent of p and δ, such that: (1) |g_{p,δ}(z)| ≤ 1 for z ∈ Ω_p; (2) g_{p,δ} is plurisubharmonic on Ω_p; (3) the stated lower bound on the complex Hessian of g_{p,δ} holds for ζ ∈ P_{δ,ab} ∩ Ω_p and Y ∈ C^n; and (4) the derivative bounds with constants C_α hold for ζ ∈ P_{δ,ab} ∩ Ω_p. Proof. Let χ_1(t) ∈ C^∞(R) be a decreasing function with χ_1(t) = 1 for t < 1/2 and χ_1(t) = 0 for t > 1. Let φ_δ(ζ) be defined in terms of χ_1, and let G_{p,δ} be defined in terms of φ_δ and ρ(ζ), where ρ(ζ) is the defining function obtained from Proposition 3.2 and M is a large constant to be chosen. For Y = (Y_1, . . . , Y_n) ∈ C^n, let Y* = (Y*_1, Y*′). A direct calculation yields (4.1). It follows from (3.1) that, when a is sufficiently small, the required estimate holds for ζ ∈ P_{δ,a}; the analogous estimate holds on Q_{δ,c}. Therefore, by choosing c sufficiently small, we obtain the claimed bounds. Applying Theorems 2.1 and 4.1, and letting p_δ = Φ_p^{−1}(ζ_δ), we note that |JΦ_p(p_δ)| ≈ 1; it then follows from the localization property of the Bergman kernel (see [O1]) and (5.2) that the kernel estimate holds when δ is sufficiently small. By letting p vary on bΩ ∩ W for a small neighborhood W ⊂⊂ U of z_0 and letting δ vary in (0, ε_0) for a sufficiently small ε_0 > 0, we obtain the estimates for the Bergman kernel in Theorem 1.1.
The sufficiency is a special case of Theorem 1.1. To see the necessity, observe that if bΩ is not Levi-flat near z_0, then by a simple continuity and induction argument on the Levi rank, z_0 is an accumulation point of boundary points z_k such that bΩ has constant Levi rank ≥ 1 near each z_k. We then arrive at a contradiction to Theorem 1.1.
Reliability constrained dynamic generation expansion planning using honey badger algorithm
Generation expansion planning (GEP) is a complex, highly constrained, non-linear, discrete and dynamic optimization task aimed at determining the optimum generation technology mix of the best expansion alternative for a long-term planning horizon. This paper presents a new framework to study GEP over a multi-stage horizon with reliability constraints. The GEP problem is formulated to minimize the capital investment cost, salvage value, operation and maintenance cost, and outage cost under several constraints over the planning horizon. In addition, the spinning reserve, fuel mix ratio and reliability in terms of loss of load probability are maintained. Moreover, to decrease the GEP problem search space and reduce the computational time, some modifications are proposed, namely the virtual mapping procedure, the penalty factor approach, and the modified intelligent initial population generation. For solving the proposed reliability constrained GEP problem, a novel honey badger algorithm (HBA) is developed. It is a meta-heuristic search algorithm inspired by the intelligent foraging behavior of the honey badger in reaching its prey. In HBA, the dynamic search behavior of the honey badger, with its digging and honey-finding approaches, is formulated into exploration and exploitation phases. In addition, several modern meta-heuristic optimization algorithms are employed: the crow search algorithm, aquila optimizer, bald eagle search and particle swarm optimization. These algorithms are applied, in a comparative manner, to three test case studies of 6-year, 12-year, and 24-year short- and long-term planning horizons having five types of candidate units. The results obtained by all the proposed algorithms are compared and validate the effectiveness and superiority of the HBA over the other applied algorithms.
The power system is composed of three main sectors: generation, transmission, and distribution. The planning of each sector is a process in which investment decisions are made, based on different aims, to meet the forecast demand1. Generation expansion planning (GEP) is an important decision-making activity in power systems which aims at deciding on new, as well as upgrading existing, generation units of different types. The decision should determine where to locate the new generation units, when to install them, and which generation technology to select over a long-range planning horizon. The main objective of GEP is to minimize the total costs, which include the investment, operation and maintenance, and outage costs. At the same time, the upper construction limit, reliability, fuel mix, and reserve margin constraints must be satisfied. GEP is a highly complex optimization problem because it includes different nonlinear constraints; therefore, it should be solved by efficient methods to reach the optimum solution2,3. GEP models are divided into two classes, static and dynamic. The static GEP model is a single-period problem that introduces fewer computational complexities, whereas dynamic GEP is a multi-period planning problem with higher computational complexity. Thus, dynamic GEP is more complex than the static model.
Literature review
A few contemporary review publications are available that categorize the GEP models [4][5][6]. GEP optimization frameworks with incorporation of renewable energy divide the various approaches into four categories: (a) conventional approaches that incorporate environmental limitations throughout the GEP; (b) formulating the GEP as a multiple-objective optimization problem; (c) strategies for addressing variable RES-related uncertainties in the GEP process; and (d) different dynamics and issues brought about in power networks by the growing incorporation of intermittent RES. This categorization contributes to an improved comprehension of the anticipated results of each technique by offering insights into the traits, benefits, and drawbacks of the theoretical methods used, as well as their applicability to various parts of the problem. Because transmission network operators and other managers believe that such models produce reliable results, the list of national/regional optimization models is lengthy.
In generating system analysis, the ability to meet the demand requirements is measured by the reliability criterion of the system, which depends on the availability of generation units. The reliability of a system can be quantified by different indices such as loss of load probability (LOLP), loss of load expectation (LOLE), and expected energy not served (EENS). The LOLE index is widely used because of its simplicity. EENS reflects the true risk better than LOLE, making it a more appealing index. Also, the EENS index can be used to calculate the expected energy produced by each generation unit for each stage of the planning horizon3,7,8. There are several tools to calculate the expected energy produced based on EENS. Therefore, the proper valuation of EENS is required to provide accurate estimates of the variable costs. Power system probabilistic production simulation (PPS) provides an accurate evaluation framework for EENS and LOLP3,9. Several methods have been used to solve such a complicated optimization problem. Some of the classic optimization methods that have been applied to the GEP problem include the analytic hierarchy process10, dynamic and quadratic programming11,12, and mixed integer nonlinear programming (MINLP) models13,14. In14, the option of demand-side management (DSM) was modeled as an equivalent generating unit used only at peak load to maintain system reliability. In15, a GEP problem incorporating renewable energy was presented, where an hourly unit commitment problem was used to support the selection of fuel mixes of power plants. In16,17, a least-cost GEP model considering emission reduction was presented in which the penalty cost of emission was added to the objective function, but the reliability constraint was not taken into account. Also, the GEP problem with the integration of renewable energy technologies and the expected impacts in terms of emission reduction was presented in18. Despite the simple and easy implementation of MILP for solving the GEP problem, it has been utilized with few control variables. In19, MILP with fuzzy objectives was applied to create the expansion model of the primary heat source in district heating systems with combined heat and power units. In20, the GEP problem was handled by MILP considering the impact of unit commitment and renewable sources, but the forced outage rate of generation units was not taken into account in the expected-energy-produced calculation. However, because GEP is a complex problem with nonlinear constraints, the convergence of those classical methods to the optimal decision is difficult. Also, commercial packages like the Wien Automatic System Planning (WASP) package have been used for solving the GEP problem21,22. On the other side, the unavailability of generation units was not considered in the variable cost calculations in21.
In addition, a mixed integer linear programming model for the optimal GEP problem was introduced in Ref.23. This model was designed as a two-step process, with the first phase using dynamic programming to determine the best connection lines for electricity and the subsequent phase involving optimal transmission and GEP. A dynamic, integrated planning model for GEP involving transmission expansion was additionally created in24 using the linearized form of the AC power flow model as the basis for the generalized Benders decomposition approach. With the goals of minimizing risk at each level of expected return and maximizing anticipated return for any particular level of risk, Ref.25 adapted portfolio theory to the optimum GEP problem of Iran, including financial risk management. Furthermore, with the aim of minimizing the expected cost and the conditional value-at-risk, Jin et al. presented a two-phase stochastic mixed-integer program to handle the GEP problem under uncertain demand growth and fuel price development26. Ref.27 explores the complementary possibilities of demand-side response by looking at studies on a future German energy system that would exclusively use RES. In the example of the Chilean electrical grid, Ref.28 handles GEP with substantial RES penetration by using a column generation technique and a unique Dantzig−Wolfe decomposition. The study shows that the suggested method outperforms commercial solver software because it overcomes intractability and drastically lowers the computational load.
Over the last decades, there has been growing interest in modern nature-inspired algorithms, classified as meta-heuristic algorithms, such as the genetic algorithm (GA) [29][30][31], gravitational search algorithm (GSA)32, shuffled frog leaping algorithm (SFLA)33, and modified shuffled frog leaping algorithm (MSFLA)9. The performance of GA was improved by application of the virtual mapping procedure and controlled elitism (NSGA-II), which has been used for solving the GEP problem with and without network constraints as in29,31, respectively, and compared with differential evolution as in34. Also, an investigation of the impact of RES penetration on the environmental aspect was conducted on the GEP model using GA32. An implementation of capacity expansion planning based on the electricity market was developed in30 to help generation companies decide whether to invest in new assets. In9, a comparison between GA, SFLA, and MSFLA was presented for solving the GEP problem with reliability constraints. Also, particle swarm optimization (PSO), differential evolution (DE), evolutionary programming (EP), ant colony optimization (ACO), tabu search (TS), simulated annealing (SA), and a hybrid approach were applied in35. In that study, the penalty factor used for the reliability criteria was very small compared to the objective function, which does not guarantee the feasibility of the achieved solutions; as a result, the reported solutions did not satisfy the reliability constraint. Thus, the penalty factor must be several times the magnitude of the objective function. The PSO algorithm was used in36 to solve the GEP problem in the deregulated electricity market. Also, a centralized GEP problem was addressed by a distributionally robust chance-constrained approach in37.
The GEP problem is high-dimensional: the length of a solution string equals the product of the number of planning stages and the number of candidate unit types. Therefore, the virtual mapping procedure (VMP), penalty factor approach (PFA), and modified intelligent initial population generation (MIIPG) are employed to reduce the search space and the computational time.
Paper contributions
The contributions of this paper with respect to the previous research in the area can be summarized as follows: • A multi-stage GEP with reliability constraints is presented to minimize the capital investment cost, salvage value, operation and maintenance cost, and outage cost under several constraints. • A comparative assessment of different modern algorithms (PSO, CSA, AO, BES, and HBA) for solving the proposed reliability constrained GEP problem is performed. • Several modifications are proposed, namely VMP, PFA, and MIIPG, to decrease the GEP problem search space and reduce the computational time. • The high effectiveness and superiority of the proposed HBA over the other applied algorithms is demonstrated for solving the proposed reliability constrained GEP problem on three test case studies of 6-year, 12-year, and 24-year short- and long-term planning horizons.
Paper organization
The rest of this paper is organized as follows. The next section presents the GEP problem formulation and the proposed problem modifications. The subsequent section describes the proposed HBA for solving the reliability constrained GEP problem. The test system and simulation results are then presented and discussed. Finally, the paper is concluded in the last section.
Objective function
Solving a reliability constrained GEP problem aims at determining the optimum expansion plan over the planning period that achieves the minimum total cost while satisfying the constraints. The GEP total costs can be divided into two main parts, related to newly installed and already existing units. In this context, the investment and salvage costs belong to the first part, while the operation and maintenance and outage costs relate to the candidate and existing units all together, as follows9,33:
Capital investment cost
This term represents the investment cost of the new candidate units, I(u_t), which can be given by

I(u_t) = Σ_k (1 + i)^{−[t0 + (t−1)s]} · CI_k · u_{t,k},

where the sum runs over the candidate unit types k; i is the discount rate; CI_k is the investment cost of each new candidate unit of type k added in stage t, selected among different technologies; N is the number of selected candidate units of technology k; u_t is the capacity vector of all candidate unit types in the stage; t0 is the number of years between the reference date for discounting and the first year of the study; and s is the number of years in each stage t.
Salvage value cost
Salvage value SV(u_t) is the real value of a generating unit at a specific time after considering the depreciation rate, and is calculated as

SV(u_t) = Σ_k (1 + i)^{−(t0 + T·s)} · δ_{k,t} · CI_k · u_{t,k},

where δ_{k,t} is the salvage factor of unit k added in stage t and T is the number of stages in the planning horizon. It is assumed that the investment cost for a candidate unit selected by the expansion plan is made at the beginning of the stage in which it goes into service, while the salvage value is calculated at the end of the planning horizon1.
Operating and maintenance cost
This term of the objective function is the generation operation and maintenance (O&M) cost for existing and new candidate units, assumed to occur in the middle of the corresponding planning stage, and is calculated as

M(X_t) = Σ_k (1 + i)^{−[t0 + (t−1)s + s/2]} · (FOM_k · X_{t,k} + VOM_k · G_{t,k}),

where X_t and G_{t,k} are the capacity and the expected energy produced for all existing and selected candidate units of each type k in stage t, and FOM_k and VOM_k are the fixed and variable O&M costs, respectively. The fixed cost of each generation unit is calculated based on its capacity (kW), while the variable cost is proportional to the expected energy produced by each generation unit in each stage. Therefore, proper calculation of the expected energy produced by each generation unit is crucial to provide a more accurate estimation of the variable costs. The expected energy produced by each generation unit in the system can be determined through the calculation of the expected energy not served (EENS). The EENS, one of the reliability indices, can be calculated by the conventional method or by probabilistic production cost simulation methods to obtain the cost modeling. The conventional method consists of establishing the capacity outage probability table and the load probability table and computing the margin between them to calculate reliability indices such as EENS and LOLP7,38. This method provides a relatively simple approach for EENS calculation but takes more computation time, so it is not suitable for large-scale GEP problems. Therefore, the relative computational speed and solution quality of six different probabilistic production cost simulation methods have been assessed for the expected energy produced as in3,39. In this context, the comparison results imply that probabilistic production cost simulation using the equivalent energy function (EEF) method is more accurate and takes less computation time.
Expected energy not served cost (O(X_t))
There are several reliability indices, such as EENS, LOLP and LOLE. EENS reflects the reliability status of a power system better than the other indices. In this context, customer satisfaction with a better supply greatly influences the utility's competitive ability; a continuous energy supply, which indicates better system reliability, achieves customer satisfaction. On the other hand, each generating unit may be unavailable at any time depending on its forced outage rate (FOR). Therefore, the EENS cannot be made zero but should be minimized as a cost term, formulated as

O(X_t) = (1 + i)^{−[t0 + (t−1)s + s/2]} · CEENS · EENS_t,

where CEENS is the cost of EENS in $/MWh and EENS_t is the expected energy not served in stage t. It is assumed that this cost occurs in the middle of the corresponding stage. The objective of the reliability constrained GEP problem is thus to find the optimum expansion plan over the planning horizon that minimizes the total cost, including capital investment, salvage value, operation and maintenance, and outage costs, under several constraints:

Min TC = Σ_{t=1}^{T} [ I(u_t) − SV(u_t) + M(X_t) + O(X_t) ].
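The discounted bookkeeping above (investment at the start of a stage, salvage at the end of the horizon, O&M and outage costs at mid-stage) can be sketched in a few lines of Python. The function below is a hypothetical illustration of the cost terms as reconstructed here, not the authors' code; the exact discount-year conventions may differ from the original implementation.

def discount(i: float, years: float) -> float:
    """Present-worth factor for a cash flow occurring `years` after t0."""
    return (1.0 + i) ** (-years)

def stage_cost(t, s, i, cap_added, ci, delta, fom, vom, cap, energy,
               c_eens, eens, T):
    """Discounted cost of one stage t (1-based). All per-technology data
    are dicts keyed by technology k. A sketch of the objective terms;
    the discount conventions are assumptions, not the paper's exact ones."""
    start = (t - 1) * s            # years from t0 to stage start (investment)
    mid = start + s / 2.0          # mid-stage (O&M and outage costs)
    end_h = T * s                  # end of horizon (salvage value)
    invest = sum(discount(i, start) * ci[k] * cap_added[k] for k in ci)
    salvage = sum(discount(i, end_h) * delta[k] * ci[k] * cap_added[k]
                  for k in ci)
    onm = sum(discount(i, mid) * (fom[k] * cap[k] + vom[k] * energy[k])
              for k in ci)
    outage = discount(i, mid) * c_eens * eens
    return invest - salvage + onm + outage

The total objective is then the sum of stage_cost over all stages t = 1..T, which the meta-heuristics below minimize.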
Constraints
Several constraints must be considered during the expansion planning process, as follows: • Upper construction limit: the number of units of each generation technology committed must satisfy the maximum construction number at stage t:

0 ≤ u_{t,k} ≤ U_{max,t},

where U_{max,t} is the maximum construction number of each generation type at stage t.
• Spinning reserve constraint
The forecasted load demand and capacity reserve margin constraint must be met by the existing and new selected candidate units:

(1 + SR_min) · LD_t ≤ Σ_k X_{t,k} ≤ (1 + SR_max) · LD_t,

where X_{t,k} is the total capacity of existing and new units of type k, LD_t is the peak load demand in stage t, and SR_min and SR_max are the minimum and maximum required reserve margins in stage t, respectively.
• Fuel mix ratio
The selection of candidate units for expansion planning must keep each technology within a limited ratio of the total installed capacity:

FR_j^min ≤ X_{t,j} / Σ_j X_{t,j} ≤ FR_j^max,

where FR_j^min and FR_j^max are the lower and upper bounds of the jth fuel-type mix ratio in stage t, and X_{t,j} is the capacity of fuel type j in stage t.
• Reliability constraint: the existing and new candidate units must satisfy the reliability criterion to maintain a continuous energy supply. LOLP is a reliability index representing the system robustness in response to contingencies:

LOLP_t ≤ ε,

where ε is the reliability criterion, expressed in LOLP, that must not be exceeded in each stage of the planning horizon.
The equivalent energy function method
As mentioned before, the calculation of the expected energy produced by each generation unit and of the reliability indices is a very important issue in solving the GEP problem. For that purpose, probabilistic production simulation (PPS) methods have been widely used. A PPS model based on the EEF method was established to analyze the impacts and benefits of efficiency power plants in40. Also, the EEF has been applied for estimating the EENS, LOLP, and expected energy produced in solving the GEP problem as in9,33,35; it depends on the system load duration curve (LDC) within a period T, as shown in Fig. 1. The LDC probability distribution is p = f(x) = F(x)/T. The x axis is divided into sections of length Δx, and the discrete energy function can be defined as

E(J) = T · ∫ f(x) dx over the Jth section,

where J = [x/Δx] + 1 is an integer and E(J) is the energy that corresponds to a section of the LDC from x to x + Δx.
The discrete value corresponding to the maximum system load X_max is N_E = [X_max/Δx] + 1. Therefore, the total energy consumed by the system is

E_total = Σ_{J=1}^{N_E} E(J).

The FOR of the generation units is accounted for in the energy function, representing the random outages of the generation units in the EEF method. Suppose generation unit i has a capacity of C_i, a FOR of q_i, and an operational availability of p_i = 1 − q_i. Then the equivalent load duration curve (ELDC) is

f_i(x) = p_i · f_{i−1}(x) + q_i · f_{i−1}(x − C_i),

where f_0(x) is the probability distribution of the primary LDC. The corresponding discrete energy function is

E^{(i)}(J) = p_i · E^{(i−1)}(J) + q_i · E^{(i−1)}(J − k_i).

The above equation is the convolution formula of the EEF method, in which k_i = C_i/Δx; k_i is an integer because Δx is chosen to be the greatest common factor of all unit capacities. Thus, the expected energy produced by the ith unit is

G_i = p_i · Σ_{J=K_{i−1}+1}^{K_i} E^{(i−1)}(J), with K_i = Σ_{m=1}^{i} k_m.

All committed generation units are sorted in ascending order of their variable costs. The final energy function is then used to calculate the reliability indices EENS and LOLP:

EENS = Σ_{J>K_n} E^{(n)}(J), LOLP = f_n(C_Σ), with C_Σ = Σ_{i=1}^{n} C_i,

where n is the number of generating units.
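A compact Python sketch of the EEF recursion described above follows: the LDC is discretized into energy blocks E(J) of width Δx, each unit in merit order first serves its capacity band of the current equivalent energy function and is then convolved in with availability p_i and FOR q_i, and the EENS is the energy remaining beyond the total installed capacity. This is an illustrative reconstruction, not the authors' implementation; the LOLP would be obtained analogously from a parallel convolution of the normalized load distribution.

import numpy as np

def eef_dispatch(E, dx, caps, fors):
    """Equivalent energy function (EEF) sketch.
    E    : 1-D array, energy per load block J of width dx (MWh), from the LDC
    caps : unit capacities (MW) in merit (ascending variable-cost) order;
           every capacity must be an integer multiple of dx
    fors : forced outage rates q_i of the same units
    Returns (expected energy per unit, EENS, residual ELDC energy array)."""
    total_k = sum(int(c / dx) for c in caps)
    # pad so shifts by k_i never push energy off the end of the array
    E = np.concatenate([np.asarray(E, float), np.zeros(total_k)])
    produced, k_served = [], 0
    for c, q in zip(caps, fors):
        p, k = 1.0 - q, int(c / dx)
        # expected energy this unit serves from the current ELDC band
        produced.append(p * E[k_served:k_served + k].sum())
        # convolve the unit's outages into the equivalent energy function
        E = p * E + q * np.concatenate([np.zeros(k), E[:-k]])
        k_served += k
    eens = E[k_served:].sum()   # energy beyond total installed capacity
    return produced, eens, E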
Proposed GEP model modifications
Because the GEP problem is high-dimensional, with multiple conflicting objectives and nonlinear constraints, making it a complex optimization problem, some modifications have been applied to deal with it. These modifications, namely the virtual mapping procedure (VMP), the penalty factor approach (PFA), and the modified intelligent initial population generation (MIIPG), are used to simplify the GEP problem and improve the effectiveness of the meta-heuristic algorithms, as follows. Virtual mapping procedure (VMP). This mapping procedure transforms each combination of candidate units into a dummy variable for each stage. This dummy variable represents the position of each agent in the search space, which is updated at each iteration by the algorithm. Thus, the decision variable of each stage is represented by a single variable only, which needs less memory space. Further, if the mapped variable takes part in all related solutions, a small change in the mapped variable reduces the number of infeasible solutions2,29,33. The steps involved in VMP are as follows: • Form all possible combinations of the candidate units.
• For each combination, multiply the number of units by the corresponding capacities and add them to obtain the total capacity of the combination. • Arrange the combinations in ascending order of total capacity, ordered further by operation and maintenance costs.
As a result, a multivariable problem is reduced to a single variable, which serves as the decision variable. Because five distinct types of candidate units are used as decision variables at each stage, the array size of a solution would otherwise grow in multiples of 5. With VMP, the number of decision variables is instead a multiple of the number of stages, as the array size becomes 1 instead of 5 for each stage; a size reduction of 80% per stage is thus realized. This reduces the dimensionality and the memory space used while improving the algorithm's performance.
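A minimal Python sketch of the VMP is given below. The tie-breaking of equal capacities by operation and maintenance cost is our reading of the procedure, and the per-type construction limits, capacities, and O&M costs are hypothetical inputs.

```python
from itertools import product

def build_vmp_table(max_units, capacities, om_costs):
    """Enumerate every combination of the five candidate types (bounded by
    the hypothetical per-type limits 'max_units') and map each to a single
    integer index, so one dummy variable encodes a whole stage plan."""
    table = []
    for combo in product(*(range(m + 1) for m in max_units)):
        cap = sum(n * c for n, c in zip(combo, capacities))
        om = sum(n * o for n, o in zip(combo, om_costs))
        table.append((combo, cap, om))
    # ascending total capacity, ties broken by O&M cost, so that
    # neighbouring indices decode to similar plans
    table.sort(key=lambda rec: (rec[1], rec[2]))
    return table

# table[v] decodes the dummy decision variable v back to a unit combination.
table = build_vmp_table([3, 3, 3, 3, 3],
                        [200.0, 450.0, 500.0, 1000.0, 700.0],  # hypothetical MW
                        [2.0, 1.5, 1.0, 0.5, 0.6])             # hypothetical O&M
```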
Objective function modification (with PFA)
Because the constraints of the GEP problem are nonlinear and complex, obtaining feasible solutions is difficult. However, the constrained problem can be transformed into an unconstrained one by using the PFA, which is common to all meta-heuristic techniques. Infeasible solutions are then avoided in subsequent iterations by adding penalty values, proportional to each violated constraint, to the objective function. The objective function with the PFA is given as the fitness function cost (FC), where FC_i is the objective function of the i-th individual modified by the PFA; Ob_i is the objective function value without penalty terms, as expressed in Eq. (7); α is the penalty factor for the violated constraints; p_1 is the violation amount of the spinning reserve margin constraint; p_2 is the violation amount of the fuel mix ratio constraint; and p_3 is the violation amount of the LOLP constraint. The penalty factor must be several times greater than the total cost to distinguish feasible from infeasible solutions. When the penalty factor is small compared with the total costs, the output solution may be infeasible; this is the case for the results in 9,33, whose reported reliability constraints violate their permissible ranges.
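A minimal sketch of the PFA fitness in the spirit of the FC expression, using the penalty factor α = 10^15 quoted later for the experiments; clipping the violation amounts at zero is our assumption.

```python
def fitness(total_cost, p1, p2, p3, alpha=1e15):
    """FC_i = Ob_i + alpha*(p1 + p2 + p3): raw plan cost plus the penalty
    factor times the violation amounts of the reserve-margin, fuel-mix and
    LOLP constraints (each zero when its constraint is satisfied)."""
    return total_cost + alpha * (max(0.0, p1) + max(0.0, p2) + max(0.0, p3))
```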
Modification of initial population generation (MIIPG)
The first step in any optimization problem is the random creation of an initial population. The position of each agent is then updated according to the applied algorithm. Many of the initial solutions may be infeasible because of the large search space and complex constraints, which affects the convergence and performance of the algorithms. Therefore, the creation of the initial population is modified to decrease the search space. The minimum and maximum cumulative capacities made available in the earlier stages are considered in this procedure. As a result, the minimum and maximum capacity needed for each stage can be determined from the forecasted load demand and the reserve margin, where CAP_min,t and CAP_max,t represent the minimum and maximum capacities needed for stage t. The preceding stages' capacities are calculated as the sum of the capacities of the selected units and the existing capacities; their values are never constant but change with each chosen combination, as in 2.
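The per-stage capacity window can be sketched as follows, using the 20%/50% reserve-margin limits quoted later for the test system; treating the window as (1 + reserve margin) times the forecast peak is our assumption.

```python
def stage_capacity_window(peak_demand_t, r_min=0.20, r_max=0.50):
    """CAP_min,t and CAP_max,t for stage t: the cumulative installed
    capacity must cover the forecast peak plus the reserve-margin band.
    Initial agents are then sampled only from combinations whose total
    capacity lies inside this window, shrinking the search space."""
    return (1.0 + r_min) * peak_demand_t, (1.0 + r_max) * peak_demand_t
```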
Honey badger algorithm for reliability constrained dynamic GEP
One approach to obtaining the optimal solution of the GEP problem is through modern meta-heuristic optimization algorithms. Most of these algorithms are based on natural phenomena, and a fitness function is used as an indicator of the distance from the optimal solution. The advantages of reducing the array size with VMP, converting the constrained problem into an unconstrained one with PFA, and reducing the solution space with MIIPG are all exploited when solving the GEP problem. For the proposed reliability-constrained GEP problem, a novel honey badger algorithm (HBA) is developed. It is a meta-heuristic search algorithm inspired by the intelligent foraging behavior of the honey badger as it reaches its prey. In the HBA, the dynamic search behavior of the honey badger, with its digging and honey-finding approaches, is formulated into exploration and exploitation phases 41. The honey badger prefers to stay alone in self-dug holes and meets other badgers only to mate. Because of its courageous nature, it will attack even much larger predators when it cannot escape, and it can climb trees to reach bird nests and beehives for food. A honey badger locates its prey by its smelling and digging skills, or it follows the honeyguide bird, which can locate hives but cannot get the honey. The first way the honey badger reaches its food source is called the digging mode, and the second is called the honey mode. The first mode is executed by the honey badger alone, whereas the second strategy is executed with the help of the bird that locates the hives; this second mode creates a relationship in which both enjoy the reward of teamwork. The HBA has dynamic search modes because of its ability to maintain the trade-off balance between exploration and exploitation during the search. The mathematical model of the HBA is as follows. The first step of the proposed HBA is to initialize the honey badgers according to the population size N and their respective positions, where x_i is the honey badger position, lb_i and ub_i are the lower and upper limits of each position in the search space, and r_1 is a random number between 0 and 1.
Then, the intensity In is defined; it is related to the concentration strength of the prey and the distance between the prey and the i-th honey badger. When the smell is strong, the motion will be fast, and vice versa. The intensity is calculated using r_2, a random number; S, the source strength; and d_i, the distance between the prey position x_prey and the position of the i-th badger.
• To guarantee a smooth transition from the exploration phase to the exploitation phase, the density factor α, which regulates time-varying randomness, is specified and updated. This factor decreases with the iterations to reduce randomness over time, where iter_max is the maximum number of iterations and C is a constant equal to 2.
• To improve escape from local optima, a flag F is generated that alters the search direction.
Agents thus have several options for thoroughly exploring the search space. The flag is determined from r_3, a random number in [0,1].
• The agents' positions are then updated: x_new is computed according to two phases, the digging phase and the honey phase. In the digging phase, a honey badger performs a motion resembling a cardioid shape, where β is the ability of the honey badger to get food, which is greater than or equal to 1 (default 6), and r_4, r_5, and r_6 are three different random numbers between 0 and 1. In the honey phase, the honey badger follows the honeyguide bird to the located beehive.
Here r_7 is a random number between 0 and 1, and d_i and α are calculated using Eqs. (25) and (26), respectively. The flowchart of the HBA is shown in Fig. 2. A honey badger therefore searches near its prey, and the search is influenced by the time-varying factor α. Additionally, two user-defined parameters, β and C, have a considerable impact on HBA performance, so their values must be chosen carefully. The parameter values used for the proposed algorithm are β = 6 and C = 2, taken from 41.
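The following Python sketch assembles the phases above into a complete HBA loop, following the general form of the algorithm in 41 (initialization, intensity, density factor, direction flag, digging and honey moves). The greedy acceptance of improved positions and the small ε-guard in the intensity denominator are implementation choices of ours, not details fixed by the text.

```python
import numpy as np

def hba(obj, lb, ub, n_agents=60, n_iter=200, beta=6.0, C=2.0, seed=0):
    """Honey badger algorithm sketch: obj maps a real vector (e.g. the VMP
    dummy variables, later rounded) to a PFA fitness to be minimized."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + rng.random((n_agents, lb.size)) * (ub - lb)      # initialization
    fit = np.array([obj(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), fit.min()
    for t in range(n_iter):
        alpha = C * np.exp(-t / n_iter)            # decaying density factor
        for i in range(n_agents):
            d = best - X[i]                        # distance to the "prey"
            S = np.sum((X[i] - X[(i + 1) % n_agents]) ** 2)  # source strength
            I = rng.random() * S / (4.0 * np.pi * (np.dot(d, d) + 1e-12))
            F = 1.0 if rng.random() <= 0.5 else -1.0         # direction flag
            if rng.random() < 0.5:                 # digging phase (cardioid)
                r3, r4, r5 = rng.random(3)
                x_new = (best + F * beta * I * best + F * r3 * alpha * d
                         * abs(np.cos(2 * np.pi * r4)
                               * (1.0 - np.cos(2 * np.pi * r5))))
            else:                                  # honey phase
                x_new = best + F * rng.random() * alpha * d
            x_new = np.clip(x_new, lb, ub)
            f_new = obj(x_new)
            if f_new < fit[i]:                     # greedy acceptance (ours)
                X[i], fit[i] = x_new, f_new
                if f_new < best_f:
                    best, best_f = x_new.copy(), f_new
    return best, best_f
```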
In addition, several other modern meta-heuristic optimization algorithms are employed: the crow search algorithm (CSA), the aquila optimizer (AO), bald eagle search (BES), and particle swarm optimization (PSO). The GEP problem has been solved using PSO and some of its modified versions in 2,35,42. The CSA is a modern optimization tool based on the crow's intelligence in storing and retrieving its food in hiding locations 43. This algorithm is used here for the first time to solve the GEP problem, but it has been applied to several other optimization problems, such as economic dispatch 44, unit commitment 45, preventive maintenance scheduling 46, capacitor allocation in distribution systems 47, and the optimal power flow problem 48,49.
The AO is a modern meta-heuristic optimization approach inspired by the natural behavior of the Aquila during prey capture. The Aquila can switch among catching strategies thanks to its speed and agility, and the choice of hunting tactic is also influenced by the hunting circumstances; more details about the AO are given in 50. The BES algorithm is a novel meta-heuristic optimization algorithm that mimics the hunting process, or intelligent behavior, of bald eagles as they search for fish 51. The hunting process in BES is represented by three strategies: selecting the search space, searching within the selected space, and swooping. For the tested systems, all constraints presented in Eqs. (7)-(10) are kept within their permissible limits by all optimization algorithms.
Test system description and results discussion
In this section, the proposed HBA is applied to three cases based on the considered planning horizon: Case 1, a short-term GEP problem with a 6-year planning horizon; Case 2, a long-term GEP problem with a 12-year planning horizon; and Case 3, a long-term GEP problem with a 24-year planning horizon. Each planning horizon consists of two-year stages, giving 3, 6, and 12 stages for the short- and long-term planning horizons, respectively. The number of years between the reference date of the cost calculations and the first year of study (t_0) is assumed to be 2 years.
Test system description
In this paper, the GEP problem is solved by PSO, CSA, AO, BES, and HBA. The simulations are carried out on a test system whose data are reported in the Supplementary Material. The test system consists of 15 existing generation units (classified by fuel type: oil, liquefied natural gas (LNG), coal, and nuclear (Nuc. PWR)), as in Table S.1, while 5 different new generation technologies are selected as candidate units (by fuel type: oil, LNG, coal, and nuclear (Nuc. PWR and Nuc. PHWR)), as in Table S.2. The forecasted peak load, as in Table S.3, starts from an initial peak load of 5000 MW, and the other data for the existing and candidate generation units are taken from 9,35. Different parameters have been used for the same test system at different planning horizons, as in 2,9,35. In this paper, the discount rate is taken as 8.5% and the LOLP criterion at each stage is set to 0.01. The lower and upper limits of the reserve margin are 20% and 50%, respectively. The cost of unserved energy (EENS), which is calculated by the EEF method, is set at 0.05 $/kWh. The lower and upper bounds of the capacity mixes by fuel type are 0% and 30% for oil-fired power plants, 0% and 40% for LNG-fired, 20% and 60% for coal-fired, and 30% and 60% for nuclear, respectively. The initial period is set to two years; the investment cost is assumed to occur at the beginning of the year, and the salvage cost at the end of the planning horizon.
Parameters for GEP and optimization algorithms
The optimization algorithms PSO, CSA, AO, and BES, together with the proposed HBA, are used to solve the short- and long-term capacity expansion planning problems with 200 iterations and a population size of 60 over 30 simulation runs. In the BES algorithm each solution is updated three times per iteration, whereas it is updated once in the other algorithms; the population size is therefore set to 30 for BES only. Moreover, a penalty factor α = 10^15 is used to penalize infeasible solutions.
Numerical results and discussion
Using VMP to create dummy control variables, PFA to convert the constrained problem into an unconstrained one, and MIIPG to decrease the search space, PSO, CSA, AO, BES, and the proposed HBA are applied to the short- and long-term reliability-constrained GEP planning horizons.
Simulation results for case-1
In case 1, the CSA, AO, BES, PSO, and the proposed HBA are employed to solve the reliability-constrained GEP over a 6-year (3-stage) planning horizon. Table 1 summarizes the optimal results obtained by each algorithm, namely the number of power plants of each type in each stage. The total-cost results, in terms of the statistical indices of each algorithm, are recorded in Table 2.
The comparative results show that the proposed HBA achieves the best total costs among the applied algorithms, with improvements in the objective function of 3.2%, 1.1%, and 0.081% over the CSA, AO, and BES, respectively. Although the best cost of the proposed HBA equals that of PSO, the HBA achieves a standard deviation and standard error of (2.27/12.5) × 10^7, which are lower than the corresponding values for the CSA, AO, BES, and PSO of (2.58/14.1) × 10^7, (2.99/16.4) × 10^7, (2.53/13.8) × 10^7, and (3.09/16.9) × 10^7, respectively. These findings demonstrate the effectiveness of the HBA for solving the GEP in case 1 compared to the other algorithms. Moreover, Figs. 3 and 4 show the convergence curves and box plots of the CSA, AO, BES, PSO, and the proposed HBA. From Fig. 3, the HBA converges better than the others, and from Fig. 4 the proposed HBA yields the shortest box plot. In addition, it provides the smallest worst objective, 7.1736 × 10^9 $, whereas the CSA, AO, BES, and PSO achieve 7.3464 × 10^9, 7.3267 × 10^9, 7.1976 × 10^9, and 7.2265 × 10^9 $, respectively. The LOLP values obtained for each stage are shown in Table 3, which indicates that the reliability criterion is satisfied by every algorithm. Both the HBA and PSO achieve the smallest LOLP values, of 0.009787, 0.005251, and 0.008899 for the three planning stages, respectively.
Simulation results for case-2
In this case, a long-term 12-year (6-stage) planning horizon is considered. The CSA, AO, BES, PSO, and the proposed HBA are employed to solve the reliability-constrained GEP. Table 4 shows the number of new candidate generation units for each stage of the planning horizon, and the statistical indices of each algorithm are recorded in Table 5. For the 12-year planning horizon, the proposed HBA provides the minimum best total cost and better performance than the other algorithms, with cost improvements of 2.5%, 4.5%, 2.5%, and 5.16% over the CSA, AO, BES, and PSO, respectively. It should be noted that the performance of PSO is more affected by the planning horizon than the other algorithms. Figures 5 and 6 show the convergence characteristics and box plots of the CSA, AO, BES, PSO, and the proposed HBA, confirming the better performance and effectiveness of the proposed HBA. From Fig. 5, the proposed HBA shows better convergence than the others, and from Fig. 6 it shows clear superiority, as follows:
• The proposed HBA provides the smallest average objective, 1.3675 × 10^10 $.
• The proposed HBA provides the smallest worst objective, 1.4211 × 10^10 $.
In this context, the values of the LOLP criterion for each stage, which must be satisfied, are shown in Table 6; the reliability criterion is satisfied by every algorithm.
Simulation results for case-3
In this case, a long-term 24-year (12-stage) planning horizon is considered. The CSA, AO, BES, PSO, and the proposed HBA are employed to solve the reliability-constrained GEP. Table 7 shows the number of new candidate generation units for each stage of the planning horizon, and the statistical indices of each algorithm for case 3 are recorded in Table 8. The comparison shows that the proposed HBA achieves cost improvements of 4.2%, 2.72%, 2.7%, and 3.4% over the other applied algorithms. Thus, the HBA achieves the optimum reliability-constrained GEP with the minimum total cost and the best performance for the long-term planning horizon. Figures 7 and 8 show the convergence characteristics and box plots of the CSA, AO, BES, PSO, and the proposed HBA, confirming the better performance and effectiveness of the proposed HBA. From Fig. 7, the proposed HBA shows better convergence than the others, and from Fig. 8 it shows clear superiority, as follows:
• The proposed HBA provides the smallest average objective, 2.4953 × 10^10 $.
• The proposed HBA provides the smallest worst objective, 2.6569 × 10^10 $.
• The proposed HBA provides the smallest standard error and standard deviation, as recorded in Table 8.
In addition, the reliability criterion is satisfied in every stage; the LOLP values are given in Table 9. From the three studied cases of short- and long-term planning horizons, the proposed HBA obtains the optimum reliability expansion plan, with the minimum objective function and satisfied constraints, better than the CSA, AO, BES, and PSO algorithms. Table 10 shows and analyzes the main differences between the presented study and previous works. As shown, the presented study based on the HBA can achieve the highest reserve margin, reaching 60%. The same margin was addressed via differential evolution in 34, but the proposed study achieves a greater reduction in investment costs. Moreover, the presented study based on the HBA addresses three different planning horizons of 6, 12, and 24 years. The loss of load probability is also greatly minimized, to 0.01, by the presented study based on the HBA and by the elitist Non-dominated Sorting Genetic Algorithm version II 29.
Conclusions
This paper presented a novel honey badger algorithm (HBA) for solving the proposed reliability-constrained generation expansion planning (GEP) problem. In the GEP problem, a multi-stage model with reliability constraints is formulated to minimize the total costs over the planning horizon while maintaining several practical constraints on the spinning reserve, the fuel mix ratio, and the loss of load probability. The virtual mapping procedure, the penalty factor approach, and the modified intelligent initial population generation are incorporated into the HBA to decrease the search space and reduce the computational time. Besides the proposed HBA, four modern meta-heuristic optimization algorithms, the crow search algorithm, the aquila optimizer, bald eagle search, and particle swarm optimization, were applied in three test cases to solve the short- and long-term reliability-constrained GEP problem. The proposed HBA was successfully applied to the reliability-constrained GEP problems, with results that outperform the CSA, AO, BES, and PSO in terms of best, average, worst, standard error, and standard deviation.
The proposed HBA thus achieves consistent improvements for short- and long-term expansion planning over the other algorithms. Future studies will incorporate uncertainties, such as load demand and generation unit outages, into the GEP problem.
Figure 2. Flowchart of the proposed HBA for solving the GEP problem.
Figure 3. Convergence characteristics of the comparison algorithms for case 1.
Figure 4. Box chart of variations across runs for case 1.
Figure 6. Box chart of variations across runs for case 2.
Figure 7. Convergence characteristics of the comparison algorithms for case 3.
Figure 8. Box chart of variations across runs for case 3.
Table 1. Number of new candidate units in each stage for case 1.
Table 3. Values of the LOLP criterion for case 1.
Table 4. Number of new candidate units in each stage for case 2.
Figure 5. Convergence characteristics of the comparison algorithms for case 2.
Table 6. Values of the LOLP criterion for case 2.
Table 7. Number of new candidate units in each stage for case 3.
Table 8. Statistical results for case 3.
Table 9. Values of the LOLP criterion for case 3.
Table 10. Main differences between the presented study and previous works.
Psychometric properties of the critical thinking disposition assessment test amongst medical students in China: a cross-sectional study
Critical thinking disposition helps medical students and professionals overcome the effects of personal values and beliefs when exercising clinical judgment. The lack of effective instruments to measure critical thinking disposition in medical students has become an obstacle for training and evaluating students in undergraduate programs in China. The aim of this study was to evaluate the psychometric properties of the CTDA test. A total of 278 students participated in this study and responded to the CTDA test. Cronbach's α coefficient, internal consistency, test-retest reliability, floor effects, and ceiling effects were measured to assess the reliability of the questionnaire. Construct validity of the pre-specified three-domain structure of the CTDA was evaluated by exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The convergent validity and discriminant validity were also analyzed. Cronbach's alpha coefficient for the entire questionnaire was calculated to be 0.92, all of the domains showed acceptable internal consistency (0.81-0.86), and the test-retest reliability indicated acceptable intra-class correlation coefficients (ICCs) (0.93, p < 0.01). The EFA and the CFA demonstrated that the three-domain model fitted the data adequately. The test showed satisfactory convergent and discriminant validity. The CTDA is a reliable and valid questionnaire for evaluating the disposition of medical students towards critical thinking in China and can reasonably be applied in critical thinking programs and medical education research.
Background
For decades, the importance of developing critical thinking skills has been emphasized in medical education [1]. As listed by the World Federation for Medical Education, critical thinking should be part of the training standards for medical students and practitioners [2]. Critical thinking is essential for medical students and professionals to be able to evaluate, diagnose and treat patients effectively [3]. One major criticism of medical education is the gap that exists between what students learn in the classroom setting and what they experience in clinical practice [4]. Only a few students will analyze and employ critical thinking when they acquire knowledge during their education [5]. Therefore, critical thinking has become increasingly necessary for medical students and professionals [6].
Critical thinking is an indispensable component of ethical reasoning and clinical judgment, and possessing reasonable critical thinking abilities reduces the risk of clinical errors [7]. Adverse events that occur by human error and preventable medical errors were frequently caused by a failure of cognitive function (e.g., failure to synthesize and/or take action based on information), which was second only to 'failure in technical operation of an indicated procedure' [8,9]. Similar problems have been reported in several countries such as the United Kingdom, Canada and Denmark [10]. Therefore, medical professionals need to exercise critical thinking, transcend simple issues, and make sound judgments in order to handle adverse medical situations [11]. Providing evidence and logical arguments to medical students and professionals is beneficial in order to support clinical decision-making and assertions [12]. Lipman and Deatrick are of the same opinion; i.e., critical thinking is a prerequisite for sound clinical decision-making [13]. Therefore, medical students should be exposed to clinical learning experiences that promote the acquisition of critical thinking abilities that are needed to provide quality care for patients in modern complex healthcare environments [14].
Currently, critical thinking is defined as a kind of reasonable reflective thinking arising from the synthesis of cognitive abilities and disposition [15]. The former includes interpretation, deduction, induction, evaluation, and inference, whereas the latter includes having an open mind and being intellectually honest [16]. The critical thinking disposition (CTD) has been described by seven attributes: truth-seeking, open-mindedness, analyticity, systematicity, critical thinking self-confidence, inquisitiveness, and maturity [17]. A disposition to critical thinking is essential for professional clinical judgement [18]. An assessment of the CTD in professional judgment circumstances and educational contexts can establish benchmarks to advance critical thinking through training programs [4].
To investigate and assess the CTD in medical students, a reliable and valid tool is indispensable. Several CTD measurement tools are available, such as the California Critical Thinking Dispositions Inventory (CCTDI), Yoon's Critical Thinking Disposition (YCTD), and the Critical Thinking Disposition Assessment (CTDA). The CCTDI was developed to evaluate the CTD in normal adults. It had good reliability and validity in Western cultures; however, it showed low reliability and validity for Chinese nursing students in previous studies [19,20]. Yoon created the YCTD, which was based on the CCTDI, for nursing students in South Korea [21]. Based on a literature review and other measures of critical thinking disposition, Yuan developed an English version of the CTDA and used it to measure the CTD of medical students and professionals; in that study, the Cronbach's alpha for the entire assessment was 0.94 [22]. The CTD domains of the CTDA were defined as "systematicity and analyticity", "inquisitiveness and conversance", and "maturity and skepticism". "Systematicity and analyticity" is the cognitive component of the CTD and measures the tendency to organize and apply evidence to address problems. Being systematic and analytical allows medical students to connect clinical observations with their knowledge to anticipate events that are likely to threaten the patient's safety [23]. "Inquisitiveness and conversance" is the motivational component of the CTD. It measures medical students' desire to learn whenever the application of knowledge is inconclusive and is essential for expanding their knowledge in clinical practice [24]. "Maturity and skepticism" is the personality component of the CTD, measuring the disposition to be judicious in decision making and how often this leads to reflective skepticism. This disposition has particular implications for ethical decision making, particularly in time-pressured clinical situations [25]. All the domains are tightly connected to one another. In adapting the instrument to a Chinese version, we followed the WHO guidelines for translation and cross-cultural adaptation [26]: forward translation, expert panel review, back translation, pretest and cognitive interviews, and formulation of the final version. As such, the CTDA may be especially valuable for institutes or universities in Asian countries, or in Eastern cultures more broadly, for assessing the critical thinking disposition of medical students. Given the lack of effective instruments to assess the CTD in undergraduate medical programs in mainland China, the objective of this investigation was to evaluate the psychometric properties of the CTDA.
Sample sizes
According to Kline's recommendation, the sample size should be based on the principle of a 1:10 item-to-participant ratio [27]. The CTDA has nineteen items, so the sample size should be at least 190 students. With a sample of 300 students, this study exceeds the recommended minimum.
Participants and procedures
Students of clinical medicine in China must undergo 5 years of medical training. Years 1 and 2 are dedicated to the basic sciences, years 3 and 4 to clinical medicine, and year 5 is the clinical internship. This study involved stratified-cluster random sampling. First, participants were recruited from the different academic years, with two classes selected randomly from each year. Each class contained approximately 30 individuals, giving 300 medical students enrolled in total. The minimum sample for assessing test-retest reliability via ICCs is 14 [28]; forty-nine respondents were randomly selected to complete the online survey again 2 weeks later, and 43 of them did so.
Three hundred medical students completed the online survey between March and June 2019. Respondents provided written consent to participate in the study. A self-administered questionnaire was applied in the survey. The anonymity of participants was guaranteed, and all students took part voluntarily. Completing the questionnaire took approximately 15 to 20 minutes.
Instrument
The questionnaire consisted of two components: part A, which included sociodemographic characteristics (e.g., age, gender, and academic year), and part B, which contained the CTDA. The CTDA assesses the CTD of medical students and professionals and comprises 19 items in three domains: "systematicity and analyticity", "inquisitiveness and conversance", and "maturity and skepticism". Items were rated on a seven-point Likert scale ranging from 1 to 7 (1 for very strongly disagree and 7 for very strongly agree) [22]. Each domain score was computed as the sum of its item scores, and the total CTDA score was the sum of the domain scores. Higher scores signified a higher CTD.
Statistical analysis

Reliability
We computed Cronbach's α as a measure of internal consistency, along with the means, standard deviations, skewness, kurtosis, and ceiling and floor effects of the questionnaire and its domains. Absolute values of skewness and kurtosis higher than 3 and 10, respectively, indicated significant deviation from a normal distribution [29]. An F-test was performed to determine the association between academic year and the domains of the CTDA. Ceiling and floor effects were considered present when more than 20% of respondents obtained the highest/lowest scores [30,31]. Following Kline's recommendations, a Cronbach's alpha above 0.70 was considered satisfactory [27], and test-retest reliability was considered good if the ICC was higher than 0.70.
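For readers reproducing the reliability analysis, Cronbach's α has a closed form that is easy to compute directly; the sketch below (illustrative, with a hypothetical score matrix) applies the standard formula rather than any specific package routine.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_var / total_var)

# Hypothetical 7-point Likert responses: 278 respondents x 19 items.
rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=(278, 19))
print(cronbach_alpha(responses))  # near 0 for random data; 0.92 in the study
```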
Validity
To assess construct validity, the original three-factor structure of the CTDA was examined with exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Factor analysis used principal component analysis with direct oblimin rotation, and a factor loading > 0.4 was considered acceptable [32]. Model fit was assessed by the following indexes: (a) CMIN/DF < 3; (b) RMSEA < 0.08; (c) AGFI > 0.80; and (d) a significant p value [33,34]. Pearson's correlation coefficients between the domains of the CTDA were used to test the inter-correlations of the scale.
The convergent and discriminant validity of the questionnaire were assessed by computing item-domain Pearson's correlations. Convergent validity was acceptable if each item correlated with its own domain at more than 0.4 [35]; discriminant validity was considered satisfactory if each item correlated more weakly with the other domains than with its own. The CFA was conducted with AMOS 21, and the other statistics were calculated with SPSS 23. Ten students checked the face validity; each item received positive feedback, indicating that the CTDA had good face validity.
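The item-domain correlation matrix used for the convergent and discriminant checks can likewise be computed directly. In the sketch below the assignment of the 19 item columns to the three domains is a hypothetical input, and since the text does not specify whether an item is removed from its own domain total before correlating, the plain item-versus-domain-total correlation is used.

```python
import numpy as np

def item_domain_correlations(scores, domains):
    """scores: (n x 19) matrix; domains: dict name -> list of item columns.
    Convergent validity: r(item, own-domain total) >= 0.4; discriminant
    validity: that r exceeds r(item, every other domain total)."""
    totals = {name: scores[:, cols].sum(axis=1)
              for name, cols in domains.items()}
    r = {}
    for item in range(scores.shape[1]):
        for name, tot in totals.items():
            r[(item, name)] = np.corrcoef(scores[:, item], tot)[0, 1]
    return r

# Hypothetical split of the 19 items across the three CTDA domains.
domains = {"systematicity_analyticity": list(range(0, 8)),
           "inquisitiveness_conversance": list(range(8, 14)),
           "maturity_skepticism": list(range(14, 19))}
```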
Basic characteristics of the study sample
Of the total number of 300 students participating in the research, 278 (92.67%) completed the study. The mean age of the 278 individuals was 20.88 ± 1.76 years (SD); within the study sample, 113 of the participants (40.64%) were male. Additionally, 54 of the individuals (19.42%) were first year students and 55 of the students (19.78%) were fifth year students.
Score distributions
Across domains, "systematicity and analyticity" obtained the highest score (43.93 ± 5.71), whereas "maturity and skepticism" scored the lowest (28.41 ± 3.96). The skewness and kurtosis coefficients of the entire questionnaire were acceptable, with the former ranging from −0.98 to −0.32 and the latter from −0.13 to 2.05. There were no floor effects in the three domains. However, items 12, 18, and 19 showed significant ceiling effects, ranging from 20.14% to 23.74%.
Reliability
The overall Cronbach's α coefficient of the CTDA was 0.92, indicating good internal consistency. The three domains also showed acceptable internal consistency (0.81-0.86). The overall split-half reliability coefficient of the CTDA was acceptable (0.89). The retest response rate was 83.67% (41/49), and the test-retest reliability (ICC = 0.93) was statistically significant for the three domains. In addition, the Pearson's correlation coefficients of all domains were acceptable. The results are reported in Table 1.
Construct validity
EFA
The Kaiser-Meyer-Olkin test result was 0.92 and Bartlett's test of sphericity yielded p < 0.05, indicating that factor analysis could be performed. The EFA revealed factors with eigenvalues greater than 1, accounting for 57.13% of the variance. The resulting three-factor solution is reported in the rotated component matrix (Table 2).
Correlation analysis between CTDA domains
The CTDA showed significant correlations between every pair of assessment domains (r = 0.61-0.72, p < 0.01). The correlations between the assessment domains, based on Pearson's correlation, are shown in Table 3.
Convergent validity and discriminant validity
Based on item-domain correlations, each item correlated with its own domain to an acceptable degree (r = 0.65-0.86, p < 0.01), so the convergent validity of the CTDA was acceptable. In addition, all items showed a higher correlation with their own domains than with the other domains, so the discriminant validity was satisfactory, as shown in Table 4.
Dose-response analysis
The relationship between academic year and the domains of the CTDA is reported in Table 5. There were significant differences among the 5 years in the total CTDA score and its domains. The year 2 students obtained the highest CTDA score (107.88 ± 11.34), and the year 1 students scored 107.20 ± 12.14. Surprisingly, the year 5 students reported the lowest level (98.91 ± 12.52). Among the 5 years, the year 2 students had the highest score (45.32 ± 5.01) in "systematicity and analyticity", and the year 1 students obtained the highest score (33.76 ± 4.59) in "inquisitiveness and conversance". Moreover, the highest score in "maturity and skepticism" was obtained by the year 3 students, at 29.16 ± 3.47. Conversely, the year 5 students had the lowest scores in all of the domains.
Discussion
The psychometric properties of the questionnaire were satisfactory. Results demonstrated that the CTDA is good, reliable, and valid for Chinese medical students. In addition, all items and domains showed acceptable kurtosis and skewness coefficients. Our results were similar to those of previous studies conducted in Ireland and Iran using other critical thinking disposition instruments [36,37]. However, three items showed a significant ceiling effect, above the accepted threshold of 20%. This result was comparable to that reported in two critical thinking studies which showed evidence of a ceiling effect in overall scores in the United States and China [38,39]. The ceiling effect might be attributable to the population distribution at schools or universities [39].
The domains of the CTDA showed reasonably acceptable reliability when evaluating the CTD of medical students. The satisfactory Cronbach's α coefficient values of the domains demonstrate the high internal consistency of the entire questionnaire. Our results are in line with other studies conducted in Asian countries: the Cronbach's α reliability of the CCTDI was 0.87 in Turkey, reported by Iskifoglu [40], and 0.80 in Iran, reported by Gupta [41]. Our study showed a Cronbach's alpha for the CTDA of 0.92, similar to the value reported in the original study [22]. Therefore, the Cronbach's α indicates that the overall internal reliability of the CTDA test is satisfactory.
Our findings indicated that the EFA of the CTDA, conducted with medical students, yielded a three-domain model, the same as in the original study [22]. Our CFA results indicated that the three-factor structure ("systematicity and analyticity", "inquisitiveness and conversance", and "maturity and skepticism") of the CTDA (AGFI = 0.83, RMSEA = 0.08) showed an acceptable fit with the data. It is likely that differences in the domains across instruments depend on the different underlying theoretical models [42,43]. Shin noted that the CFA of the YCTD revealed a seven-domain model, three domains of which (systematicity, intellectual eagerness/curiosity, and healthy skepticism) were similar to those of our study [44]. However, Zuriguelperez reported that the CFA of the Critical Thinking Questionnaire completed by Spanish students yielded a four-factor model (personal, intellectual and cognitive, interpersonal/self-management, and technical) based on the Alfaro-LeFevre theoretical model [45]. Similar results were found in Yuan's and Wang's studies of critical thinking disposition inventories for Chinese medical students [6,22]. Our research offers a plausible explanation for the high correlations between the domains: "inquisitiveness and conversance" can be taken to mean that students have the desire to learn and are intellectually curious, while "systematicity and analyticity" means that students use reason and evidence to address problems with systemic thinking. The two are tightly connected.
Our research demonstrated that the convergent validity and discriminant validity of the CTDA were satisfactory and all items displayed a higher correlation with their own domain than with other domains. Therefore, no items need to be modified or reassigned to another domain. Other studies conducted in China have reported similar results in terms of convergent and discriminant validity of the CTD instrument [46,47].
We found that the CTD scores of the year 5 medical students were lower than those of the year 1 students. A possible explanation is that employment pressure and the stress of the internship may have worsened the CTD of the fifth-year students. In addition, Ip reported that younger Chinese nursing students had higher CTD scores than older students, especially for inquisitiveness and confidence [19]. A similar result was found by Kim for Korean nursing students, whose domain scores for intellectual integrity and truth-seeking were higher in year 1 than in year 4 [48]. However, Hunter found that the CTD of nursing students was highest in year 4 [49].
The CTDA shows promise as an instrument for future studies of the CTD of medical students in China. However, certain limitations of our research should be acknowledged. First, the medical students were recruited from a single medical institution in China, so the representativeness of the sample was limited. Second, owing to time constraints, the findings were limited by the size of the study population; future studies could increase representativeness by expanding sample diversity and size. Third, the concurrent validity of the CTDA was not tested, owing to the lack of a widely used CTD scale. Fourth, the CTDA measures only dispositions, or traits, of critical thinking and cannot assess critical thinking skills.
Conclusions
Our findings demonstrate the promising applicability of the CTDA, since the questionnaire has good reliability and validity for measuring the CTD amongst Chinese medical students. The results may be valuable to other institutions assessing the critical thinking disposition of students.
What clinical factors are associated with mortality in septicemic melioidosis? A report from an endemic area
Introduction: Melioidosis, caused by Burkholderia pseudomallei, has high mortality, particularly in its septicemic form. Data on the factors associated with mortality from melioidosis are still limited. Methodology: All patients (≥ 15 years of age) who were positive for melioidosis by blood culture in the year 2009 were enrolled. The study was conducted at Khon Kaen Hospital, Thailand. Patients were divided into two groups: surviving and deceased. Multivariate logistic regression was used to identify factors associated with death by three models: clinical, laboratory, and combined. Results: There were 97 patients who had blood cultures positive for melioidosis. The mortality rate was 54.17% (52 patients). The clinical presentation model found one significant factor associated with mortality from septicemic melioidosis: pulmonary presentation. Two factors were statistically significant for death as determined by the laboratory model: white blood cell count (WBC) and blood urea nitrogen (BUN) value. For the combined model, three significant factors were associated with death: pulmonary presentation, WBC, and BUN. The adjusted odds ratios (95% confidence interval) of the three factors were 10.739 (3.300–34.953), 0.930 (0.877–0.985), and 1.057 (1.028–1.087), respectively. Conclusions: Three clinical factors associated with mortality in septicemic melioidosis were pulmonary presentation, white blood cell count, and blood urea nitrogen level. Physicians should be aware of high mortality if septicemic melioidosis patients have these clinical features. Aggressive treatment may be needed.
Introduction
Melioidosis, caused by Burkholderia pseudomallei, is an emerging infectious disease. It is endemic in northeast Thailand, northern Australia, and other tropical countries [1]. The incidence rate in northeast Thailand has been reported as 21.3 cases of melioidosis per 100,000 people per year [2]. Several risk factors for this emerging infection have been reported, including male gender, diabetes, renal failure, thalassemia, alcoholism, chronic lung disease, and steroid use [3,4].
B. pseudomallei is a Gram-negative bacillus found in soil and water. Humans may become infected through skin contamination or inhalation. The bacterium can infect several organs, such as the lungs, liver, spleen, kidneys, skin and soft tissue, and joints, or cause multi-organ or disseminated infection [3]. Because of its high virulence and tendency to disseminate, the mortality rate of melioidosis is high. Factors associated with mortality from melioidosis, however, have not been confirmed in previous studies.
Methodology
This study was conducted retrospectively at Khon Kaen Hospital, a tertiary care hospital in northeast Thailand, in an endemic area of melioidosis. All patients (≥ 15 years of age) who were admitted and tested positive for melioidosis by blood culture in the year 2009 were enrolled. Patients with incomplete data were excluded.
Clinical data of all patients were collected and included baseline characteristics; duration of presenting symptoms; occupation; co-morbid diseases; history of smoking, alcohol drinking, or steroid use; laboratory investigations; treatment; and outcomes. Patients were categorized as surviving or deceased.
The organs involved were classified as pulmonary, abdominal, musculoskeletal, or neurological presentations. Pulmonary presentation was defined as pulmonary infiltration on chest X-ray with symptoms of fever, cough, or sputum production. Abdominal presentation included liver or splenic abscess on ultrasonography. Musculoskeletal presentation included muscular abscess or septic joints. Neurological presentation included neutrophilic meningitis. Melioidosis titer was tested by an indirect hemagglutination assay (Mahidol University, Bangkok, Thailand).
Statistical analyses
Baseline and clinical characteristics of the surviving and deceased groups were compared using descriptive statistics. Because of the small numbers for some factors, nonparametric tests were used for the bivariate analyses. Wilcoxon rank-sum and Fisher's exact tests were applied to compare differences in numbers and proportions between the two groups, respectively.
Univariate logistic regression analyses were applied to calculate crude odds ratios of individual variables for mortality. Clinical variables with a p value less than 0.15 by univariate analysis were considered statistically relevant and included in subsequent multivariate logistic regression analyses. Three final multivariate models were built: a clinical presentation model, a laboratory model, and a combined model. The combined model used the significant factors from the independent clinical presentation and laboratory models plus antibiotic treatment. Analytical results are presented as crude odds ratios (ORs), adjusted ORs, and 95% confidence intervals (CIs). Possible interactions were tested in the final model, with the significance level for interaction set at 0.10. The goodness of fit of the multivariate models was evaluated using Hosmer-Lemeshow statistics. All data analyses were performed with STATA software version 10.1 (College Station, Texas, USA).
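The original analyses were run in STATA; the following Python/statsmodels sketch reproduces the form of the combined model (death regressed on pulmonary presentation, WBC, and BUN) with adjusted ORs and 95% CIs. The data frame and its column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def combined_model(df):
    """Logistic regression of death on pulmonary presentation, WBC and BUN;
    exponentiated coefficients give adjusted ORs with 95% CIs."""
    X = sm.add_constant(df[["pulmonary", "wbc", "bun"]])
    fit = sm.Logit(df["died"], X).fit(disp=0)
    ci = fit.conf_int()                  # columns 0 (lower) and 1 (upper)
    return pd.DataFrame({"adj_OR": np.exp(fit.params),
                         "ci_low": np.exp(ci[0]),
                         "ci_high": np.exp(ci[1])}).drop(index="const")
```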
This study protocol was approved by the institutional review board, Khon Kaen Hospital, based on the Declaration of Helsinki and Good Clinical Practices.
Results
During the study period, 97 patients were admitted with blood cultures positive for B. pseudomallei. One patient was excluded owing to incomplete clinical data and treatment outcome. In total, 96 patients were analyzed, and the mortality rate was 54.17% (52 patients).
Clinical presentations of the surviving and deceased groups were mostly comparable (Table 1). Four clinical features differed between the groups: duration of symptoms, pulmonary presentation, abdominal presentation, and musculoskeletal presentation. Patients who died had a shorter duration of symptoms and a higher proportion of pulmonary presentation, whereas fewer of them had abdominal or musculoskeletal presentations compared with patients who survived.
In terms of laboratory results, patients who died had significantly higher levels of five laboratory values than patients who survived: blood urea nitrogen, creatinine, aspartate aminotransferase (AST), alanine transaminase (ALT), and total bilirubin (Table 2). A ceftazidime-based regimen was the most commonly used, given to 49 patients (23 who died and 26 who survived; p = 0.315) (Table 2). The duration of hospitalization of patients who died was significantly shorter than that of patients who survived (3.75 ± 4.29 vs. 13.39 ± 10.79 days; p < 0.001).
In the multivariate logistic regression analyses, the clinical presentation model identified only one significant factor associated with mortality from septicemic melioidosis: pulmonary presentation, with an adjusted OR of 6.924 (95% CI 2.066-23.212). This model was adjusted for the factors shown in Table 3. Two factors were statistically significant for death in the laboratory model: white blood cell count and blood urea nitrogen value, with adjusted ORs of 0.722 (95% CI 0.540-0.966) and 1.110 (95% CI 1.026-1.201), respectively.
For the combined model, three significant factors were associated with death: pulmonary presentation, white blood cell count, and blood urea nitrogen level, with adjusted ORs (95% CI) of 10.739 (3.300-34.953), 0.930 (0.877-0.985), and 1.057 (1.028-1.087), respectively. A decrease in white blood cell count of 1,000 cells/mm3 increased the odds of death by 7%, while each 1 mg/dL increase in blood urea nitrogen increased the risk of death by 5.7%. No significant interactions were found in the final model. The Hosmer-Lemeshow values for the clinical, laboratory, and combined models were 91.24 (p = 0.276), 23.67 (p = 0.943), and 84.01 (p = 0.571), respectively.
Discussion
The mortality rate in septicemic melioidosis was 54.17%, somewhat higher than that reported in a previous study from Chonburi, another Thai province, which comprised 83 patients between 2001 and 2006 and had a mortality rate of 47% [5]. A study from northeast Thailand reported a mortality rate of 25% [3]; however, blood culture was positive in only 58% of the patients in that study [3].
Compared with other countries, the mortality rate in this study was similar to that in a report from Singapore, where the mortality rate among 27 melioidosis patients admitted to the intensive care unit was 48.1% [6]. Studies from Malaysia, Australia, and India reported lower mortality rates of 32.9%, 14%, and 9.5% among 85, 540, and 95 patients, respectively [7-9]. The study populations differed among these studies, leading to mortality rates different from the present study's. All patients in the present study had blood cultures positive for melioidosis, whereas all patients in the Singapore study were admitted to the intensive care unit. The other three studies included both severe and non-severe cases and both bacteremic and non-bacteremic forms. The Australian study showed that bacteremic patients had a mortality rate similar to that of the present study (47%) [8]. The bacteremic form of melioidosis is severe and carries a high mortality rate, as in the present study [7-11].
This study also showed that three factors were associated with death: pulmonary presentation, blood urea nitrogen level, and white blood cell count. Pulmonary presentation had the highest adjusted OR among all predictors and has previously been reported as the most common presentation of melioidosis [12,13], found in 51% of patients with the disease [5,15]. Seventy-one percent of the patients who died from septicemic melioidosis in this study had pulmonary involvement, compared with only 27.27% of the patients who survived (Table 1). Patients with acute pneumonia from melioidosis have a high risk of septic shock, acute respiratory distress syndrome, and death [6,13].
White blood cell count was not statistically significant by univariate logistic analysis (Table 2). After adjustment for other factors, however, it was independently associated with mortality from septicemic melioidosis (Table 3; combined model). Its adjusted OR was 0.930, indicating that a lower white blood cell count was associated with mortality. A lower white blood cell count has been shown to be associated with early septicemia in neonates [16]; the white blood cell count may therefore be related to early sepsis from melioidosis.
Organ dysfunction, particularly renal dysfunction, was the main predictor in this and in previous studies [6,14]. One organ failure increased the risk of death 8.2-fold, particularly renal failure [6]. Renal failure is itself a risk factor for melioidosis [3,4], but we found that a history of renal failure was not associated with mortality in septicemic melioidosis (Table 3; clinical model); its adjusted OR (95% CI) was 1.253 (0.315-4.981). However, patients with blood cultures positive for B. pseudomallei and high blood urea nitrogen levels had a high risk of mortality: each 1 mg/dL increase in blood urea nitrogen increased the risk of death by 5.7%. The mean blood urea nitrogen of patients who died was 57.40 mg/dL (Table 1), corresponding to a 327.18% increase in risk. Blood urea nitrogen is one indicator of organ failure. A previous study [14] showed that renal dysfunction increased the mortality rate with an adjusted OR of 1.37 (95% CI 1.11-1.71), quite similar to that in the present study (1.057; 95% CI 1.028-1.087), as shown in Table 3.
There are several antibiotic regimens for treating septicemic melioidosis, of which the ceftazidime-based regimen is the first line. Approximately half of the patients in this study received ceftazidime (49/96 patients, 51.05%). However, antibiotic treatment was not an independent factor for successful treatment (Table 3; combined model). Two reasons may explain this finding. First, the admission duration of patients who died was very short (3.75 days), so the therapeutic effects of antibiotics may not have had time to overcome bacterial virulence. Second, mortality from severe melioidosis has been associated with renal function but not with the type of antibiotic treatment, whether inhibitory or bactericidal [14].
The present study was conducted in an endemic area, and the number of septicemic melioidosis cases was fairly large for a single year of study. The identified clinical factors may serve as predictors of mortality from septicemic melioidosis. The results may generalize to other endemic areas, particularly resource-limited settings, because the models comprise basic routine clinical factors.
The main strength of this study is the study population: all patients had melioidosis proven by blood culture, indicating septicemic melioidosis, and the study was conducted in an endemic area. There are some limitations, including the small sample size and missing data. Multivariate logistic regression with a small sample size may yield wide CIs, as for pulmonary presentation (3.300-34.953). On retrospective checking, the statistical power for pulmonary presentation with the studied population size was 99%; the wide adjusted OR for pulmonary presentation may therefore be due to the categorical type of the data.
Conclusions
Three clinical factors associated with mortality in septicemic melioidosis were pulmonary presentation, white blood cell count, and blood urea nitrogen level.Physicians should be aware of high mortality if septicemic melioidosis patients have these clinical features.Aggressive treatment may be needed.
Table 1 .
Clinical features of patients with melioidosis who survived and died.
Data are presented as mean (standard deviation) or number (percentage); differences between groups were analyzed by either Wilcoxon rank-sum or Fisher's exact test when appropriate; COPD: chronic obstructive pulmonary disease.
Table 2 .
Laboratory results and initial antibiotic treatment of patients with melioidosis who survived and died.
Data are presented as mean (standard deviation) or number (percentage). Differences between groups were analyzed by either the Wilcoxon rank-sum or Fisher's exact test, as appropriate; AST: aspartate aminotransferase; ALT: alanine transaminase.
Table 3 .
Factors associated with mortality in patients with melioidosis by clinical, laboratory, and combined models.
Analysis of thermal stabilization process for organomorphic preforms made of oxidized polyacrylonitrile
Simulation and analysis of the thermophysical processes that occur in PAN-fiber-based pressed samples during thermal stabilization were carried out. From the heat-transfer point of view, it is significant that the thermal stabilization process proceeds with a definite exothermic effect, and a strongly pronounced coupling was observed between the intensity of heat generation and both the temperature level and the heating rate at each point of the sample. The influence of the main factors that affect the sample temperature state, such as the density and thermal conductivity of the preform material as well as the rate constants of the thermal stabilization process, was studied. Experimental data on the sample temperature state during the thermal stabilization process are presented and compared with the analysis results.
Introduction
In the manufacturing of reinforced carbon-carbon (RCC) and ceramic matrix composites (CMC) [1-5], polyacrylonitrile (PAN) is used as a feedstock in the form of tows of monofilaments or organomorphic frames consisting of carbonized pressed PAN [6-9]. The first stage of RCC and CMC manufacturing is thermal stabilization (oxidation) of the PAN feedstock, during which cyclization and oxidation of the PAN preform occur. From the manufacturing point of view, it is significant that the thermal stabilization process proceeds with a definite exothermic effect; therefore, too rapid heating of the PAN preform can lead to excessive heat generation and thermal burning of the preform. In the manufacturing of continuous carbon filaments, this heat is removed by blowing the fibers with a large volume of air, which keeps the temperature difference along the fiber length to no more than 2 degrees and thereby guarantees the exclusion of burning, which would drastically reduce the filament strength. Obviously, this approach cannot be used for the heat treatment of pressed organomorphic preforms.
Experimental selection of the time-temperature regime of thermal stabilization may require large expenditures of time and material and does not guarantee an optimal result. Therefore, one should rely on the results of mathematical simulation, which should determine the effects of the main factors on the sample temperature state during the thermal stabilization process and allow the selection of a rational regime.
Model of thermal stabilization process
The object of the study was an organomorphic frame of non-woven fabric Oxypan®, consisting of partially oxidized PAN staple fibers Pyron®.
The complexity of simulating the thermal stabilization process is that several interconnected processes occur simultaneously in the sample: heat transfer by thermal conduction and volumetric heat generation during an exothermic reaction. In this case, at each point of the sample the heat generation intensity (the power of the internal heat sources) depends both on the temperature at that point and on the heating rate. It was assumed that the physicochemical transformations in the organomorphic frame sample occur with heat generation during the thermal stabilization process, and that the change in the degree of completion of the thermal stabilization of the material is governed by an Arrhenius equation. To simulate this process, the rate constants of the process are required; a minimal numerical sketch of such a kinetic model is given below.
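As an illustration only (not the authors' code), the following Python sketch integrates a first-order Arrhenius conversion law and the associated volumetric heat source for a prescribed heating ramp. The numerical values of E, A, Q, and the density are placeholders, not the constants determined in this study.

import math

# Hypothetical rate constants (placeholders, NOT the values of Table 1)
E = 1.5e8      # activation energy, J/kmol
A = 1.0e13     # pre-exponential factor, 1/s
Q = 1.0e6      # total heat of reaction, J/kg
R = 8314.0     # universal gas constant, J/(kmol*K)
rho = 1200.0   # preform density, kg/m^3 (assumed)

T0, rate = 293.0, 5.0 / 60.0   # start temperature (K) and heating rate (K/s)
alpha, dt = 0.0, 1.0           # degree of conversion and time step (s)

for step in range(20000):
    T = T0 + rate * step * dt                                 # prescribed furnace ramp
    dalpha = A * math.exp(-E / (R * T)) * (1.0 - alpha) * dt  # first-order Arrhenius law
    alpha = min(1.0, alpha + dalpha)
    q_v = rho * Q * dalpha / dt                               # volumetric heat source, W/m^3
    if step % 2000 == 0:
        print(f"t={step*dt:7.0f} s  T={T:6.1f} K  alpha={alpha:5.3f}  q={q_v:10.1f} W/m^3")

Running the sketch shows the coupling described above: the heat source is negligible at low temperature and grows sharply as the ramp brings the sample into the reaction window, which is exactly why the heating rate must be limited.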
In study [10], the thermal stabilization process of a PAN tow was investigated. It was considered that a multiple Arrhenius equation could be used to describe the oxidation reactions and heat generation in the tow. To determine the rate constants of the thermal stabilization process, two experimental dependences obtained with a DSC calorimeter at different heating rates of the tow were used. Based on an analysis of the number of maxima of the heat-generation curve, it was found that two reactions proceed during the thermal stabilization process. For PAN tows with filaments of 0.17 tex, the rate constants obtained included E1 = 146.5·10⁶ J/kmol and Q2 = 2·10⁶ J/kg. Similar studies for tows with filaments from 0.08 to 0.12 tex were presented in [11]. The mathematical model of the thermal stabilization process corresponded to [10], but it was assumed that the reaction proceeds in one step. It was shown that the activation energy for different heating rates and filament linear densities was in the range from 185.27·10⁶ to 351.15·10⁶ J/kmol, and the pre-exponential factor ranged from 1.6·10¹³ to 1.24·10³³ s⁻¹. The total thermal effect of the reaction was set at 9.7·10⁵ J/kg. Apparently, accounting for only one reaction led to a significant dependence of the rate constants on the heating rate and to a large difference of the results from the data obtained in [10].
Significantly more complex models of the thermal stabilization process were used in [12]. The temperature dependences of the pre-exponential factor and of the fiber density were taken into account, and various kinetics and nucleation models were applied. In this case, as in [10], it was concluded that two chemical processes take place: an autocatalytic one and a first-order one. Such a different mathematical description of the process makes it difficult to compare it directly with the data of [10] and [11].
In general, it can be stated that, at present, the rate constants of the thermal stabilization process for the processing of tows and fabrics based on PAN feedstock have been determined with varying degrees of reliability. However, thermal stabilization parameters for nonwoven materials, which differ from tows in fiber diameter, filament linear density, and degree of initial oxidation, are completely absent. Unfortunately, the degree of initial oxidation is not disclosed by the material manufacturer.
Therefore, a special study was carried out to determine the parameters of the thermal stabilization process. As a result, it was found that two successive stages occur during thermal stabilization; the corresponding rate constants are presented in Table 1. The absolute value of the thermal effect is apparently much smaller than that reported by other researchers [8,9], which is a natural result, since the Pyron® fiber was already partially oxidized in the as-delivered condition.
Heat transfer model in the thermal stabilization furnace
A geometric model of the thermal stabilization furnace with a sample box mounted in it is shown in Figures 1 and 2. The heating source consisted of heaters made of nichrome wire wound on ceramic tubes. A steel box with a sample was mounted on ceramic and steel plates placed on the furnace hearth. A sample of the organomorphic frame, in the form of a pressed PAN flat plate with dimensions of 95 × 95 × 40 mm, was clamped between two steel plates. To create pressure, a massive steel load was placed on the upper surface of the sample. The uniformity of sample heating was increased by filling the box with furnace coke to approximately two-thirds of its height; the top of the box remained open.
Physical and mathematical models of heat transfer in the thermal stabilization furnace were developed, considering a three-dimensional, time-dependent process of combined radiative-conductive heat transfer. In the simulation, the experimentally measured dependence of the nichrome heater power on time was used (Figure 3), and it was assumed that the power was released uniformly over the entire volume of the heaters. The gas medium in the furnace volume was taken to be diathermic (transparent to radiation). Conductive heat transfer was simulated in all solid bodies: the furnace insulation, ceramic tubes and plate, box frame, filling, and load. The analysis was carried out in COMSOL Multiphysics; for orientation, a simplified one-dimensional sketch of the coupled problem follows.
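The paper's analysis was performed in COMSOL; purely for orientation, a minimal one-dimensional explicit finite-difference sketch of conduction with a volumetric exothermic source (assumed properties and kinetics, not the paper's model) could look as follows in Python:

import math

# Assumed material properties and kinetics (illustrative only)
k, rho, cp = 0.2, 1200.0, 1500.0       # W/(m*K), kg/m^3, J/(kg*K)
E, A, Q, R = 1.5e8, 1.0e10, 1.0e6, 8314.0

nx, L = 41, 0.040                      # nodes across the 40 mm plate thickness
dx = L / (nx - 1)
dt = 0.2 * rho * cp * dx**2 / k        # time step well inside the explicit stability limit
T = [293.0] * nx
alpha = [0.0] * nx

t, t_end, ramp = 0.0, 3600.0, 5.0 / 60.0
while t < t_end:
    T_wall = 293.0 + ramp * t          # assumed furnace-wall temperature ramp
    T[0] = T[-1] = T_wall              # Dirichlet boundaries at both faces
    Tn = T[:]
    for i in range(1, nx - 1):
        da = A * math.exp(-E / (R * T[i])) * (1.0 - alpha[i]) * dt
        alpha[i] = min(1.0, alpha[i] + da)
        q = rho * Q * da / dt          # exothermic volumetric source, W/m^3
        Tn[i] = T[i] + dt * (k * (T[i+1] - 2*T[i] + T[i-1]) / dx**2 + q) / (rho * cp)
    T = Tn
    t += dt
print(f"center-edge temperature difference after {t_end:.0f} s: {T[nx//2] - T[0]:.2f} K")

The printed center-edge difference is precisely the criterion used later in the paper to assess the sample temperature state.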
To evaluate the influence of the exothermic effect in the pressed array of the organomorphic frame, experimental temperature measurements were performed at various points of the sample. The hot junction of one thermocouple was placed in a previously prepared recess in the central part of the sample; the junction of the second thermocouple was located at the sample edge.
Comparison of the experiment and the numerical simulation showed that the calculated and experimental temperature values were in good agreement, with a maximum difference not exceeding 10 °C (Figure 4).
The process effect analysis on the sample temperature state
The temperature difference between the center and the edge of the sample was chosen as the criterion for estimating the sample temperature state, since this criterion depends simultaneously on the rate constants of the thermal stabilization process, the thermophysical characteristics of the materials, and the furnace operating regime. A series of parametric studies was carried out to identify the influence of the material thermophysical characteristics (Figures 5 and 6), the rate constants (Figures 7-9), and the heater power (Figure 10). The sample density, the thermal effect of the exothermic reaction, and the pre-exponential factor were found to have the greatest influence.
Conclusion
A model of the thermal stabilization of organomorphic preforms made of oxidized PAN is presented. The significance of the main process parameters for the sample temperature state was analyzed, and it was shown that the sample density, the heat effect of the exothermic reaction, and the pre-exponential factor have the greatest influence. The experimental and calculated temperature values are in good agreement.
Non-vanishing of central values of quadratic Hecke $L$-functions of prime moduli in the Gaussian field
We study the first and second mollified moments of central values of a quadratic family of Hecke $L$-functions of prime moduli to show that more than nine percent of the members of this family do not vanish at the central value.
Introduction
Central values of L-functions have received considerable attention in the literature as they carry rich arithmetic information. In general, an L-function is expected to vanish at the central value for a reason. Such a reason may simply come from an observation on the sign of the functional equation or arise from ties with deep assertions such as the Birch and Swinnerton-Dyer conjecture.
For the case of Dirichlet L-functions, it is believed that $L(\frac{1}{2}, \chi) \neq 0$ for any Dirichlet character $\chi$. When $\chi$ is a primitive quadratic character, this is a conjecture of S. Chowla [3]. In [12], M. Jutila initiated the study of the first two moments of the family of quadratic Dirichlet L-functions; his results imply that Chowla's conjecture is true for infinitely many such L-functions. By further evaluating the mollified moments, K. Soundararajan [16] showed that at least 87.5% of the members of the same family do not vanish at the central value.
Beyond the full family of quadratic Dirichlet L-functions, it is also intriguing to investigate the non-vanishing of quadratic Dirichlet L-functions of prime moduli. In this case it again follows from the work of Jutila [12], who also obtained the first moment of the family of quadratic Dirichlet L-functions of prime moduli, that infinitely many such L-functions have non-vanishing central values. With more effort, by combining an evaluation of the mollified first moment with an upper bound for the mollified second moment obtained via sieve methods, S. Baluyot and K. Pratt [1] showed that more than nine percent of the members of the quadratic family of Dirichlet L-functions of prime moduli do not vanish at the central value.
In [6], the author studied the mollified first and second moments of a family of quadratic Hecke L-functions in the Gaussian field to show that at least 87.5% of the members of that family do not vanish at the central value. This can be regarded as an analogue to the result of Soundararajan in [16]. In this paper, motivated by the above mentioned result of Baluyot and Pratt in [1], it is our goal to investigate the non-vanishing issue of the quadratic family of Hecke L-functions introduced in [6] with a further restriction to prime moduli.
Throughout the paper, we denote by $K = \mathbb{Q}(i)$ the Gaussian field and by $\mathcal{O}_K = \mathbb{Z}[i]$ the ring of integers of $K$. We say an element $d \in \mathcal{O}_K$ is odd if $(d, 2) = 1$. We also write $L(s, \chi)$ for the L-function associated to a Hecke character $\chi$ and $\zeta_K(s)$ for the Dedekind zeta function of $K$. We write $\varpi$ for a prime element in $\mathcal{O}_K$, by which we mean that the ideal $(\varpi)$ is a prime ideal in $\mathcal{O}_K$. The expression $\chi_c$ is reserved for the quadratic residue symbol $\left(\frac{c}{\cdot}\right)$ defined in Section 2.1.
A Hecke character $\chi$ of $K$ is said to be of trivial infinite type if its component at the infinite places of $K$ is trivial, and a Hecke character $\chi$ is said to be primitive modulo $q$ if it does not factor through $(\mathcal{O}_K/(q'))^{\times}$ for any divisor $q'$ of $q$ with $N(q') < N(q)$. It is shown in [7, Section 2.1] that $\chi_{(1+i)^5 \varpi}$ defines a primitive quadratic Hecke character modulo $(1+i)^5 \varpi$ of trivial infinite type for any odd prime $\varpi \in \mathcal{O}_K$. Thus, for technical reasons, instead of considering a family of L-functions $\{L(s, \chi_\varpi)\}$ for primes $\varpi$ satisfying certain congruence conditions (so that the corresponding L-functions become primitive), we consider the family of L-functions given in (1.1). We aim to prove the following result, which shows that more than nine percent of the members of this family have non-vanishing central values. Notice that the percentage we obtain in Theorem 1.1 is exactly the same as the one given in [1, Theorem 1.1]. This is not surprising, since our proof of Theorem 1.1 follows closely the proof of [1, Theorem 1.1] by Baluyot and Pratt. We now briefly outline the approach of the proof. Let $X$ be a large number and, for some fixed $\theta, \vartheta \in (0, \frac{1}{2})$, define
$$M = X^{\theta}, \quad R = X^{\vartheta}. \tag{1.2}$$
We fix a smooth function $\Phi(x)$, compactly supported in $[\frac{1}{2}, 1]$, satisfying $\Phi(x) = 1$ for $x \in [\frac{1}{2} + \frac{1}{\log X}, 1 - \frac{1}{\log X}]$ and $\Phi^{(j)}(x) \ll_j (\log X)^j$ for all $j \geq 0$. Let $H(t)$ be another smooth, compactly supported function, to be optimized later. Here we recall from [9, Section 2.1] that every ideal in $\mathcal{O}_K$ coprime to $2$ has a unique generator congruent to $1$ modulo $(1+i)^3$, which is called primary. Hence in (1.4) and in what follows, a sum of the form $\sum_{m \equiv 1 \bmod (1+i)^3}$ indicates that we are summing over primary elements in $\mathcal{O}_K$. We now introduce the mollified first moment $S_1$ and the mollified second moment $S_2$ of the family $\mathcal{F}$ given in (1.1) as in (1.5). Our aim is to evaluate both $S_1$ and $S_2$ asymptotically. The evaluation of $S_1$ is relatively easy; it is performed in Section 3 and the result is summarized in Proposition 1.2.
It is a challenging task to evaluate $S_2$. However, for our purpose all we need is an upper bound of the right order of magnitude for $S_2$. Thus, instead of evaluating $S_2$ asymptotically, we follow the approach in [1] and use sieves to derive an upper bound for $S_2$ in Section 4; the result is given in the following proposition. Proposition 1.3. Let $\delta > 0$ be small and fixed, and let $\theta, \vartheta$ satisfy $\theta + 2\vartheta < \frac{1}{2}$. If $X \geq X_0(\delta, \theta, \vartheta)$, then the bound (1.6) holds. In our proof of Proposition 1.3, the use of sieves allows us to reduce the estimation of certain character sums over primes to the evaluation of a character sum over algebraic integers in $\mathcal{O}_K$. We then adapt the methods developed by Soundararajan in [16] to treat the resulting sum. The most delicate part of the treatment consists of applying a two-dimensional Poisson summation to convert the desired character sum to its dual sum. A careful analysis of the dual sum ultimately leads to the bound for $S_2$ given in (1.6).
With both Proposition 1.2 and Proposition 1.3 available, we apply the Cauchy-Schwarz inequality to obtain a lower bound for the proportion of non-vanishing central values; the standard inequality underlying this step is recorded below. The optimal choice of $H(t)$ in this estimation has already been determined in [1, Section 8], which allows us to deduce the conclusion of Theorem 1.1.
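For the reader's convenience, we record the standard inequality behind this step (a textbook fact, reconstructed here rather than quoted from the paper): since only primes $\varpi$ with non-vanishing central value contribute to $S_1$, the Cauchy-Schwarz inequality gives
$$|S_1|^2 \;\leq\; \#\{\varpi \in \mathcal{F} : L(\tfrac{1}{2}, \chi_{(1+i)^5\varpi}) \neq 0\} \cdot S_2,$$
so the number of non-vanishing members of the family is at least $|S_1|^2 / S_2$, and the asymptotic for $S_1$ together with the upper bound for $S_2$ yields the stated proportion.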
Preliminaries
As tools needed in the rest of the paper, we gather here some auxiliary results.
2.1. Residue symbol and Gauss sum. It is well known that the Gaussian field $K = \mathbb{Q}(i)$ has class number one. We denote by $U_K = \{\pm 1, \pm i\}$ the group of units in $\mathcal{O}_K$ and by $D_K = -4$ the discriminant of $K$. We say an element $d \in \mathcal{O}_K$ is a perfect square if $d = n^2$ for some $n \in \mathcal{O}_K$, and we denote this by writing $d = \square$. We say an element $d \in \mathcal{O}_K$ is square-free if the ideal $(d)$ is not divisible by the square of any prime ideal in $\mathcal{O}_K$.
Recall that every ideal in $\mathcal{O}_K$ coprime to $2$ has a unique primary generator. As $(1+i)$ is the only prime ideal in $\mathcal{O}_K$ lying above the integral ideal $(2) \subset \mathbb{Z}$, we fix a generator for every prime ideal of $\mathcal{O}_K$ by taking $\varpi = 1+i$ for the ideal $(1+i)$ and taking $\varpi$ primary otherwise. By further taking $1$ as the generator of the ring $\mathbb{Z}[i]$ itself, we extend the choice of generator to any ideal of $\mathcal{O}_K$ multiplicatively. We denote by $G$ this set of generators. For $a, b \in \mathcal{O}_K$, we write $[a, b]$ for their least common multiple such that $[a, b] \in G$; similarly, we write $(a, b)$ for their greatest common divisor such that $(a, b) \in G$.
For an odd $n \in \mathcal{O}_K$, the quadratic residue symbol $\left(\frac{\cdot}{n}\right)$ modulo $n$ is first defined when $n = \varpi$ is a prime: for any $a \in \mathcal{O}_K$, we set $\left(\frac{a}{\varpi}\right) = 0$ when $\varpi \mid a$, and $\left(\frac{a}{\varpi}\right) \equiv a^{(N(\varpi)-1)/2} \pmod{\varpi}$ with $\left(\frac{a}{\varpi}\right) \in \{\pm 1\}$ when $(a, \varpi) = 1$. The definition is then extended to composite $n$ multiplicatively, and for $n \in U_K$ we define $\left(\frac{\cdot}{n}\right) = 1$. For two coprime primary elements $m, n \in \mathcal{O}_K$, we have the quadratic reciprocity law of [9, (2.1)] (see the display below). Recall that $\chi_c$ denotes the quadratic residue symbol $\left(\frac{c}{\cdot}\right)$. For odd $c$, it is shown in [8, Section 2.1] that $\chi_c$ can be regarded as a Hecke character of trivial infinite type modulo $16c$, provided we define $\left(\frac{c}{a}\right) = 0$ when $(1+i) \mid a$. We shall henceforth view $\chi_c$ as a Hecke character whose conductor divides $16c$, with one exception: we regard $\chi_{\pm 1}$ as the principal character modulo $1$ (so $\chi_{\pm 1}(a) = 1$ for all $a \in \mathcal{O}_K$), which further implies $L(s, \chi_{\pm 1}) = \zeta_K(s)$.
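The display of the reciprocity law was lost in extraction; in the standard form for the Gaussian field (which should match the cited formula in [9]), it reads
$$\left(\frac{m}{n}\right)\left(\frac{n}{m}\right) = (-1)^{\frac{N(m)-1}{2} \cdot \frac{N(n)-1}{2}}$$
for coprime primary $m, n \in \mathcal{O}_K$.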
For any complex number $z$, we write $e(z) = e^{2\pi i z}$. For $r, n \in \mathcal{O}_K$ with $(n, 2) = 1$, the quadratic Gauss sum $g(r, n)$ is defined as in [9]. Let $\varphi_{[i]}(n)$ denote the number of elements in the reduced residue class group of $\mathcal{O}_K/(n)$; we recall the following explicit evaluations of $g(r, n)$ from [9, Lemma 2.2]. Lemma 2.2. (i) We have $g(rs, n) = \left(\frac{s}{n}\right) g(r, n)$ for $(s, n) = 1$, and $g(k, mn) = g(k, m) g(k, n)$ for $m, n$ primary with $(m, n) = 1$.
(ii) Let ̟ be a primary prime in O K . Suppose ̟ h is the largest power of ̟ dividing k. (If k = 0 then set h = ∞.) Then for l ≥ 1, 2.3. The approximate functional equation. Let χ be a primitive quadratic Hecke character χ of K of trivial infinite type. A well-known result of E. Hecke says that L(s, χ) has an analytic continuation to the entire complex plane and satisfies the following functional equation (see [11,Theorem 3.8]) where m is the conductor of χ, |W (χ)| = (N (m)) 1/2 and In particular, we have the following functional equation for ζ K (s): For n ∈ O K and j ∈ Z, j ≥ 1, we denote d [i],j (n) for the analogue on O K of the usual function d k on Z, so that d [i],j (n) equals the coefficient of N (n) −s in the Dirichlet series expansion of the j-th power of ζ K (s). It follows that d [i],1 (n) = 1 and when n is primary, We further denote for j ∈ Z, j ≥ 1 and any real number t > 0, We then note the following approximate functional equation for Lemma 2.4 (Approximate functional equation). For any odd, square-free d ∈ O K , we have for j = 1, 2, The next lemma gives the behaviors of V j (t) defined in (2.5) for t → 0 + or t → ∞, which can be established similar to [16, Lemma 2.1].
Lemma 2.5. Let $j = 1, 2$. The function $V_j(\xi)$ is real-valued and smooth on $(0, \infty)$; its limiting behavior as $\xi \to 0^+$ and its rapid decay for large $\xi$, for any fixed integer $\nu \geq 0$, follow as in [16, Lemma 2.1].
Poisson summation.
A key ingredient in our treatment is the following two-dimensional Poisson summation formula, which follows from [9, Lemma 2.7, Corollary 2.8].
Lemma 2.7. Let $n \in \mathcal{O}_K$ be primary and let $\left(\frac{\cdot}{n}\right)$ be the quadratic residue symbol modulo $n$. For any smooth function $W : \mathbb{R}^+ \to \mathbb{R}$ of compact support and any $X > 0$, the Poisson summation formula of [9, Lemma 2.7] holds. 2.8. Estimations related to character sums. We now collect two lemmas on estimates related to character sums. They are consequences of a large sieve result of K. Onodera [15] for quadratic residue symbols in the Gaussian field, which generalizes to $K$ the well-known large sieve result for quadratic Dirichlet characters of D. R. Heath-Brown [10]. The first lemma can be obtained by applying the large sieve result of [15] in the proofs of [10, Corollary 2] and [16, Lemma 2.4]. Lemma 2.9. Let $N, Q$ be positive integers, and let $a_1, \dots, a_n$ be arbitrary complex numbers. Let $S(Q)$ denote the set of $\chi_m$ for square-free $m$ satisfying $N(m) \leq Q$; then for any $\epsilon > 0$ the corresponding large sieve bound holds. Let $M$ be a positive integer, and for each $m \in \mathcal{O}_K$ with $N(m) \leq M$, write $m = m_1 m_2^2$ with $m_1$ square-free and $m_2 \in G$. Suppose the sequence $a_n$ satisfies $|a_n| \ll N(n)^{\varepsilon}$; then the stated bound holds. Here the "*" on the sum over $d$ means that the sum is restricted to square-free elements $d$ of $\mathcal{O}_K$.
2.11. Analytical behaviors of certain functions. In this section, we discuss the analytical behaviors of certain functions that are needed in the paper. First, we have the following result that can be established similar to [16,Lemma 5.3].
For each k ∈ O K , k = 0, we write kd 1 uniquely by where G 0,̟ (s; k, ℓ, α, d) is defined by The function G 0 (s; k, ℓ, α, d) is holomorphic for ℜ(s) > 1 2 . Furthermore, on writing with k 3 square-free and k 4 ∈ G, we have uniformly for ℜ(s) ≥ 1 2 + ε, Next, let Φ(t) be the smooth function appearing in the definition of S 1 and S 2 given in (1.5) and V 2 (t) be given in (2.5), we define We further define for ξ > 0 and ℜ(w) > 0, Before we state our next lemma, we would like to recall that the Mellin transform g(s) of a function g is given by Now, we are ready to present a result concerning some analytic properties of h(ξ, w).
Lemma 2.13. Let F t be defined by (2.9) and let ξ > 0. The function h(ξ, w) is an entire function of w in ℜ(w) > −1 such that for any c with c > max{0, ℜ(w)}. Moreover, in the region 1 ≥ ℜ(w) > −1, it satisfies the bound Proof. We recall that for any smooth function W , the function W (t) defined in (2.6) can be evaluated in polar coordinates as It follows from this and the definition of V 2 (t) in (2.5) that we have, for c s > 2, Now, applying the relation (see [6,Section 2.4 we see that Substituting this into the above expression for h(ξ, w), we immediately obtain (2.11) by noticing that we can now take c s > max{0, ℜ(w)}. This also implies that h(ξ, w) is an entire function of w in ℜ(w) > −1.
Our next two lemmas provide bounds for certain dyadic sums involving G 0 and h(ξ, w). Lemma 2.14. Let K, J ≥ 1 be two integers and let k 2 be defined in (2.8). Then for ℜ(w) = − 1 2 + ε and any sequence of complex numbers δ ℓ satisfying δ ℓ ≪ N (ℓ) ε , we have Proof. We write any k = 0, k ∈ O K as k = u k ̟∈G, ai≥1 ̟ ai i with u k ∈ U K . For those ̟ i appearing in this product, we define (2.14) It follows from part (ii) of Lemma 2.2 and the definition of G 0 in Lemma 2.12 that G 0 (1 + w; k, ℓ, α, d) = 0 unless ℓ = gm with g|a(k), (m, k) = 1 and m square-free. We then deduce from this and Lemma 2.2 that when (ℓ, 2αd) = 1, We apply the above and the Cauchy-Schwarz inequality to see that Applying the bound for G 0 in Lemma 2.12 in the above expression, we obtain that Note that as ik m = 0, Using this in (2.15), we see via another application of Cauchy-Schwarz that Relabelling m by jm while noting that N (̟) > 3 for all ̟|m, we deduce that for ℜ(w) ≥ − 1 2 + ε, Using this, we see that Observe that g|a(k) implies b(g)|k by (2.14). For such k, we relabel it as f b(g) to obtain from (2.16) that Applying this in (2.17), we see that the assertion of the lemma follows from Lemma 2.9.
is bounded by and also by Proof. We apply Lemma 2.12 and Lemma 2.13 to bound respectively G 0 and h(ξ, w) to see that the expression in (2.18) is where the last estimation above follows from the observation that N ((ℓ, , we obtain the first bound of the lemma from the above estimation. We now derive the second bound by setting c = ε to write the integral (2.11) as 1 2πi This allows us to see that . We then deduce via the Cauchy-Schwarz inequality that Inserting this into (2.18) and applying Lemma 2.14, we readily deduce the second bound of the lemma.
The mollified first moment
In this section, we prove Proposition 1.2. By applying Lemma 1.2 and the definition of M (̟) in (1.4) in the expression of S 1 in (1.5), we see that By making a change of variable n = mk 2 with k being primary, we deduce that The rapid decay of V 1 given in Lemma 2.5 implies that the contribution from those k with (k, ̟) = 1 is O A (X −A ) for any large number A. Moreover, the condition (m, ̟) = 1 is automatically satisfied as N (m) ≤ M < N (̟). We may thus ignore these two conditions and apply the definition (2.5) of V 1 (ξ) to see that Now we note the following convexity bounds for ζ K (s) and ζ ′ K (s) for 0 < ℜ(s) < 1: Here the bound for ζ K (s) follows from [11, Exercise 3, p. 100] and the bound for ζ ′ K (s) can be obtained via a similar convexity principle.
By moving the line of integration to ℜ(s) = − 1 2 + ε, we apply (3.2) along with the rapid decay of the gamma function in vertical strips to see that the integral on this line is ≪ ε N (̟) − 1 4 +ε N (m) 1 2 −ε and this contributes an error term of size ≪ X We also obtain a contribution from the residue of a pole at s = 0, which we write as an integral along a small circle around 0 to see that and we have by the Fourier inversion formula that Applying the above, we see that Integration by parts shows that which implies that we have for any large number A, into a power series (note that Again by (3.5), we may extend back the integration above to all z with an negligible error. Then we have since by the expression for H(t) given in (3.4), we have that Thus we conclude that We apply (3.6) to (3.3) and use the definition of ω 1 (s) given in (2.5) to see that Now, the integral above can be evaluated according to the formula for a function g(s) having a pole of order at most n at s = 0. We then arrive at We may replace log N (̟)/ log M in the sum above by log X/ log M in view of the support of Φ. Then applying the prime ideal theorem and partial summation, we see that Our remaining task is to bound S = 1 . By writing n = rk 2 with r being primary, square-free and k primary, we see that the condition mn = in (3.1) is equivalent to m = r, as both m and r are primary and square-free. This allows us to recast S = 1 as We make changes of variables m → gm, r → gr with g = (m, r) to further recast S = 1 as We deduce from Lemma 2.5 that we may truncate the sums over k, r to N (k) ≤ X 1 4 +ε and N (r) ≤ X 1 2 +ε with negligible errors. Notice that this also implies that N (k) < N (̟). As we also have N (g) ≤ M < N (̟), we conclude that we have ̟ g 2 k 2 = 1. We then extend the sum on k to infinity again by Lemma 2.5 to see that We may now write ̟ = u ̟ ̟ ′ with u ̟ ∈ U K . As the treatments are similar, we may further assume that ̟ is primary, so that we can apply the quadratic reciprocity law (2.1) with (2.5) to obtain that It is easy to check that the contribution of the error term above to S = 1 is O(X 1−ε ) for sufficiently small ε = ε(θ) > 0. Now, we define for any function g(t) and any complex number s, g s (t) = g(t)t s/2 .
Using this notation, we have where we truncate the integral to |ℑ(s)| ≤ (log X) 2 due to the rapid decay of the gamma function in vertical strips. We further split the first expression on the right-hand side above into a sum of two terms: with E 1 restricting the sums over m, r to N (mr) ≪ exp(w √ log X) and E 2 the opposite, where w > 0 is a fixed sufficiently small constant.
3.3.
Evaluation of E 1 . In this section we estimate E 1 , which is given by (3.11) By partial summation, we have that Combining [11,Theorem 5.13] and [11,Theorem 5.35] together with [11, (5.52)], by noting that the conductor of the primitive character χ mr is ≪ exp(w √ log X) ≤ exp(2w √ log X), we see that for an absolute constant c 1 > 0. Here the term −u β1 /β 1 appears only when L(s, χ mr ) has a real zero for a positive constant c 2 .
Notice that for ℜ(s) bounded, we have uniformly in s that Applying the above estimation, we see from (3.11) that the contribution of the error term to E 1 is for some absolute constant c 3 > 0.
By an analogue of Page's theorem for the family of quadratic Hecke L-functions (which can be established by using combining the arguments in [4, §14, page 95], [11,Theorem 5.28 (1)] and [11,Lemma 5.9]), there exists a fixed absolute constant c 4 > 0 such that we have at most one character χ mr (notice as shown above, the conductor of χ mr is ≤ exp(2w √ log X)) for which the L-function L(s, χ mr ) has a real zero β 1 satisfying We may assume such a real zero exists and denote q * for the modulus of the exceptional character χ mr . It remains to estimate the contribution of the term − u β 1 β1 in (3.12) to E 1 . For this, we apply integrate by parts to see that Now, by choosing w > 0 sufficiently small in terms of c 1 in (3.13) and applying the above to (3.11), we see that for some positive constant c 5 and some bounded power of two denoted by γ * , log M ) and applying Fourier inversion given in (3.4), we obtain (3.14) We may truncate the above integral in (3.14) to |z| ≤ √ log M with a negligible error using (3.5). This leads to We move line of integration over s in (3.15) to ℜ(s) = − c6 log log X , for some small c 6 > 0 such that there is no zero of ζ K (1 + s + 1+iz log M ) in the region ℜ(s) ≥ − c6 log log X , ℑ(s) ≤ (log X) 2 . The contribution to E 1 of the integration over s on the new line of integration is O(X/ log X). There is also a contribution from the reside at s = 0, which we write as an integral along a small circle around 0 to see that Notice that when |s| = 1 log X , we have Applying the above bounds in (3.16) and estimating things trivially, we deduce that Now we need an upper bound for β 1 . For this, we recall a result of Landau [13] says that for an algebraic number field F of degree n = 2 and any primitive ideal character χ of F with conductor q, we have for X > 1, where N F (q), N F (I) denotes the norm of q and I respectively, D F denotes the discriminant of F and I runs over integral ideas of F .
It follows from this that we have, for any Hecke character χ modulo q of trivial infinite type in K, Similarly, we have that We then deduce from (3.18), (3.19) and the proof of [11, Theorem 5.28 (2)] that we have the following analogue in K of Siegel's theorem, i.e. for any primitive quadratic Hecke character χ modulo q of trivial infinite type, we have that where c 7 (ε) > 0 is an ineffective constant depending only on ε.
Applying (3.20) in (3.17), then treating the resulting upper bound of E 1 according to whether N (q * ) is ≤ (log X) 3 or not, we see that 3.4. Evaluation of E 2 . In this section we estimate E 2 . We let q = mr in (3.9) and we break N (q) into dyadic segments to see that Here S(Q) is defined as in Lemma 2.9, s 0 ∈ C such that ℜ(s 0 ) = 1 log X and |ℑ(s 0 )| ≤ (log X) 2 . We now employ zero-density estimates to estimate E(Q). For this, we write Φ s0 using inverse Mellin transform to obtain that Note that integration by parts implies that for every non-negative integer j, we have Upon taking logarithmic derivative on both sides above, we see that Applying the estimation (see [14, Theorem C.1]) that for |s| > δ, | arg s| < π − δ, we deduce that on the line ℜ(w) = − 1 2 we have L ′ L (w, χ) ≪ log(N (q)|w|).
It follows that, for our choice of ε 0 > 0 above, we can achieve, after finitely number of iterations, that We now combine (3.28), (3.29) and the above estimation to see that for some positive constant c 8 , Applying the above bound to (3.22), we see that for some absolute constant c > 0,
The mollified second moment
We now begin our proof of Proposition 1.3. As a preparation, we first include some results from the sieve methods. 4.1. Tools from sieve methods. We denote 1 A (n) for the indicator function of a set A of algebraic integers in O K , so that 1 A (n) = 1 when n ∈ A and 1 A (n) = 0 otherwise. Then we have We write ω [i] (n) for the number of distinct prime ideal factors of (n). We apply Brun's upper bound sieve condition (see [5, (6.1)]) to see that Let G(t) be a non-negative smooth function, compactly supported on [−1, 1] satisfying |G(t)| ≪ 1, |G (j) (t)| ≪ j (log log X) j−1 for j ≥ 1 and G(t) = 1 − t for 0 ≤ t ≤ 1 − (log log X) −1 . Then we have where the coefficients λ d are defined by Similar to what is pointed out in the paragraph above [1, (5.9)], we have We end this section by listing a few lemmas needed in the paper, which are analogues to [1,].
Lemma 4.2. Let 0 < δ < 1 be a fixed constant, r a positive integer with r ≍ (log X) δ , and z 0 as in (4.2). Let G be the set of generators of ideals in O K chosen in Section 2.1. Suppose that g is a multiplicative function on G such that uniformly for all primes ̟ ∈ G, we have |g(̟)| ≪ 1. Then uniformly for all ℓ ∈ O K , b∈G b|P (z0) 6) and (4.7), respectively. Let G be the set of generators of ideals in O K chosen in Section 2.1. Suppose that g is a multiplicative function on G such that g(̟) Suppose that h is a function on G such that |h(̟)| ≪ ε N (̟) −1+ε for all primes ̟ ∈ G. Then with E 0 (X) as in uniformly for all ℓ ∈ O K such that log N (ℓ) ≪ log X. (Here, the index ̟ ′ runs over primes ̟ ′ .) As the above lemmas can be established similarly to [1,], we omit the proofs by only pointing out that constant 4/π in (4.8) (and hence in Lemma 4.4 and 4.5) comes from expanding into Laurent series and noting that the residue of ζ K (s) at s = 1 equals π/4. 4.6. Initial treatment. Now we are ready to estimate S 2 . As Φ is supported on [ 1 2 , 1], we have log N (̟) ≤ log X, so that by positivity we may apply the sieve given (4.5) to see that As d|n and n is odd, we know that d is also odd. Thus, d ∈ G implies that d is primary. Also, we may write λ d = µ 2 [i] (d)λ d since λ d = 0 only for square-free d by (4.6). We further write (4.10) for some parameter Y to be determined later. Now, we apply Lemma 2.4 and (2.5) to write L( 1 2 , χ (1+i) 5 n ) 2 = D 2 (n), where (4.12) Here c > 1/2 and Applying (4.10) and (4.12) in (4.9), we see that (4.14) 4.7. Evaluation of S + R . In this section we evaluate S + R . We apply the divisor bound and the observation that |λ d | ≪ N (d) ε by (4.6) to deduce that Further note that R Y (n) = 0 unless n = ℓ 2 h with N (ℓ) > Y and h square-free. This together with the above bounds implies that, via Cauchy-Schwarz, where |α(m)| ≪ N (m) ε . Then Lemma 2.9 allows us to deduce that Next, we deduce from (4.12) that We evaluate D 2 (ℓ 2 h) by moving the line of integration to c = 1 log X without encountering any poles. We then apply Cauchy-Schwarz to see that It then follows from Lemma 2.10 that we have We thus deduce from (4.15), (4.16) and (4.17) that In this section, we begin to evaluate S + N . We apply (1.4) and (4.12) in (4.14) to see that Here F y (t) is defined in (2.9) and the last equality above follows from (4.11) by noting that when n is odd, ℓ|n and ℓ ∈ G implies that ℓ is primary. As both α and d are primary and square-free, we have [α 2 , d] = α 2 d 1 , where d 1 is defined in (2.7). Therefore, we can rewrite n as α 2 d 1 m in (4.19) to recast Z as where the last equality above follows from Lemma 2.7.
Applying the above expression of Z in (4.19), we deduce that where T 0 singles out the term k = 0 of the first expression on the right-hand side of (4.20) while B being the rest.
4.9. Evaluation of T 0 . It follows from Lemma 2.2 that g(0, n) = ϕ [i] (n) if n = and g(0, n) = 0 otherwise. This implies that (4.21) We now extend the sum over α to all elements in O K . As ϕ [i] (n) ≤ N (n), the error term introduced is where the last estimation above follows from the observation that , together with the bounds that |λ d | ≪ N (d) ε by (4.6) and |b m | ≪ 1 by (1.3). As m 1 m 2 ν = , the sum over m 1 , m 2 , ν in (4.22) is ≪ X ε . Also, it follows from the definition of d 1 in (2.7) that We thus conclude that the expressions in (4.22) are further bounded by O(X 1+ε /Y ). Hence by (4.21), we have Now we express the sum on α in terms of an Euler product to arrive that (4.24) We apply Lemma 4.4 to see that (4.25) We now write o(1) for E 0 (X) as E 0 (X) → 0 when X → ∞. It also follows from trivial estimation that we can omit the condition N (̟) ≤ z 0 in (4.25). Applying these in (4.25) and then to (4.24), we see that We apply (2.5) and (2.9) to see that (2) Applying (1.3), (3.4) and (4.28) in (4.27), we obtain for c = 1 log X , where Q(w 1 , w 2 , s) is holomorphic and uniformly bounded in the region ℜ(w 1 ), ℜ(w 2 ), ℜ(s) ≥ −ε, which satisfies (4.31) Q(0, 0, 0) = 1.
Applying (4.30) in (4.29), we deduce that We may truncate the integrals above to |z 1 |, |z 2 | ≤ √ log M and |ℑ(s)| ≤ (log X) 2 due to the rapid decay of the gamma function in vertical strips and (3.5). Notice further that similar to [14, Theorem 6.7], we can show that there exists a constant c ′ such that when ℜ(z) ≥ −c ′ / log |ℑ(z)| and |ℑ(z)| ≥ 1, we have We then change the contour of integration over s to the path consisting of the line segment , and the line segment L 3 from − c ′ log log X + i(log X) 2 to 1 log X + i(log X) 2 . The contributions of the integrals on the new lines are negligible due to the rapid decay of the gamma function on L 1 and L 3 and the estimation X s ≪ exp −c ′ log X log log X on L 2 . We are then left with the contribution from a residue of the pole at s = 0, which we present as an integral along a circle centered at 0 to obtain The main contribution to Υ 0 comes from the first terms of the Laurent expansions of the zeta functions and Q. We then deduce via (4.31) that In the above expression, we extend the integrals over z 1 , z 2 to R 2 with a negligible error by (3.5). Then applying the relation (see [1, (7.3.12) we deduce that We now apply (3.7) to evaluate the above integral as a residue to arrive that Substituting the above into (4.26), and noting that we obtain that Y .
(4.32) 4.10. Evaluation of B: the principal terms. In this section, we begin to evaluate B. We recall from (4.20) that we have We apply Mellin inversion to see that for any c > 1, where h is defined in (2.10). We further apply Lemma 2.12 to recast Q as Observe that integration by parts shows that when ℜ(w) ≥ − 1 2 + ε, we have for any integer j ≥ 1, It follows Lemma 2.12, Lemma 2.13 and the above estimation that we can move the line of integration of the w-integral in (4.34) to c = − 1 2 + ε. We encounter a pole at w = 0 only when χ ik1 is a principal character, which holds if and only if k 1 = ±i and this is further equivalent to kd 1 = ±ij 2 for some j ∈ G by (2.8). We thus deduce that , w)L(1 + w, χ ik1 ) 2 G 0 (1 + w; k, m 1 m 2 , α, d) dw. (4.37) We treat P ± first. Note that d 1 is square-free by (2.7) since d is square-free. It follows that kd 1 = ±ij 2 for some j ∈ G if and only if k = ±id 1 j ′ 2 for some j ′ ∈ G . Thus we relabel k as ±id 1 j 2 with j running through all elements in G in (4.37) and apply Lemma 2.13 to see that for c > 1 2 , we have (4.38) Note that part (ii) of Lemma 2.2 implies that for j ∈ G, ̟ ∤ 2αd and β ≥ 1, This allows us to deduce from the definition of G 0 given in Lemma 2.12 that where for any k, ℓ, α ∈ O K and s ∈ C, we define G(s; k, ℓ, α) = ̟∈G G ̟ (s; k, ℓ, α) similar to that given in [16, (5.8) (4.40) Here k 1 is the unique element in O K that we have k = k 1 k 2 2 with k 1 being square-free and k 2 ∈ G. It follows from Lemma 2.2 and the above expression of G(s; k, ℓ, α) that we have G(s; ij 2 , ℓ, α) = G(s; −ij 2 , ℓ, α) for j ∈ G. Thus applying (4.39) to (4.38), we see that and where we define for |v − 1| ≤ 1/ log X, and any u with ℜ(u) > 1/2, We further note that around s = 1, where γ K is a constant. This allows us to deduce from (4.41) and (4.42) that (4.43) Note that we have where the last step above follows by noting that we have G(v; i(1 + i) 2 j 2 , ℓ, α) = G(v; ij 2 , ℓ, α). We deduce from this that , and apply the above definition of G in (4.40) and Lemma 2.2 to obtain from (4.44) that This allows us to shift the line of integration in (4.43) to ℜ(s) = 1 log X to see that the integral on the new line is bounded by where we write m 1 m 2 = ℓ 1 ℓ 2 2 , µ 2 [i] (ℓ 1 ) = 1, ℓ 2 ∈ G. Note that (4.41) is equivalent to (4.43), so we can now set c = 1/ log X there. We then drop the condition N (α) ≤ Y in the sum over α and apply an argument similar to that in (4.23) to estimate the sum over N (α) > Y in (4.41) to see that ( 1 log X ) where H(s, w; ℓ, αd).
We can recast P ± given in (4.48) more explicitly as (4.49) Since λ d ≪ N (d) ε by (4.6) and b m ≪ 1 by (1.3), it thus follows from (4.55) that (4.50) 4.11. Evaluation of B: the remainder terms. In this section, we estimate R given in (4.37). We denote , w) .
We apply the Cauchy-Schwarz inequality inequality to see that for an integer K ≥ 1, where k 2 is defined by (2.8). Note that (2.7) implies N (d 1 ) ≥ N (d)/N (α), so that we have where the last estimation above follows from Lemma 2.10 and partial summation. We then deduce from this and (4.53) that (4.54) Applying (4.35) together with the first bound of Lemma 2.15 implies that we may restrict the sum of the right-hand side of (4.54) to K = 2 j ≤ N (α) 2 V J(1 + |w| 2 )(log X) 4 , in which case we apply the second bound in Lemma 2.15 to (4.54) to see that Substituting this into (4.52) and summing over K = 2 j , K ≤ N (α) 2 V J(1 + |w| 2 )(log X) 4 , we deduce via (4.35) that (4.55) As R = N (m 1 m 2 )R(m 1 m 2 , d), we apply the bounds that λ d ≪ N (d) ε by (4.6) and b m ≪ 1 by (1.3) to derive from (4.55) that (4.56) (4.57) We now require that the values of θ, ϑ defined in (1.2) to satisfy θ + 2ϑ < 1 2 .
This way, we see that We move the line of integration in (4.57) to ℜ(s) = − 1 log X to encounter a pole at s = 0. A change of variable s → −s together with (4.61) and (4.64) shows that the integral on the new line is the negative of the original integral in (4.57). It follows that the original integral equals half of the residue of the pole at s = 0. Representing this residue as an integral along the circle |s| = 1 log X and applying (4.59), we see that We proceed to sum over d in (4.65). By doing so, we apply (4.60), (4.62) and (4.63) to encounter the following sums: .
Customized-Language Voice Survey on Mobile Devices for Text and Image Data Collection Among Ethnic Groups in Thailand: A Proof-of-Concept Study
Background: Public health surveys are often conducted using paper-based questionnaires. However, many problems are associated with this method, especially when collecting data among ethnic groups who speak a different language from the survey interviewer. The process can be time-consuming and there is the risk of missing important data due to incomplete surveys. Objective: This study was conducted as a proof of concept to develop a new electronic tool for data collection and compare it with standard paper-based questionnaire surveys, using the research setting of assessing Knowledge, Attitude, and Practice (KAP) toward the Expanded Program on Immunization (EPI) among 6 ethnic groups in Chiang Rai Province, Thailand. The two data collection methods were compared on data quality in terms of data completeness and on the time consumed in collecting the information. In addition, the initiative assessed the participants' satisfaction with the smartphone customized-language voice-based questionnaire in terms of perceived ease of use and perceived usefulness. Methods: Following a cross-over design, all study participants were interviewed using both data collection methods, separated by a one-week washout period. Questions in the Thai-language paper-based questionnaires were translated into each ethnic language by the interviewer/translator when interviewing the study participant. The customized-language voice-based questionnaires were programmed onto a smartphone tablet in six selectable dialect languages and used by the trained interviewer when approaching participants. Results: The study revealed positive data quality outcomes with the smartphone voice-based questionnaire survey compared with the paper-based questionnaire survey, both in terms of data completeness and the time consumed in the data collection process. Since the smartphone questionnaire survey was programmed to ask questions in sequence, no data were missing and there were no entry errors. Participants had positive attitudes toward answering the smartphone questionnaire; 69% (48/70) reported they understood the questions easily, 71% (50/70) found it convenient, and 66% (46/70) reported a reduced time in data collection. The smartphone data collection method was acceptable both to the interviewers and to the study participants of different ethnicities. Conclusions: To our knowledge, this is the first study showing that the application of specific features of mobile devices like smartphone tablets (including dropdown choices, capturing pictures, and voiced questions) can be successfully used for data collection.
Introduction
The paper-based questionnaire survey is the most frequently used method to collect health data, especially in low- and middle-income countries, where resources and technology are normally limited [1,2]. Despite the fact that this has been the standard data collection method for decades, there are still extensively reported problems with paper-based questionnaires, including frequent errors, high storage costs, and high double-data entry costs. Currently, electronic data collection methods have been developed with the aim of merging the processes of data collection and data entry [2]. Many devices, such as personal digital assistants (PDAs) and smartphones, have been adapted to serve as electronic devices for data collection, and all of these are increasingly being used in place of paper-based questionnaires [1,3]. These PDAs and smartphones have limitations, however, particularly when it comes to downloading data, because they require telephone signals or wireless networks. In addition, all data can be corrupted if PDAs or smartphones are misplaced or stolen, and data can be lost if the devices are damaged [4-6]. However, electronic devices for data collection offer the advantages of improved data quality and consistency through the use of automated validation procedures and data range checks. They can integrate different kinds of formats (images, text, voice), which can easily be transferred over long distances through wireless networks. Moreover, electronic devices for data collection have many advantages over paper-based questionnaires: there is no need for data entry or multiple printings (making them budget-friendly), they avoid problems arising from illegible writing, and they can use media during interviews. The disadvantage is that all collected data are digitally stored and no hard copies are available if problems occur during data collection. Thus, electronic devices need to be designed and developed carefully in order to minimize problems and enhance data collection speed over paper-based questionnaires. Previous studies using PDAs, smartphones, and Internet-based devices [4,5,7-9] have reported high response rates and short data collection times.
In the context of poor research infrastructure and increasing demands for large-scale health surveys, the affordability and availability of mobile phones and wireless networks create a viable alternative to traditional paper-and-pencil methods. A study conducted in a peri-urban settlement in South Africa to evaluate the use of mobile phones in surveys by lay community health workers reported that mobile phones offer benefits over PDAs in terms of data loss and uploading difficulties [5]. In contrast, another study compared the completeness of data collection using the paper method and an electronic method using handheld computers in an office-based patient interview survey [4]. A better return rate was found for paper-based than for electronic-based methods, due to technical difficulties experienced with electronic data collection and stolen or lost handheld computers. However, only 0.04% of total items were missing on electronic surveys, whereas 3.5% were missing on paper ones. Although handheld computers produced more complete data than the paper method, they were not superior in terms of return rate because of the large amount of data lost to technical difficulties with the handheld computers [4]. In Uganda, hundreds of health workers have used PDAs provided by the Ugandan Health Information Network to collect health data in the field; these health workers report increased job satisfaction due to the greater efficiency and flexibility provided by the technology [10]. Additionally, a study from India points to users who would not return to paper reporting, since mobile phone reporting saved time [11].
Besides the reported advantages of using smartphones and PDAs during surveys in terms of data completeness and timeliness, mobile-technology devices may also offer an advantage when collecting text and picture data in settings with different ethnic dialects. Several major technological innovations contributing to survey data collection have been developed, including graphic user interfaces and multimedia computing. There have been reports on the effectiveness of auditory and audio-visual communication techniques as part of the data collection method, particularly in multilingual settings [12,13]. A mobile technology device can be customized to speak the specific language of each group, and thus standardize data collection in the field. This study was conducted to develop and test new electronic data capture tools using mobile devices (ie, smartphone tablets) with customized-language, voice-based questionnaires for data collection among 6 ethnic groups who speak different languages. The mobile-survey data capture was compared with the classic paper-based questionnaire method for collecting data on the knowledge, attitude, and practice of hill-tribe mothers/caretakers toward immunization of their children. The main objective of this study was to evaluate differences in data completeness, time consumed, participant satisfaction, and problems encountered during the two methods of data collection.
Study Sites
We conducted this study in 8 villages within the catchment area of Wawi Sub-district Health Promoting Hospital (Mae Suai, Chiang Rai Province, Thailand). These areas comprise 6 ethnic hill-tribe groups who speak different languages and have different cultures, beliefs, and lifestyles. The 6 hill tribes include Akha, Karen, Mien/Yao, Lisu, Lahu, and Yunnan Chinese; most of their spoken languages have no writing system. Figure 1a presents examples of study sites of these minority groups in the highlands.
Participants
A sample of 70 mothers with children older than 5 years in the study areas was recruited. The sampling frame was the list of names in the Wawi database, and the mothers registered there were selected by stratifying by village location and ethnic group. According to the routine activities for management of the Expanded Program on Immunization (EPI) as well as other health promotion under the Thailand Ministry of Public Health, each mother was approached (through a home visit) on a monthly basis by a designated Village Health Volunteer (VHV) responsible for approximately 10 allocated households. Thus, the survey on EPI was conducted by 16 selected interviewers who are VHVs in the study areas. The interviewers were randomly selected and assigned to collect data from the randomly selected hill-tribe mothers/caretakers.
Mobile-Technology Initiatives
The hardware requirement for the mobile device used in the field was that it must run the Java programming language. In terms of technical information about the mobile technology, the programming was done with Eclipse (an open source development tool) on Android. Sixteen smartphone tablet devices were distributed to the 16 VHVs.
The concept and workflow of the smartphone questionnaires are summarized in Figure 2. Three data collection functionalities were programmed and installed on the mobile devices: dropdown choices, picture capturing, and voiced questions. Data were collected offline and could be synchronized to a server database when data collection was complete and telephone signals or wireless networks were available. It should be noted that personal information was treated confidentially within the system applications during the study period. A sketch of this offline store-and-forward pattern follows.
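The actual client was written in Java for Android; the following Python sketch merely illustrates the offline-first store-and-forward pattern described above (record locally, then push when a network is available). The file name and endpoint below are hypothetical.

import json, os, urllib.request

QUEUE_FILE = "pending_records.jsonl"    # local offline store (hypothetical)
SERVER_URL = "http://example.org/sync"  # placeholder endpoint

def save_offline(record: dict) -> None:
    """Append a completed interview record to the local queue."""
    with open(QUEUE_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def sync_when_online() -> int:
    """Push all queued records to the server; keep the queue if the upload fails."""
    if not os.path.exists(QUEUE_FILE):
        return 0
    with open(QUEUE_FILE, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    sent = 0
    try:
        for rec in records:
            req = urllib.request.Request(
                SERVER_URL, data=json.dumps(rec).encode("utf-8"),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=10)
            sent += 1
        os.remove(QUEUE_FILE)  # all records delivered
    except OSError:
        pass                   # no signal: records stay queued for the next attempt
    return sent

This design assumes the server deduplicates records by an identifier, so that re-sending a partially delivered queue after a dropped connection is harmless.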
Dropdown-choice functionality was developed and integrated into the mobile devices. The questions developed into dropdown-choice format included sociodemographic parameters of the hill-tribe mothers/caretakers and their children, such as age, occupation, ethnicity, income and debt status, number of children in the family, type of house construction, drive time to the nearest hospital, vehicle and convenience for travelling to the responsible district hospital in the area, child birth order, place of child birth, whether the child lives with the biological parents, and source of vaccination. These dropdown-choice questions were asked at the beginning of the interview, followed by the other data collection techniques, specifically picture capturing and voiced questions. Respondents' answers were automatically saved to the tablet's memory. Figure 3a presents the dropdown-choice feature on the VHV's smartphone questionnaire.
Picture-capturing functionality is one of the methods for secondary data collection. This study had to collect data from the Maternal and Child Health Handbook (MCHH) regarding immunization history, scheduled vaccination dates, and actual vaccination dates. Thus, capturing a picture of the immunization history recorded in the MCHH booklet was incorporated into the questionnaire set. Collecting such information by individual self-reported interview or by manually extracting data from the book would be quite cumbersome; capturing a picture from the book makes it easier for interviewers and also prevents data from being missed during the data collection process. The immunization history page was captured as a picture on the smartphone data-collection device by the VHVs and saved automatically to its memory; if the picture was not clear, the VHV repeated the picture-capturing process. Figure 3b presents the picture-capturing functionality for secondary data on the VHV's smartphone. After all pictures were synchronized to the server, electronic forms were developed for data entry from the pictures. Immunization history data in the picture (scheduled and actual vaccination dates) were entered onto an electronic form. When all data were entered and submitted by the investigator, the immunization status (completely and incompletely immunized) and vaccine schedule compliance status (on time and out of schedule) were summarized and presented on the mobile device, which became a useful tool for VHVs to monitor the vaccine program of each child. Figure 4a-c presents the data entry from a captured picture and the summary statistics of individual immunization status, as well as compliance with the immunization schedule.
Voiced-question functionality was purposively designed to collect data from the 6 ethnic groups, who speak different languages, most of which have no writing system. The need for voiced questions on the mobile device arose from the fact that the VHVs cannot speak all six languages. This approach also reduced the use of translators during data collection and, in turn, lowered mistranslations and standardized the question-asking process. To develop the voiced questionnaire on the smartphone tablets, the questions were translated into the 6 hill-tribe languages (Akha, Karen, Mien/Yao, Lisu, Lahu, and Yunnan Chinese) as well as Thai and incorporated into the system. This was done by local experts able to speak and read both Thai and their respective dialect language. During data collection, the VHV pressed the bullet button on screen for a question, and the audio of the corresponding question was played to the mothers/caretakers. The VHV then recorded the answers, all of which were multiple choice. In this study, the voiced questions on the smartphone tablet covered the knowledge, attitude, and practice (KAP) of hill-tribe mothers/caretakers toward EPI. Figure 3c presents the voiced-question feature regarding KAP toward EPI on the VHV's smartphone questionnaire; a sketch of the underlying question-to-audio lookup follows.
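The voiced-question mechanism is essentially a lookup from (question, language) to a pre-recorded audio clip. The sketch below is illustrative only: the directory layout and the `player` interface are hypothetical stand-ins for the Android media APIs used in the real application.

```python
# Minimal sketch of the voiced-question lookup; paths are hypothetical.
LANGUAGES = ["thai", "akha", "karen", "mien_yao", "lisu", "lahu", "yunnan_chinese"]

def audio_path(question_id: int, language: str) -> str:
    """Map a question and a language to its pre-recorded clip."""
    if language not in LANGUAGES:
        raise ValueError(f"no recording for language: {language}")
    return f"audio/{language}/q{question_id:02d}.mp3"

def play_question(question_id: int, language: str, player) -> None:
    """player is any object exposing play(path), e.g. a media-player wrapper."""
    player.play(audio_path(question_id, language))
```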
Training of Interviewers
All interviewers had experience using mobile technology (smartphone tablets) from the StatelessVac project, which was funded by a grant from the Bill & Melinda Gates Foundation through the Grand Challenges Explorations initiative [14]. The StatelessVac project was conducted in Chiang Rai Province from April 2012 to April 2013 and aimed to use mobile technology to enhance routine EPI schedule reminders and behavioral change communication among ethnic groups. Although the VHVs had used mobile-phone technology before, the use of mobile technology in this study differed from StatelessVac: here, its purpose was data collection in surveys. Therefore, the 16 selected VHVs received additional training on using smartphone, voice-based surveys and on reinforcing their data collection skills. They were trained for 2 days on the use of both smartphone tablets with voiced questions and the paper-based questionnaire method. Beyond device training, the training also covered communication skills, real-life pilot practice interviews with hill-tribe mothers/caretakers using both data collection methods, explanation of the questionnaires, translation of questions into each dialect language, and practice asking hill-tribe mothers/caretakers for permission with informed consent. All interviewers were trained individually to ensure that the two data collection methods were conducted correctly.
Data Collection
To compare the two data collection methods regarding KAP, the customized-language, voice-based survey and the standard paper-based questionnaire, a cross-over design was adopted. The two groups of hill-tribe mother/caretaker participants were: (1) those surveyed using a smartphone, voice-based then paper-based sequence, and (2) those surveyed using a paper-based then smartphone, voice-based sequence. The mothers/caretakers were randomly allocated to the study groups with different sequences. The washout period between the two data collection methods was set at 1 week after the initial data collection. In most studies of health-status instruments, the time interval between questionnaire administrations has ranged from 2 days to 2 weeks; a study comparing 2-day and 2-week intervals reported no statistically significant differences in test-retest reliability [15]. As a very short interval might lead to carryover effects, and with respect to the study population, a 1-week interval was selected as a compromise to limit recall bias in this study. The 16 VHVs who served as interviewers visited hill-tribe mothers/caretakers one by one during their routine work until the sample size of 70 was reached. The workflow for data collection is summarized in Figure 5. Figure 1b presents data collection using the smartphone, customized-language, voice-based survey in the villages of the study area (note that all pictures presented in the figure were taken with permission from the villagers).
Study Variables
The survey was composed of 16 sociodemographic items, presented in typical multiple-choice fashion on the paper-based questionnaires and as dropdown choices on the electronic questionnaires of the smartphone, customized-language, voice-based application. Picture capturing was available on the mobile devices only. There were 55 voiced questions, including 25 items on knowledge of EPI vaccines, 15 items on attitude toward vaccination, and 15 items on practices in preparing for vaccination. The outcomes of this particular study were not the analysis of vaccine coverage or KAP issues (which will be described in another study) but comparisons of methodological outcomes from the two data collection methods. The main outcomes of interest were data quality, comprising data completeness, time consumed, and participants' satisfaction with the data collection methods. Data completeness and time consumed were based on repeated surveys of the same participant with the different methods. Participants' satisfaction with the mobile technology survey was assessed using a simple Technology Acceptance Model (TAM) [16,17] in terms of perceived ease of use and perceived usefulness. Questions included how easy the survey had been to answer, whether the participant understood the questions clearly, their convenience while answering, and whether the electronic data collection had saved time. The satisfaction data on the smartphone, voice-based questionnaire survey method were collected from mothers/caretakers after the data collection process was finished.
Ethical Considerations
This study was part of the project "Assessment of Expanded Program on Immunization Coverage and its Determinants among Hill Tribe Children, Wawi Sub-District, Mae Suai, Chiang Rai Province, Thailand" (awaiting publication). The project was reviewed and approved by the ethics committee of the Faculty of Tropical Medicine, Mahidol University (Thailand). This study involved vulnerable research participants belonging to ethnic groups in Chiang Rai Province, Thailand. All participants were informed of all details regarding the study and asked to sign an informed consent form, presented in their dialect language, before participating. The document was translated by VHVs.
There was no identification of respondents' names or family names on the case record forms. Individual information was kept completely confidential by the researchers during data collection and analysis. Respondents were able to stop an interview at any time and did not need to give a reason for withdrawing their consent.
Picture Capturing of Data
During the study period, 70 hill-tribe mothers/caretakers were randomly selected from the 363 mothers in the EPI-coverage assessment project. As part of the smartphone-survey data collection initiative, the picture-capturing function was developed and used to collect the history of immunizations over the years from the MCHH booklet. Pictures of all pages with EPI records in each child's booklet were taken and then directly transcribed into the database. The data entry screen was designed to correspond to all data fields on the booklet pages, and all data were manually entered into the system for further analysis of EPI coverage. The results of EPI coverage were beyond the purpose of this study.
Completeness of Data
To assess the effectiveness of the two data collection methods, 35 study participants were randomly assigned to each of the two arms of the cross-over between the standard paper-based questionnaire survey and the smartphone, voice-based survey. The questions on the smartphone tablets were programmed to collect and store answers sequentially with no skip pattern, and thus no data entry errors occurred after data synchronization. Hence, the smartphone, voiced-question method produced significantly more complete answers than the paper-based questionnaire method: 69% (48/70) of hill-tribe mothers/caretakers answered all questions completely with the paper-based method, as opposed to 100% (70/70) with the smartphone, voice-based method. Of the 55 KAP items, the number of questions with incomplete answers on the paper-based survey ranged from one item (among 17/70, 24% of respondents) to six items (1/70, 1% of respondents). As shown in Table 1, the separate group analysis reveals that the sequence of data collection methods (Group 1: paper before voice; Group 2: voice before paper) had a certain effect on data completeness. The incomplete answers with the paper-based method in Group 1 still ranged from one item (10/35, 29% of respondents) to six items (1/35, 3% of respondents), whereas in Group 2 only one item (7/35, 20% of respondents) was incomplete.
Time Consumed During Data Collection
During the data collection process with both the smartphone, voice-based and paper-based questionnaire methods, the time consumed was measured. Time consumed was calculated in minutes from the time the first question was asked until the last KAP question was answered. Table 2 shows the mean duration of data collection and the frequency of participants in different time slots for answering all questions. The mean duration was 32.39 minutes for paper-based data collection, compared with 22.51 minutes for the smartphone, voice-based survey. Of all participants, 37% (26/70) and 49% (34/70) spent 0-15 and 16-30 minutes, respectively, on data collection with the smartphone, voice-based method, compared with 23% (16/70) and 43% (30/70) for the paper-based questionnaire method. The time consumed differed significantly between the two methods; the smartphone, voice-based questionnaire method was significantly faster than the paper-based method (P<.001). When the two sequences of data collection methods (paper-voice and voice-paper groups) were analyzed separately, significantly shorter times for the smartphone, voice-based method were still observed.
Participants' Satisfaction
All 70 hill-tribe mothers/caretakers were assessed for their satisfaction with being interviewed by the smartphone, customized-language, voice-based questionnaire survey after they finished the data collection process. The percentages of those who scored ≥80% in each aspect of satisfaction were: 57% (40/70) for ease of answering using the voiced questionnaire, 69% (48/70) for understanding questions clearly, 71% (50/70) for convenience of responding to questions, and 66% (46/70) for reduced time for data collection (Table 3). Many of the hill-tribe mothers/caretakers had a high level of satisfaction with the voice-based questionnaire survey method and expressed their approval of mobile technology being used for surveys, despite a few minor problems during data collection (eg, the system halted for a few seconds, or a poor-quality picture was captured by the VHVs).
Principal Findings
The technical challenge of applying novel ideas to develop customized-language, voice-based questionnaire functionalities on smartphones for data collection was manageable. Using a smartphone tablet for surveys in the field does not require a continuous telephone signal or wireless network, because data can be collected offline and synchronized onto the server database later. In this study, signal availability varied, as the villages were located in remote highland areas; survey activities were therefore conducted offline and the data were synchronized when telephone signals or wireless networks became available. This study confirms that data collection via smartphone can be an alternative to paper-based questionnaires, even in remote/rural areas, without the constraint of signal availability [5,9,18]. Although the survey data were transmitted directly from the study tablets to the secured database server at the central office, there was no data encryption mechanism while transferring information to the server. This feature will be implemented in a future design.
The results of this study suggest that the novel functionalities of picture capturing and voiced questions on smartphones can improve data quality in terms of data completeness, which affects the validity and reliability of study outcomes. This supports the findings of other data-management studies [19,20]. The same result was found in another study in Thailand [21], in which mobile-phone devices were used to improve antenatal care (ANC) and EPI services in border areas; it suggested that cell phones developed and integrated into the health care system could be used successfully, especially in low-resource settings [21]. Many previous studies reported the use of handheld computers (PDAs) for data collection [1,4,18,22], but a few revealed numerous instances of missing data caused by technical difficulties or by loss or theft of the devices. Despite such problems, mobile technology remains a feasible, acceptable, and preferred data collection method, because it can produce more complete data than the classic paper-based method. Many researchers conducting studies with PDAs [4,22] suggested the use of other hardware solutions, such as tablet smartphones or cell phones, similar to what was done in this study. However, generalizing this study's results on data completeness could be limited by the cross-over design and small sample size. Even with a washout period between the two data collection methods, participants may still have remembered the 55 KAP questions asked with the first method; thus, the comparison measured data completeness rather than the correctness of the KAP answers. The focus of this study was to compare the data quality of the two methods in terms of the completeness, rather than the correctness, of the answers.
Picture capturing for secondary data entry on smartphones demonstrates the successful application of new technology to collect data, especially from ethnic minorities who speak different languages. If the data were collected on paper-based questionnaires, there would be two opportunities for transcription errors: from the mothers' interviews or booklet extraction to the paper questionnaire, and then from the paper questionnaire to the computer database. In contrast, there should be fewer transcription errors when using picture capturing for secondary data, because one step of the data collection process is cut out; data are entered directly from the picture into the computer database. As the purpose was to prove the concept of using picture capturing for secondary data, the current system was not yet designed with a double-data-entry feature for cross-checking data validity. A double-data-entry function should be considered in future electronic questionnaire survey designs. These features show how technology can serve as a support tool to facilitate or enhance the work of interviewers and to increase the efficiency of survey data collection through automated interviewing systems and instruments [12,13].
In this study, the use of the smartphone, voice-based questionnaire to collect data on KAP regarding EPI, compared with the paper-based questionnaire, showed that the collected data were more complete, as the program was designed to ask the questions in sequence automatically. There were incomplete answers with the paper-based data collection method but none with the smartphone, voiced-questionnaire survey method. This may be because, during paper-based data collection, some respondents and/or interviewers unintentionally skipped a required question, and a few respondents decided not to answer some questions. In contrast, the smartphone survey program does not allow respondents to skip any required question. This finding suggests that future electronic questionnaire designs should be more flexible, allowing respondents to skip questions if they wish. The result was similar to the findings of household surveys in South Africa that used mobile phones as data collection instruments [4,5,23]. Furthermore, previous studies showed that data recording, transferring, and entry mistakes were not found when electronic devices (PDAs, mobile phones, tablets) were used for data collection [8,24,25].
Not only did this study confirm the data completeness benefit of the smartphone, voice-based questionnaire, but its results also suggest that smartphone, voice-based questionnaire surveys reduced the time consumed for data collection. These results were similar to the findings of other studies that reported time spent on data collection using mobile technology [3,26-28]. Moreover, the study showed that, during the KAP survey, using smartphone, voice-based questionnaires customized to the participant's own dialect resulted in substantial time savings over paper-based questionnaires. However, a limitation of the smartphone, voice-based questionnaires in this study was that none of the voiced questions were open-ended. With existing technology, collecting open-ended voice answers is possible, but it requires programming and adapting a voice recognition solution. If open-ended voice questions are needed, one should plan the data collection tool with such system functionality. As stated in previous studies, when planning to collect data electronically one should consider the appropriateness of the methods in accordance with the study design, types of data collected, and characteristics of the study participants [7,29].
A previous study in clinical research compared users' satisfaction with two devices for electronic data collection, laptops and handheld computers [25]; users found the laptops easier, faster, and more satisfying to use than the handheld computers. This is similar to the findings of a study in China on handheld data collection tools [22]: although some technical problems occurred during data collection, it remained a feasible, acceptable, and preferred method among Chinese interviewers. Previous studies assessing the acceptability and feasibility of mobile phone applications for health care delivery among health care workers reported conflicting results: the health care workers expressed positive perceptions of mHealth, but poor uptake in their actual practice was demonstrated [30]. The survey in this study was conducted as part of routine home visits by the community health providers (VHVs), who work closely with the villages; they reported that the use of smartphones could enhance their routine health care work in EPI coverage assessment and could be used to educate hill-tribe mothers/caretakers about EPI vaccines after data collection was finished. Besides the interviewers' satisfaction, this study also assessed participants' satisfaction with data collection via smartphone questionnaires. The results on users' satisfaction with smartphone, voiced-questionnaire surveys confirmed that underserved, hard-to-reach, and mostly illiterate users in remote areas accepted the technology. The technology acceptance questions used in this study suggest that high percentages of hill-tribe villagers thought that the customized-language, voice-based questionnaire method helped them understand the questions better and enabled them to answer accordingly. The survey of VHVs' satisfaction and their overall opinions could be used to improve the design of future electronic questionnaire surveys.
One limitation of this study was that it was a small-scale survey; thus, the costs of conducting the smartphone questionnaire surveys were higher than those of the paper-based questionnaires. However, the costs of the two data collection methods would be similar if the study were conducted on a larger scale. Both methods have different costs depending on study scale; nevertheless, smartphone questionnaires would still be a feasible alternative for collecting field data, even in a low-resource setting [11,18].
The strength of this study was the data collection tool, a smartphone questionnaire with voiced questions that asked all participants in their dialect languages. As the participants of this study speak different languages and have varied beliefs and cultures, developing and implementing the tool was a challenge. Many of these languages have no writing system; thus, most studies among hill-tribe people have used structured, paper-based questionnaires with interviewers and/or translators to collect data [31-34]. This classic method might affect the validity and reliability of the data, because the way interviewers/translators pose or translate the questions might induce participants to answer in a certain direction, and when asking the same question to different participants the interviewers/translators may use different words and meanings. In developing the smartphone, voice-based questionnaire survey for this study, all questions were first translated and recorded as electronic files in the 6 hill-tribe languages (Karen, Akha, Mien/Yao, Lisu, Lahu, and Yunnan Chinese) as well as Thai; the translated questions were then cross-checked for clear and correct translation before the tool was piloted in the field with local people who speak each language. The final version was used in the study after a few revisions. The results of this study suggest that mobile devices with customized-language, voice-based functionality have a potential benefit in minimizing information bias.
Conclusions
This is the first study to present novel smartphone survey-questionnaire features with picture-capturing and voiced-question functionality. The mobile device can be used effectively for photographing secondary data and for collecting primary data with a customized-language, voiced-questionnaire survey. This study was conducted as a proof of concept that smartphone, customized-language, voice-based questionnaire surveys can be used successfully for data collection, especially in studies involving people speaking multiple languages in the study setting. The benefits of using mobile technology to collect data include reduced time spent on data collection and data entry, and minimized or eliminated missing data when collecting data in the field. Moreover, this study suggests that both interviewers and participants accepted and preferred the technology in terms of its ease of use and the usefulness of the device functionalities (dropdown choices, picture capturing, and voiced questions). There has been an increasing trend of using mobile technology in health care services and health research in the past decade, and further studies can explore several potentially beneficial features of mobile technology applications in survey research, including character/voice recognition, geospatial/temporal data management, and animated questions.
Spatial Autoregressive Model for von-Mises Fisher Distributed Principal Diffusion Directions
The principal diffusion directions are one of the most important statistics derived from diffusion tensor imaging (DTI). They are directional data that depict the anatomical structures of brain tissues. However, only a few approaches are available for covariate-dependent statistical modeling of principal diffusion directions. We thus propose a novel spatial autoregressive model, assuming that the principal diffusion directions are von Mises-Fisher (vMF) distributed directional data. Using a novel link function relying on the transformation between Cartesian and spherical coordinates, we regress the vMF-distributed principal diffusion directions on the subject's covariates, measuring how clinical factors affect the anatomical structures. The spatial residual dependence along fibers is captured by an autoregressive model. Key statistical properties of the model and a comprehensive toolbox for Bayesian inference of the directional data, with applications to medical imaging analysis, are thoroughly developed. Numerical studies based on synthetic data demonstrate that our model yields more accurate estimation of the effects of clinical factors. Applying our regression model to the Alzheimer's Disease Neuroimaging Initiative (ADNI) data, we obtain new insights and illustrate the most interesting ones here for reasons of space. The results are more insightful and meaningful because inference is performed on the principal diffusion directions themselves. Based on the range of $E[m_{gkj} \mid \text{data}]$ values for each predictor, we conclude that APOE has the largest effect among all, followed by MMSE and Age.
Introduction
Tissue micro-structure of the human brain is an important medical characteristic in clinical and surgical anatomy. With the development of neuroimaging techniques, diffusion tensor imaging (DTI) has become a powerful tool to measure these structures in a non-invasive way (Soares et al., 2013). The rich use of DTI in brain science has triggered many meaningful interdisciplinary studies. For example, using DTI, we measure the voxel-by-voxel white matter tracts within a tissue of the human brain. The depiction in the left panel of Figure 1 is the anatomical structure measured at a voxel, and the movement of water molecules can be characterized by the respective diffusion ellipsoids (right panel). The diffusion tensor coefficient
$$D = [E_1\; E_2\; E_3]\, \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\, [E_1\; E_2\; E_3]^T$$
describes the anatomical structure at the voxel. In this way, we obtain an image whose voxel-wise variables are positive definite diffusion tensors describing the anatomical structure of a tissue. In addition, several other summary features (e.g., fractional anisotropy) are derived from the diffusion tensors for different clinical applications.
DTI data have a variety of clinical uses. In recent years, several methodological approaches have been introduced to handle the statistical randomness of the diffusion tensors (e.g., Schwartzman et al., 2008; Zhu et al., 2009; Yuan et al., 2012; Lee and Schwartzman, 2017; Lan et al., 2021). For example, Schwartzman et al. (2008) adopted a Gaussian distribution for symmetric matrices to model diffusion tensors and developed statistical inference tools for eigenvalues and eigenvectors when samples are drawn from that distribution. More recently, Lan et al. (2021) proposed a spatial random process based on the Wishart distribution for spatial modeling of diffusion tensors.
White matter alignment of the brain is investigated in several clinical applications.
Among various markers for white matter fibers, one of the most important is the DTI-derived principal diffusion direction, i.e., the principal eigenvector $E_1$. These are interpreted as tangent directions along fiber bundles at the corresponding voxels (Figure 2).
The estimated diffusion directions are then used as input for tractography algorithms to reconstruct fiber tracts (Wong et al., 2016) and obtain white matter structural connectivity profiles. For clinical applications, low-resolution summaries of networks, called connectomes, are usually constructed from these connectivity profiles under some parcellation of the brain, and their associations with subjects' covariates are studied (Roy et al., 2019; Zhang et al., 2019; Guha and Rodriguez, 2020). However, such summarization may often be crude, which could lead to inefficient statistical inference. In this paper, we therefore study the association between the principal diffusion directions and subject-level covariates (e.g., age, gender, disease status) directly, to reveal the factors driving the brain's anatomical structures. To the best of our knowledge, only a limited number of methods have been developed to investigate such associations. The challenges in modeling principal diffusion directions are similar to those in the existing work on modeling diffusion tensors, as both are on a manifold. In multivariate statistics, the principal diffusion directions belong to the class of directional data (Mardia, 2014). Directional data of dimension $p$ lie on the $(p-1)$-dimensional sphere, denoted $S^{p-1}$ (Mardia and Jupp, 2009, Section 9.3.2); since we focus on applications to DTI in this paper, we use $p = 3$ throughout. Since the most commonly used multivariate distributions (e.g., the multivariate normal distribution) are supported on Euclidean space, they cannot adequately characterize directional data, sacrificing both geometric interpretability and statistical reliability. To tackle this issue, we propose a statistical model relying on the von Mises-Fisher (vMF) distribution (Mardia, 1975), a classical distribution for directional data that provides a parsimonious parameterization quantifying the mode direction and its corresponding variation. Relying on a vMF-distributed error model, we propose a spatial generalized linear model to infer the effects of covariates on diffusion directions. We consider a so-called scaled Cartesian-spherical link function relying on the transformation between Cartesian and spherical coordinates, which allows us to project the diffusion direction onto Euclidean space. This innovative link function also enjoys the desired monotonicity property and provides a platform to regress the diffusion direction on the subject-specific covariates.
Furthermore, spatial dependence is a very important characteristic in neuroimaging applications (e.g., Reich et al., 2018; Lan et al., 2021), as it helps to capture additional variation. The key step can be to construct a spatial correlation function on the diffusion direction space, as discussed in Kang and Li (2016). In previous works (e.g., Wong et al., 2016; Lan et al., 2021), spatial dependence based on Euclidean distance was implemented. One important feature of our proposed methodology is that we take the fiber tractography information into consideration. Thus, the spatial variation of the principal diffusion directions is assumed to be auto-correlated along a fiber, from its beginning to its end. To accommodate this special type of spatial dependence, we consider an autoregressive model that induces sequential dependence along the fiber, as suggested in several discussions (Zhu et al., 2011; Kang and Li, 2016).
Our methodological development is primarily motivated by the Alzheimer's Disease Neuroimaging Initiative (ADNI) study (Mueller et al., 2005). Based on their cognitive performance, subjects are categorized into different clinical groups, namely healthy controls (CN), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and Alzheimer's Disease (AD). The subject-specific covariates (e.g., age, gender, and genetic information) may have heterogeneous effects on the response; we thus specify the regression model with group-specific coefficients. Furthermore, due to the manifold nature of the data, we develop a novel toolbox for Bayesian angular inference of principal diffusion directions, addressing the key questions in a clinical study. We validate the performance of our model using both the real ADNI data and synthetic data generated to mimic the ADNI dataset. Our proposed model registers overwhelmingly better performance than competing models, which demonstrates the importance of our proposal.
Our application to ADNI data also reveals several insightful scientific findings using our toolbox of Bayesian angular inference for principal diffusion direction.
To the best of our knowledge, this is the first paper to propose a spatial generalized regression model for a unit vector-valued response. The rest of the paper is organized as follows. In Section 2, we provide the details of our motivating data. Driven by the motivating data, we present our methodology in Section 3 and our Bayesian angular inference toolbox in Section 4. The model comparison is given in Section 5. Finally, we apply our proposed model to the ADNI dataset in Section 6 and end with some concluding remarks in Section 7. All supplementary sections are collected in the Supplementary Materials.
ADNI Data
Our proposed methodology is motivated by the ADNI data (Mueller et al., 2005). ADNI is a multi-site study that aims to improve clinical trials for the prevention and treatment of Alzheimer's disease. Scientists at 63 sites in the US and Canada track the progression of AD in the human brain, and diffusion tensor imaging is one of their measures. In this paper, we focus on ADNI-2, which started in September 2011 and lasted five years (Aisen et al., 2010). We randomly selected 30 subjects from each of the groups CN, EMCI, LMCI, and AD to create our study cohort. Let $i = 1:N_g$ index the subjects in clinical group $g$, where $N_g = 30$ and $g \in \{\text{CN}, \text{EMCI}, \text{LMCI}, \text{AD}\}$ in our data analysis. For each subject, we collect subject-level information including age, gender, mini mental state examination (MMSE) score, and Apolipoprotein E (APOE) information. Age and gender are basic demographics. The MMSE is a performance-based neuropsychological test whose score ranges between 0 and 30; a subject with Alzheimer's disease is usually associated with a lower MMSE. APOE is polymorphic, with three major alleles (ε2, ε3, and ε4). Generally, the ε4 variant is the largest known genetic risk factor for Alzheimer's disease in a variety of ethnic groups (Sadigh-Eteghad et al., 2012). Therefore, we use APOE-ε4 as a binary indicator of whether a subject carries the ε4 variant. Many recent studies reveal that brain fiber tracts associated with cognitive performance play important roles in the progression of Alzheimer's disease. Among these fiber tracts, the fornix (Oishi et al., 2012; Nowrangi and Rosenberg, 2015) and the corpus callosum (Teipel et al., 2002; Di Paola et al., 2010) are particularly important. In terms of brain anatomy, the fornix is a C-shaped bundle of nerve fibers in the brain; the corpus callosum is a thick nerve tract, consisting of a flat bundle of commissural fibers, beneath the cerebral cortex. In this paper, we focus on these two tracts and consider the tractography atlases of Yeh et al. (2018) (see Figure 3). In each fiber tract, there are $K$ fibers tracked by a fiber tracking algorithm. From the tractography atlas of a given fiber tract (e.g., fornix or corpus callosum), we identify $j = 1:J_k$ voxels along a fiber, starting from one end of the $k$-th fiber to the other.
vMF Regression for Principal Diffusion Directions
For the $i$-th subject of the $g$-th clinical group, we use $E_{gikj}$ to denote the principal diffusion direction measured at the $j$-th voxel on the $k$-th fiber. We let $X_{ig}$ denote the design matrix containing the covariate information (including the intercept) of the $i$-th subject of the $g$-th clinical group. To tackle the clinical problem of how covariate effects drive the variation of principal diffusion directions, we provide in this section a methodology that regresses the principal diffusion directions in sphere space on the subject's covariates in Euclidean space.
vMF Distribution
The principal diffusion directions $E_{gikj}$ are on $S^2$ by definition. To accommodate this, we let $E_{gikj}$ follow a vMF distribution, a popular probability distribution for characterizing the randomness of directional data (Mardia, 1975). The probability density function of the vMF-distributed $E_{gikj} \sim \mathrm{vMF}(\mu_{gikj}, \kappa)$ (see Mardia and Jupp, 2009, Equation 9.3.4) is
$$f(e_{gikj};\, \mu_{gikj}, \kappa) = C_3(\kappa)\, \exp\!\big(\kappa\, \mu_{gikj}^T e_{gikj}\big), \qquad C_3(\kappa) = \frac{\kappa}{4\pi \sinh \kappa}, \tag{1}$$
where $\|\mu_{gikj}\| = 1$ and $\|\cdot\|$ stands for the $\ell_2$ norm.
In the above density function, the principal diffusion direction $e_{gikj}$ contributes only through the term $\mu_{gikj}^T e_{gikj}$. This term is essentially $\cos \delta(\mu_{gikj}, e_{gikj})$, where $\delta(\mu_{gikj}, e_{gikj})$ is the separation angle between the unit vectors $\mu_{gikj}$ and $e_{gikj}$. This implies that $\mu_{gikj}$ is the mode direction, since $e_{gikj} = \mu_{gikj}$ maximizes the likelihood, and the likelihood increases as the angle between $\mu_{gikj}$ and $e_{gikj}$ decreases. The density function (Equation 1) is maximized at $\mu_{gikj}$ and minimized at $-\mu_{gikj}$. The concentration parameter $\kappa$ controls how tightly the distribution concentrates around the mode direction $\mu_{gikj}$: the tangential component $(I - \mu_{gikj}\mu_{gikj}^T)E_{gikj}$, a vector describing how closely $E_{gikj}$ concentrates around the mode direction, converges in probability to 0 as $\kappa \to \infty$ (Mardia and Jupp, 2009, Equation 9.3.15); see Figure 4.

Figure 4: In Panel (a), the arrow represents the mode direction $\mu_{gikj}$; the dashed blue arrows represent the confidence region $C$ where $Pr(E_{gikj} \in C) = 1 - \alpha$; as $\kappa \to \infty$, the region $C$ becomes narrower. In Panel (b), the yellow arrow represents the mode direction $\mu_{gikj}$, the blue arrow represents the random vector $E_{gikj}$, and the green arrow represents $R_{gikj} \in S^2$; $R_{gikj}$ and $\mu_{gikj}$ are orthogonal to each other.
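To make the vMF model concrete, the Python sketch below (an illustration, not the authors' code) evaluates the log density of Equation 1 and draws exact samples for the $p = 3$ case by inverting the closed-form CDF of $t = \mu^T E$, whose density on $[-1, 1]$ is proportional to $e^{\kappa t}$.

```python
import numpy as np

def vmf_logpdf(e, mu, kappa):
    """log f(e; mu, kappa) on S^2 with C_3(kappa) = kappa / (4*pi*sinh(kappa))."""
    # log C_3(kappa), written in an overflow-safe form
    log_c3 = np.log(kappa) - np.log(2 * np.pi) - kappa - np.log1p(-np.exp(-2 * kappa))
    return log_c3 + kappa * np.dot(mu, e)

def vmf_sample(mu, kappa, rng=None):
    """One exact draw: sample t = cos(angle from mu), then a tangent direction."""
    rng = rng or np.random.default_rng()
    u = rng.uniform()
    t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa  # inverse CDF
    v = rng.normal(size=3)                  # random direction orthogonal to mu
    v -= mu * np.dot(mu, v)
    v /= np.linalg.norm(v)
    return t * mu + np.sqrt(max(0.0, 1.0 - t * t)) * v
```

For example, draws from `vmf_sample(mu, kappa=50)` concentrate tightly around `mu`, matching the narrowing confidence region in Figure 4 as $\kappa$ grows.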
Linking to Predictors
Modeling covariate effects and spatial dependence is easier in Euclidean space. However, directional data lie on a manifold, so it is not optimal to represent them using the Cartesian coordinate system. Instead, we use spherical coordinates to obtain a transparent representation of the directions, where the new set of parameters (azimuth angle and elevation angle) is supported on a Euclidean space. The two new parameters, the azimuth angle $\theta \in [-\pi, \pi]$ and the elevation angle $\varphi \in [-\frac{\pi}{2}, \frac{\pi}{2}]$, are the two inputs of the spherical coordinates (see Figure 5), where the azimuth angle $\theta$ is the counterclockwise angle in the x-y plane measured from the positive x-axis and the elevation angle $\varphi$ is measured from the x-y plane. Let $(x, y, z)$ be the three inputs of the Cartesian coordinates; the transformation between Cartesian and spherical coordinates is
$$x = \cos\varphi \cos\theta, \qquad y = \cos\varphi \sin\theta, \qquad z = \sin\varphi.$$
We use $u(\cdot): S^2 \to [-\pi, \pi] \times [-\frac{\pi}{2}, \frac{\pi}{2}]$ to denote this projection. By scaling the two radians ($\theta$ and $\varphi$) to $(0, 1)$, we further use the logit function $g(\cdot)$ to project the values onto the real line. We use $\ell(\cdot)$ to denote this innovative function, which projects directional data in $S^2$ to $\mathbb{R} \times \mathbb{R}$.
We implement $\ell(\cdot)$ as the link function in a generalized linear model (Dobson and Barnett, 2018, Section 3.4); that is, the prediction terms are specified as a function of the covariates $X_{ig}$:
$$\ell(\mu_{gikj}) = \big(X_{ig}\,\alpha_{gkj},\; X_{ig}\,\beta_{gkj}\big), \qquad \ell = g \circ u,$$
where $\circ$ is function composition. Since $u(\cdot)$ and $g(\cdot)$ are both bijective, $\ell(\cdot)$ is bijective.
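The following sketch implements one plausible version of this link (not the authors' code). The rescaling of $\theta$ via $(\theta + \pi)/(2\pi)$ and of $\varphi$ via $(\varphi + \pi/2)/\pi$ is an assumption, since the exact scaling constants are not shown here; any strictly monotone rescaling to $(0, 1)$ preserves the bijectivity argument.

```python
import numpy as np

def u(e):
    """Cartesian unit vector (x, y, z) -> spherical (theta, phi)."""
    x, y, z = e
    theta = np.arctan2(y, x)                 # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(z, -1.0, 1.0))   # elevation in [-pi/2, pi/2]
    return theta, phi

def link(e):
    """ell(e) = (logit o scale o u)(e): S^2 -> R x R (undefined at the seam/poles)."""
    theta, phi = u(e)
    s_theta = (theta + np.pi) / (2 * np.pi)  # assumed scaling to (0, 1)
    s_phi = (phi + np.pi / 2) / np.pi
    return np.log(s_theta / (1 - s_theta)), np.log(s_phi / (1 - s_phi))

def link_inv(a, b):
    """Inverse link: map the two real-valued prediction terms back to S^2."""
    s_theta, s_phi = 1 / (1 + np.exp(-a)), 1 / (1 + np.exp(-b))
    theta = s_theta * 2 * np.pi - np.pi
    phi = s_phi * np.pi - np.pi / 2
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])
```

In the regression, the two outputs of `link` play the roles of the prediction terms $X_{ig}\alpha_{gkj}$ and $X_{ig}\beta_{gkj}$, and `link_inv` recovers the mode direction.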
Autoregressive Modeling
Incorporating spatial dependence while analyzing neuroimaging data is usually important for achieving efficient statistical inference. Different from other neuroimaging applications (e.g., Reich et al., 2018), the key step here is to construct a spatial correlation function supported on the principal diffusion direction space, as discussed in Kang and Li (2016).
In previous works, spatial dependence based on Euclidean distance was employed. These works enjoy traditional geostatistical modeling of spatial statistics (Wong et al., 2016; Lan et al., 2021) but may be suboptimal if the analysis is within fiber tract-based regions of interest.
The spatial profiling of DTI statistics along a fiber tract reveals that the spatial dependence depends on geodesic distance along the fiber rather than Euclidean distance (Wong et al., 2007; Goodlett et al., 2009; Zhu et al., 2011). For example, Wong et al. (2007) show that there is spatial dependency of diffusion parameters along the corticospinal tract in healthy individuals. Goodlett et al. (2009) and Zhu et al. (2011) further propose to induce spatial dependence using arc length distances, computed relative to a fixed end point of the fiber bundle, while modeling scalar diffusion properties (e.g., fractional anisotropy, mean diffusivity). In Figure A.1 in Section A of the supplementary materials, we visualize $E_{gikj}$ of typical fibers to support this dependence assumption. Following these works, we also incorporate spatial dependence based on geodesic distance along a fiber.
With additive spatial residual terms, the link function $\ell(\cdot)$ further eases the task of inducing residual spatial dependence to capture spatial variation.
Through this construction, the residual terms ($\varepsilon_{gikj}$ and $\xi_{gikj}$) depend only on the previous $q$ terms along the fiber, i.e.,
$$\pi(\varepsilon_{gik1}, \ldots, \varepsilon_{gikJ_k}) = \prod_{j=1}^{J_k} \pi\big(\varepsilon_{gikj} \mid \varepsilon_{gik(j-1)}, \ldots, \varepsilon_{gik(j-q)}\big). \tag{2}$$
Such a characterization can be viewed as a special case of Vecchia's method (Vecchia, 1988).
We characterize the conditional densities of the Vecchia decomposition (Equation 2) using an autoregressive model-based framework (Hamilton, 2020). For clarity, we first introduce the definitions and notations related to the autoregressive modeling approach.
An autoregressive process of order $q$ (AR-$q$) can be stated formally as in Definition 1: for $t \in \mathbb{Z}$, we define that $\{X_t : t \in \mathbb{Z}\}$ follows an AR-$q$ process with correlation parameters $\Phi^{(q)} = (\phi_1, \ldots, \phi_q)$ and innovation variance $\sigma^2$ if
$$X_t = \sum_{i=1}^{q} \phi_i X_{t-i} + \epsilon_t, \qquad \epsilon_t \overset{iid}{\sim} N(0, \sigma^2).$$
Given a finite set of indices $t = 1, \ldots, T$, the random variables $[X_1, \ldots, X_t, \ldots, X_T]$ of an AR-$q$ process (Definition 1) follow a mean-zero multivariate normal distribution with variance-covariance matrix $\sigma^2 V(\Phi^{(q)})$, where $V(\Phi^{(q)})$ is a positive definite matrix whose $(i, j)$-th entry is $\gamma_{|i-j|} = \mathrm{cov}(X_i, X_j)/\sigma^2$. The specification of $\gamma_g$ follows Hamilton (2020, Equation 3.4.36). Therefore, the AR-$q$ process (Definition 1) induces a mean-zero multivariate normal distribution with variance-covariance matrix $\sigma^2 V(\Phi^{(q)})$, and we continue to use the process notation for the induced distribution, i.e., $[X_1, \ldots, X_t, \ldots, X_T] \sim N\big(0, \sigma^2 V(\Phi^{(q)})\big)$.
Due to the autoregressive process construction, the joint likelihood can be written similarly to the formulation of Vecchia's method (Equation 2). By applying the AR-$q$ process to our residual terms $\varepsilon_{gikj}$ and $\xi_{gikj}$, we finally have
$$[\varepsilon_{gik1}, \ldots, \varepsilon_{gikJ_k}] \sim N\big(0, \tau_\varepsilon^2 V(\Phi_\varepsilon^{(q)})\big), \qquad [\xi_{gik1}, \ldots, \xi_{gikJ_k}] \sim N\big(0, \tau_\xi^2 V(\Phi_\xi^{(q)})\big),$$
where $(\tau_\varepsilon^2, \Phi_\varepsilon^{(q)})$ and $(\tau_\xi^2, \Phi_\xi^{(q)})$ are the corresponding parameters of the AR-$q$ models. The above two vectors are independent over $g = 1:G$, $i = 1:I_g$, and $k = 1:K$.
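As an illustration (not the authors' implementation), the sketch below simulates AR-$q$ residuals along a fiber and constructs the implied covariance $\sigma^2 V(\Phi^{(q)})$ by solving the Yule-Walker equations for the autocovariances $\gamma_k$.

```python
import numpy as np

def simulate_ar(phi, tau2, J, burn=200, rng=None):
    """Simulate J residuals of an AR-q process with innovation variance tau2."""
    rng = rng or np.random.default_rng()
    q = len(phi)
    x = np.zeros(J + burn)
    for t in range(J + burn):
        past = sum(phi[i] * x[t - 1 - i] for i in range(min(q, t)))
        x[t] = past + rng.normal(scale=np.sqrt(tau2))
    return x[burn:]                        # residuals at voxels j = 1..J

def ar_covariance(phi, sigma2, J):
    """Covariance sigma2 * V(Phi) of [X_1..X_J] via the Yule-Walker equations."""
    q = len(phi)
    A = np.zeros((q + 1, q + 1)); b = np.zeros(q + 1); b[0] = sigma2
    for k in range(q + 1):        # equations: gamma_k - sum_i phi_i gamma_|k-i| = b_k
        A[k, k] += 1.0
        for i in range(1, q + 1):
            A[k, abs(k - i)] -= phi[i - 1]
    gamma = list(np.linalg.solve(A, b))    # gamma_0 .. gamma_q
    for k in range(q + 1, J):              # extend by the AR recursion
        gamma.append(sum(phi[i] * gamma[k - 1 - i] for i in range(q)))
    return np.array([[gamma[abs(i - j)] for j in range(J)] for i in range(J)])
```

As a sanity check, for a stationary AR(1) with coefficient $\phi$, `ar_covariance([phi], 1.0, J)` reproduces the familiar pattern $\gamma_k = \phi^k / (1 - \phi^2)$.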
The MCMC scheme consists of Metropolis-Hastings and Gibbs steps. Based on the posterior samples, we infer the properties of the underlying data mechanism in a Bayesian paradigm. However, the principal diffusion direction data are directional on the sphere, making traditional Bayesian inference suboptimal. In the next section, we therefore introduce a novel Bayesian angular inference framework, motivated by the important scientific questions related to diffusion directions. In all the numerical studies presented in the following sections, we collect 3,000 MCMC samples after discarding 2,000 as burn-in for Bayesian inference. The implementation code is attached in Section C of the supplementary materials.
Bayesian Angular Inference for Principal Diffusion Directions
In this section, we introduce our novel Bayesian angular inference for principal diffusion directions. The proposed inference framework not only enriches the toolbox for statistical inference of directional data, but also tackles the scientific questions related to DTI analysis. In Section 4.1, we introduce a novel angular expectation, which provides a more appropriate metric for inferring directional statistics. Based on the convenient tool of angular expectation, we further introduce tangent-normal decomposition of covariate effects (Section 4.2) and regions of differences characterized by separation angle (Section 4.3) to efficiently quantify covariate effects and regions of differences.
Angular Expectation
In general, for a random unit vector $X$, the angular expectation is defined as
$$\mathcal{A}X = \arg\min_{x \in S^{p-1}} \int \delta(x, X)\, \pi(X)\, dX,$$
where $\mathcal{A}$ is introduced as an operator returning the angular expectation, $p$ is the dimension of $X$, and $\pi(X)$ stands for the distribution of $X$.
The angular expectation $\mathcal{A}$ is the main tool of our proposed Bayesian angular inference scheme, providing a more reasonable inferential route. Given a typical subject in group $g$ with design matrix $X_{g0}$, the mode directions $\mu_{g0} = \{\mu_{g0kj} : k = 1:K, j = 1:J_k\}$ are of primary interest, and the corresponding posterior distribution can be expressed as
$$\pi(\mu_{g0} \mid X_{g0}, \text{data}) = \int \pi\big(\mu_{g0} \mid X_{g0}, \alpha_g, \beta_g\big)\, \pi(\alpha_g, \beta_g \mid \text{data})\, d\alpha_g\, d\beta_g,$$
where $\alpha_g$ and $\beta_g$ are the vectors of all corresponding coefficients $\alpha_{gkj}$ and $\beta_{gkj}$ of group $g$ in Model 5, respectively.
The posterior distribution can be approximately learned using the MCMC outputs.
For summarization, the traditional posterior mean is suboptimal, as it does not guarantee that the estimates are always on the unit sphere. We thus empirically estimate the angular expectation using the MCMC samples $\{\mu_{g0}^{(t)} : t = 1:T\}$. Relating to the ADNI data analysis, the angular expectation $\mathcal{A}[\mu_{g0} \mid X_{g0}; \text{data}]$ profiles the anatomical structure after model-based adjustment with the covariate $X_{g0}$.
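A simple, assumption-light way to compute this empirical angular expectation is to restrict the arg min in its definition to the MCMC draws themselves, a medoid-style estimator that automatically stays on the unit sphere. This is only one reasonable choice, and the authors' exact optimizer may differ.

```python
import numpy as np

def separation_angle(x, y):
    """delta(x, y) for unit vectors x and y."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def angular_expectation(samples):
    """samples: (T, 3) array of unit-vector MCMC draws mu^(t).
    Returns the draw minimizing the mean separation angle to all draws."""
    S = np.asarray(samples)
    cosines = np.clip(S @ S.T, -1.0, 1.0)       # pairwise mu^(s) . mu^(t)
    mean_angle = np.arccos(cosines).mean(axis=1)
    return S[np.argmin(mean_angle)]
```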
Tangent-Normal Decomposition of Covariate Effects
A transparent inference framework to illustrate the covariate effects is essential for clinical studies. In our proposed model, the coefficients $\alpha_g$ and $\beta_g$ do not offer a straightforward illustration of the covariate effects on the mode directions. Therefore, we incorporate the tangent-normal decomposition (Mardia and Jupp, 2009, Page 169) to summarize the covariate effects. The specific steps are described below.
Let $X_g$ be the design matrix of a typical subject in clinical group $g$, where the continuous predictors are set to the corresponding sample means and the categorical predictors to the corresponding sample modes of the subjects within clinical group $g$. Now, let $A$ be a covariate whose effect is to be investigated. We let $X_g(A)$ be the design matrix of a hazard subject in clinical group $g$, i.e., one for whom $A$ takes a more hazardous condition (e.g., one year older, a one-point decrement in MMSE, or carrying the ε4 variant). The corresponding predictive posteriors of the mode directions $\mu_g = \{\mu_{gkj} : k = 1:K, j = 1:J_k\}$ under $X_g$ (the typical mode directions) and $\mu_g(A)$ under $X_g(A)$ (the hazard mode directions) can be expressed analogously to the posterior above, and the empirical posterior predictive distributions can be obtained from the MCMC outputs as well. We use $\{\mu_g^{(t)} : t = 1:T\}$ and $\{\mu_g(A)^{(t)} : t = 1:T\}$ to denote the corresponding samples obtained from the MCMC outputs. To understand the covariate effect, we decompose $\mu_{gkj}(A)$ as in Figure 6 into a component along the tangent direction $\mu_{gkj}$ and a component along the normal direction $R_{gkj}$. In terms of principal diffusion directions, the term $R_{gkj}$ can be interpreted as the deviation direction that the covariate $A$ exerts. The scalar $m_{gkj} \in (0, 1)$, which controls the magnitude of this effect, can be used to quantitatively describe the importance of this covariate.
Computationally, the tangent-normal decomposition can be carried out for each MCMC sample $t$; thus we obtain the empirical posterior distributions of $R_{gkj}$ and $m_{gkj}$, denoted as $\{R_{gkj}^{(t)} : t = 1:T\}$ and $\{m_{gkj}^{(t)} : t = 1:T\}$. In practice, we can use $\mathcal{A}[R_{gkj} \mid \text{data}]$ as the Bayesian empirical estimate to spatially profile the deviation directions caused by the covariate effects. Similarly, $E[m_{gkj} \mid \text{data}]$ can be used to quantify the magnitude of the covariate effect. We describe the implementation of the tangent-normal decomposition due to covariate effects in our ADNI application (Section 6.1).

Figure 6: Graphical illustration of the tangent-normal decomposition of covariate effects. The yellow arrow is the typical mode direction; the blue arrow is the hazard mode direction; the green arrow is the deviation direction. The scalar $m_{gkj} \in (0, 1)$ controls the magnitude of the effect.
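The decomposition itself is a two-line computation per MCMC draw, as the sketch below illustrates; the variable names `mu` and `mu_hazard` are placeholders for $\mu_{gkj}^{(t)}$ and $\mu_{gkj}(A)^{(t)}$, and this is not the authors' code.

```python
import numpy as np

def tangent_normal(mu, mu_hazard):
    """Tangent-normal decomposition: mu_hazard = t * mu + m * R with R orthogonal to mu."""
    t = float(np.dot(mu, mu_hazard))          # cosine (tangent) component
    resid = mu_hazard - t * mu                # part of mu_hazard orthogonal to mu
    m = float(np.linalg.norm(resid))          # effect magnitude, in (0, 1)
    R = resid / m if m > 0 else np.zeros_like(mu)  # deviation direction on S^2
    return t, m, R
```

Applying `tangent_normal` to every posterior draw yields the empirical posterior of $(m_{gkj}, R_{gkj})$, from which $\mathcal{A}[R_{gkj} \mid \text{data}]$ and $E[m_{gkj} \mid \text{data}]$ are summarized.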
Regions of Differences Characterized by Separation Angle
Especially in imaging analysis, characterizing regions of differences between clinical groups is critical. For principal diffusion directions, we propose a novel separation angle-based method to characterize regions of difference. For each clinical group $g$, we obtain the angular expectation for each subject at each voxel, $\mathcal{A}[\mu_{gikj} \mid X_{gikj}; \text{data}]$. We then compute the sample angular mean of the subjects in group $g$, defined as
$$\bar{\mu}_{gkj} = \arg\min_{x \in S^{p-1}} \sum_{i=1}^{N_g} \delta\big(x,\, \mathcal{A}[\mu_{gikj} \mid X_{gikj}; \text{data}]\big).$$
In this way, the regions of differences between two groups $g$ and $g'$ can be characterized by the separation angle, defined at voxel $v = (k, j)$ as
$$\Delta_{g,g'}(v) = \delta(\bar{\mu}_{gkj}, \bar{\mu}_{g'kj}).$$
The results of $\Delta_{g,g'}(v)$ can be used to depict the regions of differences between any two clinical groups. We describe the implementation of regions of differences characterized by separation angle in our ADNI application (Section 6.2).
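A minimal sketch of the resulting voxel-wise map is given below (illustrative only, not the authors' code): the group angular mean is approximated by the medoid of the per-subject angular expectations, and $\Delta$ is reported in degrees so that, as in Figure 11, a 5° threshold can be applied.

```python
import numpy as np

def group_angular_mean(directions):
    """directions: (N, 3) per-subject angular expectations at one voxel."""
    D = np.asarray(directions)
    cosines = np.clip(D @ D.T, -1.0, 1.0)
    return D[np.argmin(np.arccos(cosines).mean(axis=1))]

def difference_map(group_a, group_b):
    """group_*: (V, N, 3) arrays over V voxels; returns Delta_{g,g'} in degrees."""
    delta = np.empty(len(group_a))
    for v, (da, db) in enumerate(zip(group_a, group_b)):
        ma, mb = group_angular_mean(da), group_angular_mean(db)
        delta[v] = np.degrees(np.arccos(np.clip(np.dot(ma, mb), -1.0, 1.0)))
    return delta     # e.g., visualize only voxels with delta > 5 degrees
```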
Model Comparison
In this section, we conduct numerical studies on the ADNI principal diffusion direction data (Section 2) and synthetic principal diffusion direction data to demonstrate the performance of our proposed vMF regression in comparison to other traditional alternatives.
The code and relevant data files are attached in Section B of the supplementary materials for reproducing the results. The basic assumption of our proposal is that the principal diffusion directions are realizations of a vMF distribution, $E_{gikj} \sim \mathrm{vMF}(\mu_{gikj}, \kappa)$. This assumption guarantees that $E_{gikj} \in S^2$. However, there is a limiting equivalence between the vMF distribution and the multivariate Gaussian for large $\kappa$, as presented in Lemma 2 (Song and Dunson, 2022, Lemma 1): as $\kappa \to \infty$, the vMF likelihood becomes equivalent to a multivariate Gaussian likelihood with respect to $\mu$.
Hence, we introduce two benchmark methods based on the multivariate Gaussian distribution. To keep the benchmarks parsimonious, the two methods are specified as follows: 1) Gaussian Regression 1: a multivariate regression model of the diffusion directions $E_{gikj}$; 2) Gaussian Regression 2: a multivariate regression model of the transformed directions $\ell(E_{gikj})$. The multivariate regression model of diffusion directions simply treats the diffusion direction as normally distributed, $E_{gikj} \sim N(\mu_{gikj}, \Sigma_1)$, where $\mu_{gikj}(p) = X_{ig} u_{gkj}(p)$, $\mu_{gikj}(p)$ is the $p$-th element of $\mu_{gikj}$ for $p = 1:3$, and $u_{gkj}(p)$ is the corresponding coefficient vector. Alternatively, the multivariate regression model of transformed directions treats the transformed values $\ell(E_{gikj}) = [\tilde{A}_{gikj}, \tilde{B}_{gikj}]^T$ as normally distributed with mean $[\tilde{\theta}_{gikj}, \tilde{\varphi}_{gikj}]^T$, where $\tilde{\theta}_{gikj} = X_{ig} a_{gkj}$ and $\tilde{\varphi}_{gikj} = X_{ig} b_{gkj}$; here $a_{gkj}$ and $b_{gkj}$ are the corresponding coefficients. Gaussian Regression 1 enjoys simplicity but does not guarantee that the support of the principal diffusion directions is $S^2$. Gaussian Regression 2 takes advantage of the proposed novel link function but ignores the randomness induced by the vMF distribution.
First, we apply the models to the motivating ADNI data to measure their performance. We randomly partition the subjects within their clinical groups: 50% of the subjects are treated as training data and 50% as validation data. Based on the model fit to the training data, we obtain the Bayesian posterior expectation $\hat{\mu}_{gikj}$ on the validation data. For Gaussian Regression 1, we obtain the posterior mean of $\mu_{gikj}$ (standardized to be a unit vector); for Gaussian Regression 2 and our proposed vMF regression, we use the angular expectation introduced in Section 4. For the vMF regression, we set $P = 1, \ldots, 5$. To compare the methods, we measure their prediction error on the validation data using two metrics: the separation angle $\delta(\hat{\mu}_{gikj}, E_{gikj})$ and the root square error $\|\hat{\mu}_{gikj} - E_{gikj}\|$.
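For completeness, the two validation metrics are trivial to compute; the sketch below is a direct transcription of their definitions.

```python
import numpy as np

def prediction_errors(mu_hat, e_obs):
    """Return (separation angle, root square error) for unit vectors."""
    angle = np.arccos(np.clip(np.dot(mu_hat, e_obs), -1.0, 1.0))
    rse = np.linalg.norm(mu_hat - e_obs)
    return angle, rse
```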
In Figures 7 and 8, we visualize the prediction errors on the fornix and the corpus callosum, respectively. In terms of the mean error over subjects and voxels, our proposal provides overwhelmingly better performance than the benchmark methods. The advantages of our proposal become more transparent when inspecting the distribution of the prediction errors: the proposed vMF regression shows good stability and reliability, with fewer large outlying errors. This implies that our proposal provides a more convincing result in analyzing the ADNI data.
To further validate our method, we create synthetic principal diffusion direction data.
To ensure that the synthetic data adequately mimic the ADNI data, the design matrices containing patients' characteristics are borrowed directly from our real data; 5 subjects' covariate information from each clinical group is used for the training data and another set of subjects' covariate information is used for the validation data. The coefficients are constructed so that the principal diffusion directions follow the templates in the middle panel of Figure 2. We simulate the data following our proposed full model (Model 5). When simulating the data, we set $P = 5$ and generate the autocorrelation coefficients from $U(-1, 1)$. The variance parameters are all set to 1. We also consider multiple choices of the concentration parameter, $\kappa = 10, 30, 50$, to validate Lemma 2, i.e., that the vMF distribution and the Gaussian distribution become equivalent when $\kappa$ is sufficiently large. We generate 50 replicated datasets in total for each setting.
In Figure A.2 in Section A of supplementary materials, we illustrate the prediction errors over all the subjects, voxels, and replications. Similar to the real data results, our proposal again registers an overwhelmingly better performance than the Gaussian alternative. We also find that larger κ produce better results for the normal distribution-based methods.
This is an expected result due to Lemma 2. Furthermore, the vMF regression with $P = 5$ performs best among $P = 1, \ldots, 9$ when the data are generated with $P = 5$; thus, selecting the $P$ with the best validation performance is our default approach in general.
Bayesian Angular Inference for ADNI Data
In this section, we continue with our ADNI data application and implement Bayesian angular inference to carry out scientific investigations of the ADNI data, revealing aspects of the underlying mechanism of Alzheimer's disease. In Section 6.1, we illustrate the analysis of covariate effects through our proposed tangent-normal decomposition. In Section 6.2, we use the separation angle metric to detect regions of differences. Both approaches provide new insights into the principal diffusion direction data.
Tangent-Normal Decomposition of Covariate Effects
We now analyze the covariate effects by applying the tangent-normal decomposition. As defined in Section 4, the tangent-normal decomposition requires the definition of $X_g$, the design matrix of a typical subject in clinical group $g$. Given our data, we define a clinically meaningful $X_g$ as follows. For each clinical group, we average the continuous covariates (age and MMSE) over all subjects in the group. As the APOE ε4 variant is the largest known genetic risk factor for AD in a variety of ethnic groups (Sadigh-Eteghad et al., 2012), we take a non-APOE-ε4 carrier with average age and MMSE score as typical. Following the state of the art in reducing gender bias (e.g., Chilet-Rosell, 2014; Labots et al., 2018), we set either male or female in $X_g$ to investigate males' or females' covariate effects, respectively. Given $X_g$, we define $X_g(A)$ to be the design matrix of a hazard subject as in Section 4.2. Based on the range of $E[m_{gkj} \mid \text{data}]$ values for each predictor, we conclude that APOE has the largest effect among all, followed by MMSE and Age.
Thus, we illustrate the APOE-based results here. As in many fMRI studies (Trachtenberg, Filippini, Ebmeier, Smith, Karpe and Mackay, 2012; Trachtenberg, Filippini, Cheeseman, Duff, Neville, Ebmeier, Karpe and Mackay, 2012), the APOE effects in the fornix exhibit heterogeneity between the left and right hemispheres of the brain, and the heterogeneity increases with disease severity (see Figure 9). Such heterogeneous effect profiles are also observed for MMSE and Age. However, the magnitudes, quantified by $E[m_{gkj} \mid \text{data}]$, are very small in the case of Age for both the fornix and the corpus callosum. The effect magnitudes for the fornix are in general larger than those for the corpus callosum. It will thus be interesting to run analyses similar to ours for other structural MRI-based markers in the future. More importantly, our novel covariate-dependent analysis of principal diffusion directions allows us to understand how a covariate effect presents itself in terms of directional statistics. For example, via Figure 10, we can learn how the important covariate effect of APOE manifests, since our modeling approach applies to the principal diffusion directions directly. Inspecting the pattern of the deviation directions (green arrows) helps to reveal the physiological disruption of white matter, providing more insightful information for in-depth investigation (e.g., Desikan et al., 2010) of the effects of human traits on the brain's structural anatomy.
Regions of Differences Characterized by Separation Angle
In this section, we investigate the regions of difference across clinical groups. The differences are characterized by separation angles. For better visualization of the changes, Figure 11 shows only the separation angles at voxels whose values exceed 5°. In general, when we compare the clinical groups to the healthy controls, the separation angle increases with increasing severity of cognitive impairment, as expected. In the corpus callosum, the anterior middle regions differ noticeably in the EMCI-to-LMCI and LMCI-to-AD comparisons. These results are consistent with many recent scientific reports (Walterfang et al., 2014; Bachman et al., 2014).
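As a concrete sketch in our own notation (not code from the paper), the separation angle at a voxel can be computed from the two groups' mean direction vectors and thresholded at 5 degrees as in Figure 11:

```python
import numpy as np

def separation_angle(m1: np.ndarray, m2: np.ndarray) -> float:
    """Angle in degrees between two mean direction vectors at a voxel."""
    u1 = m1 / np.linalg.norm(m1)
    u2 = m2 / np.linalg.norm(m2)
    cos = np.clip(np.dot(u1, u2), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Keep only voxels whose separation angle exceeds 5 degrees:
# mask = np.array([separation_angle(a, b) > 5.0
#                  for a, b in zip(dirs_group1, dirs_group2)])
```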
Conclusion and Discussion
In this paper, we develop a novel spatial generalized linear regression framework for modeling diffusion directions using a vMF-distributed error model. The regression model is shown to accurately capture local variation when random diffusion tensors are supported on a sphere. Given the fiber tract-based nature of the data, the spatial variations are subsequently captured by an autoregressive framework. Numerical evaluations on real and synthetic data demonstrate that our proposal performs markedly better than the alternatives. Important scientific findings are obtained through our proposed Bayesian angular inference.
In this paper, cross-sectional data are used for the analysis. However, longitudinal analysis of ADNI data has become more popular in recent years (e.g., Wang and Guo, 2019; Kundu et al., 2019). For ADNI data, longitudinal analysis may be more challenging given the involvement of asynchronous observations, i.e., the neuroimages and the covariates are not measured at the same time. Extending our generalized vMF regression to allow longitudinal analysis is an appealing avenue for future work.
Another important direction will be to analyze the changes in fiber orientations among converted subjects, that is, subjects whose disease status changed during their follow-up visits, with either increasing or decreasing severity. It is thus interesting to identify the factors fundamental to these changes. Our generalized vMF regression may be modified to analyze this important scientific problem.
Loco-regional recurrence of adrenocortical carcinoma: A case report
Introduction Adrenocortical carcinoma (ACC) is a rare and aggressive endocrine malignancy with a high recurrence rate. Approximately half of patients are asymptomatic, while the others experience symptoms due to the tumor's size or hormone secretion. R0 resection, when feasible, is the best treatment option for primary as well as locoregionally recurrent ACC. Case presentation A 20-year-old female who previously underwent open left adrenalectomy for stage III ACC presented with complaints of heaviness and vague discomfort in the left upper abdomen. Current hormonal assays were normal. Imaging revealed a splenic lesion suggestive of recurrence. She underwent elective surgery involving en bloc resection of the spleen, diaphragm, and associated structures. Postoperative recovery was uneventful, histopathology confirmed recurrence, and subsequent PET-CT showed no recurrence. She is currently on mitotane and remains symptom-free with no signs of recurrence since the initial surgery. Clinical discussion Complete (R0) resection, when feasible, for recurrent and metastatic disease has been linked to long-term survival and offers significant palliative benefits, particularly in cases involving symptomatic steroid production. Conclusion ACC has a high frequency of local recurrence; therefore, management of recurrence should be considered from the initial diagnosis. R0 resection of the recurrence is the best potential treatment. Follow-up protocols and improved integration between surgical, oncological, and supportive care departments are crucial for overcoming healthcare challenges in Nepal.
Introduction
Adrenocortical carcinoma (ACC) is a rare and among the most aggressive endocrine malignancies [1]. The annual incidence varies globally from 1 to 2 cases per million [2,3]. It has a bimodal age distribution, in the first and fourth decades of life, and is more prevalent in women [2]. About 50% of patients are asymptomatic, while the rest experience mechanical symptoms due to the tumor's size or present with symptoms related to hormonal secretion [4]. Complete resection of the tumor with an intact capsule is the primary treatment for ACC. Complete tumor resection is a crucial prognostic factor, and invasive tumors should be excised en bloc with the surrounding tissues or organs [5].
ACC can recur at any time, with most recurrences occurring in the first 2 years following surgery [6]. Approximately 25% of patients have isolated locoregional recurrence, which can involve the pancreas, spleen, liver, diaphragm, and retroperitoneum [6]. Even after an apparently curative resection, most patients experience early tumor recurrence or distant metastasis; following complete resection, over 50% of patients will experience a recurrence within five years [6]. Mitotane is frequently employed as a chemotherapeutic agent in treating ACC. Its mechanism involves uptake by the adrenal cortex, where it induces necrosis, resulting in a specific cytotoxic effect. Reports on the use of mitotane in ACC vary, with some suggesting a significant improvement in survival while others do not [7].
There is a paucity of literature on ACC from Nepal and none on the treatment of ACC recurrence. This case report, prepared in adherence with the SCARE 2023 guidelines [8], details the successful management of a locoregional recurrence of a left adrenal tumor to the spleen one year after adrenalectomy.
Case presentation
A 20-year-old female who had undergone open left adrenalectomy (R0 resection) one year earlier for stage III left adrenocortical carcinoma, staged according to the European Network for the Study of Adrenal Tumors (ENSAT), was referred to the department of surgical gastroenterology with complaints of heaviness and vague discomfort in the left upper abdomen for 3 months. At the initial presentation, she had stage III ACC with a proliferative index (Ki-67) of 35% and had been advised and planned for adjuvant mitotane, but she was lost to follow-up. Prior to that presentation, she had facial puffiness and weakness for 2 months, with hypertension, Cushingoid facies, and central obesity. Laboratory investigations at that time revealed a high 8 am serum cortisol level of 61.6 μg/dl (normal <30 μg/dl) and a 24-h urine cortisol level of 1980 μg/day (normal range 58-403 μg/day); adrenocorticotropic hormone was also measured.

At the current presentation, the abdomen was not distended and was nontender, and the systemic examination was normal. Hormonal assays revealed normal cortisol, dehydroepiandrosterone, metanephrine, normetanephrine, and aldosterone-to-renin ratio. A contrast-enhanced computed tomography (CECT) scan revealed a well-defined soft tissue density lesion measuring 4 × 3.1 × 2 cm in the splenic parenchyma, with heterogeneous enhancement in the post-contrast phase and focal calcification (Fig. 1). The lesion abutted the left 10th and 11th intercostal muscles and the left crus of the diaphragm (Fig. 1). A positron emission tomography (PET) CT scan revealed a metabolically active hypodense lesion with focal calcification in the spleen, suggestive of metastasis, without evidence of a recurrent lesion at the operated site (Fig. 2).

With a diagnosis of locoregional recurrence of ACC, she was planned for elective surgery by a team with expertise in oncologic surgery. The incision was made over the previous scar in the left subcostal region. Intraoperatively, there were no ascites, peritoneal or omental deposits, or liver metastases. A 3 × 4 cm soft-to-firm mass on the posterolateral surface of the spleen, with invasion into the lateral dome of the diaphragm, was noted (Fig. 3). In addition, a 1 × 1 cm firm nodule in Gerota's fascia was identified. En bloc resection of the spleen, gastrosplenic ligament, Gerota's fascia, and the left costal margin of the diaphragm was performed, followed by repair of the diaphragm (Fig. 3). The perioperative course was uneventful, and she was discharged on the tenth postoperative day. Histopathology revealed tumor cells arranged in nests and large sheets infiltrating the splenic parenchyma, with a fibrous capsule border and a diaphragm margin free of tumor (Fig. 4). PET-CT done 3 months after surgery revealed no FDG-avid activity at the operated site or elsewhere (Fig. 5). She is currently on mitotane and on regular follow-up, with no symptoms, signs, or laboratory and imaging findings suggestive of tumor recurrence at two years.
Discussion
The clinical presentation of ACC depends on tumor size and hormonal status. Excess production of steroids, androgens, and estrogens is more prevalent than mineralocorticoid excess. The most common and most easily identifiable presentation of functional adrenocortical carcinomas is steroid excess (56%). Patients presenting with steroid excess typically exhibit the classic signs of Cushing's syndrome, which include truncal obesity, rounded facies, buffalo hump, striae, hypertension, glucose intolerance, thinning of the skin, and osteoporosis [9]. Likewise, our patient at her initial presentation had complaints of facial puffiness due to a high cortisol level, and on re-presentation had symptoms probably due to mass effect.
ACC has a high frequency of local or systemic recurrence; therefore, management of recurrence should be considered from the initial diagnosis. A challenge in the management of ACC is quantifying the risk of recurrence [10]. Patients at high risk of recurrence, defined by tumor size >8 cm, microscopic invasion of blood vessels or the tumor capsule, or a Ki-67 index >10%, should routinely be offered adjuvant mitotane and considered for radiotherapy to the adrenal bed [5]. Our patient, with stage III ACC and a proliferative index (Ki-67) of 35%, was advised and planned for adjuvant mitotane, but she was lost to follow-up and eventually presented with symptoms of locoregional recurrence a year after surgery. Previous studies have documented the use of mitotane in both primary and adjuvant therapy settings for ACC, concluding that the effectiveness of adjuvant mitotane following complete resection of ACC remains uncertain and that complete surgical excision is the best modality for long-term survival [11,12]. However, we cannot ascertain whether the early recurrence in our case was due to the tumor characteristics themselves and/or the lack of adjuvant mitotane therapy.
Complete resection, debulking surgery, and palliative radiotherapy have been proposed to treat locoregional recurrence [5]. Complete (R0) resection, when feasible, for recurrent and metastatic disease has been linked to long-term survival and offers significant palliative benefits, particularly in cases involving symptomatic steroid production [12]. Our patient had an isolated locoregional recurrence treated with en bloc excision of the involved organs. This approach aligns with prior studies suggesting that repeat surgery in carefully chosen patients can enhance survival outcomes for isolated locoregional recurrence [6].
In Nepal, the geographical distribution of tertiary care centers poses a significant challenge for comprehensive cancer management. The concentration of such facilities in the capital makes it difficult to maintain a seamless continuum of care. This case highlights the urgent need for improved multidisciplinary coordination among healthcare providers across different regions to enhance patient outcomes. In this instance, the patient initially underwent surgery in the Department of Urosurgery at a tertiary care center located 120 km from her home. She was subsequently scheduled for adjuvant mitotane therapy at a regional hospital but was lost to follow-up. This lapse in continuous care might have contributed to the locoregional recurrence of her condition. Eventually, she was referred to our hospital's Department of Surgical Gastroenterology, where a team of gastrointestinal surgeons performed the necessary surgery.
Conclusion
ACC has a high locoregional recurrence rate, and R0 resection of the recurrence is the best potential treatment, offering long-term survival. It should be performed in a specialized center, with close follow-up and adherence to mitotane therapy. Ensuring robust follow-up and facilitating better integration between surgical, oncological, and supportive care departments are essential steps toward addressing these challenges in the Nepalese healthcare setting.
Fig. 1. (Left: coronal; right: axial) CECT abdomen/pelvis showing a well-defined, heterogeneously enhancing soft tissue density lesion (red arrow) in the splenic parenchyma. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
HOW SOIL FORMING PROCESSES DETERMINE SOIL-BASED VITICULTURAL ZONING
The aim of this study was to elucidate the soil forming processes of representative vineyard soils and to discuss the implications for soil-based viticultural zoning at very detailed scale. The study area is located in the Priorat, Penedès and Conca de Barberà viticultural areas (Catalonia, North-eastern Spain). The studied soils belong to representative soil map units determined at 1:5,000 scale, according to the Soil Taxonomy classification. The soil forming processes, identified through morphological and micromorphological analyses, have significant effects on some soil properties. For example, the different processes of clay accumulation in soils developed from granodiorites in Priorat or gravel deposits in Conca de Barberà are primarily responsible for significant differences in clay content, available water capacity and cation exchange capacity. These soil properties, especially those related to the soil moisture regime, have a direct influence on vineyard management and grape quality. However, soil forming processes are not always reflected in soil classification, especially in soils modified by man. We show that climate or geology alone cannot be used in viticultural zoning at very detailed scale, unless soil forming processes are taken into account.
INTRODUCTION
During recent years, viticultural zoning studies have increased significantly in relation to the expansion of the international wine market. Viticultural zoning can be defined as the spatial characterization of zones that produce grapes or wines of similar composition, while enabling operational decisions to be implemented (Vaudour, 2003). Among the various environmental factors, and for a specific climate, soil is the most important factor in viticultural zoning, due to its direct effect on vine development and wine quality (Sotés and Gómez-Miguel, 2003). The soil properties with the most influence are the physical ones, namely the properties that control the soil water content (Seguin, 1986), due to their direct effect on the equilibrium between vegetative vigour and grape growing (Van Leeuwen and Seguin, 1994), and consequently on grape and wine quality (Esteban et al., 2001; Trégoat et al., 2002; Gurovich and Páez, 2004). In general, relationships between soil minerals and wine quality cannot be established (Seguin, 1986), except for nitrogen (Choné et al., 2001; Hilbert et al., 2003), unless severe deficiencies affecting vineyard growth occur (Van Leeuwen et al., 2004). For example, a calcium excess may be responsible for iron deficiencies (iron chlorosis), which can greatly affect grape production. However, some studies have shown an effect of soil cations on grape and wine quality (Peña et al., 1999; Mackenzie and Christy, 2005). The physicochemical properties of soils are determined by the soil forming processes under which they form (Ritter, 2006). Some soil forming processes, such as clay accumulation or mineral weathering, may have a great influence on soil physical properties, which are the most important for grapevine cultivation (Ubalde et al., 2007, 2009).
There are several approaches to soil studies oriented to viticultural zoning (Van Leeuwen et al., 2002), but the methods that provide the most information are soil survey techniques, since they yield both knowledge of the spatial variability of soil properties and soil classification according to viticultural potential (Van Leeuwen and Chery, 2001). Therefore, soil maps are usually used as the basic maps for zoning studies. In Dutt et al. (1981), distinct viticultural regions were determined by considering the soil temperature regime. Astruc et al. (1980) considered water availability as the most important factor, followed by carbonates and other chemical components. Morlat et al. (1998) considered the effective soil depth as the main property, since this is directly related to water availability for the roots. Many viticultural zoning studies note the importance of water availability, since this property integrates edapho-climatic, biological and human factors (Duteau, 1981; Sotés and Gómez-Miguel, 1992; Van Leeuwen et al., 2002). Soil survey methods based on the Soil Taxonomy classification (SSS, 1999) have been useful for viticultural zoning studies at different levels of detail (Gómez-Miguel and Sotés, 2001; Gómez-Miguel and Sotés, 2003; Ubalde et al., 2009). Soil forming processes, through their effects on edaphic properties and their implications for Soil Taxonomy, may be of great importance in a viticultural zoning based on soil surveys. However, as mentioned above, many viticultural zoning studies are based on the relationships between grape and wine quality and certain soil properties or different soil forming factors, namely climate (Coombe, 1987; Hamilton, 1989), geology (Van Schoor, 2001) and topography (Dumas et al., 1997), but there are no studies that consider possible relationships with soil forming processes.
In this study, representative soils from a very detailed soil survey, carried out for viticultural zoning purposes, were selected. The study area comprises high quality producing vineyards of Catalonia, namely the viticultural regions of Priorat, Conca de Barberà and Penedès. The relationship between soils and grape and wine quality in the study area is discussed elsewhere (Andrés-de-Prado et al., 2007; Ubalde et al., 2007, 2009). In this paper we analyze whether the soil forming processes, through their effects on soil properties and classification, deserve to be considered in a viticultural zoning based on soil surveys. To our knowledge, this approach has never been addressed before. In short, the aim of this study was to elucidate the soil forming processes of representative vineyard soils and to discuss the implications for soil classification and viticultural zoning.
MATERIALS AND METHODS
The study area comprises high quality producing vineyards located in different protected viticultural areas of Catalonia: Conca de Barberà, Priorat and Penedès. The area is enclosed approximately between 41º 3' N and 41º 48' N and between 0º 40' E and 1º 53' E. The altitude ranges approximately between 220 m and 550 m.
The study area has a long viticultural history, which in some cases started during the 4th century BC. Since the 1980s-1990s, grapevine cultivation systems have evolved into highly mechanized farms, which seek maximum profitability while maintaining high quality products. Thus, a widespread practice was the removal of old stone walls in order to obtain larger plots. In these cases, land levelling usually involved a change in the arrangement of soil horizons, sometimes leading to a decline in soil fertility.
The vineyards are situated on the Catalan Coastal Range and the Ebro Basin. The Catalan Coastal Range is an alpine folding chain formed by both massifs and tectonic trenches (Anadón et al., 1979). The Conca de Barberà soils are located on the footslope of the massif, named 'Serra de Prades' in this region. The soils are developed from gravel deposits of different ages, which are composed of siliceous Paleozoic materials (Silurian and Carboniferous slates and granites) (IGME, 1975a). The Priorat soils are located on the hillslope of the Priorat Massif, which is composed of Carboniferous slates and granodiorites (IGME, 1978). The slates are named 'llicorella' in this region, and they are considered mainly responsible for grape quality. The selected Penedès soils are located in 2 subdivisions, which can be called Upper Penedès and Middle Penedès. The Middle Penedès soils are located in a tectonic trench named the Penedès Basin, where calcareous Miocene materials (marls, conglomerates, limes) outcrop (IGME, 1982). The Upper Penedès soils are located on the Ebro basin margin, next to the Alt Gaià Massif. Calcareous materials from the Oligocene and Eocene predominate in this region (IGME, 1975b).
The climate is Mediterranean, characterized by a warm, dry summer, although there are differences in temperature and precipitation according to altitude and distance to the sea. The mean annual precipitation ranges from 520 mm in Penedès to 589 mm in Priorat, showing seasonal variations (Fig. 1). In all regions, the precipitation has a bimodal distribution (peaks in spring and autumn) and a minimum in summer, particularly in July. The highest temperatures occur in summer, particularly in July or August, while the lowest occur in winter (January). Comparing the regions, the warmest is Penedès, with an average annual temperature of 14.9 ºC, and the coolest is Priorat, with an average annual temperature of 12.7 ºC. The soil moisture regime is xeric and the soil temperature regime is mesic (Priorat and Conca de Barberà) or thermic (Penedès) (SSS, 1999).
The studied soils belong to soil map units determined according to the Soil Survey Manual of the United States Department of Agriculture (SSS, 1993), at very detailed scale (1:5,000). Soil map units were delineated as polygons from soil observations, which were selected according to different landforms and lithologies. The density of soil observations was 1 observation per cm² of map, of which one sixth corresponded to soil pits and the rest to soil auger holes. Soil profiles were described to a depth of 200 cm or to a root-limiting layer, whichever was shallower. In some cases, a micromorphological study was undertaken in order to clarify or identify pedogenic processes which were difficult to detect with the naked eye. For the micromorphological study, thin sections were prepared from undisturbed soil material according to Benyarku and Stoops (2005). Samples were taken from deep horizons, since surface horizons were disturbed by ploughing. One to two samples were collected for each selected profile. We described a total of 23 thin sections from 19 soil profiles and 8 soil map units. The criteria of Stoops (2003) were used in thin section description.
When the soil profiles were fully characterized, they were classified according to Soil Taxonomy (SSS, 2006) at series level. Each series consists of soil layers that are similar in colour, texture, structure, pH, consistence, mineral and chemical composition, and arrangement in the profile. In the study area, every 3 to 4 soil profiles belonged to one soil series, on average.
The soil series were used to delineate the soil map units (SMU), following the criteria of Van Wambeke and Forbes (1986). The soil survey party plotted the map unit boundaries onto orthophotographs. These boundaries were determined by means of soil observations, looking for differences in slope gradient, landform, colour and stoniness. When all SMU were delineated, they were listed and codified, and the soil map legend was designed. The final number of SMU was approximately twice the number of soil series. The mean surface of the delineated SMU was 1.4 hectares.
Significant differences among soil series were analysed by ANOVA, considering the analytical properties of the soil series as dependent variables and the soil series as categorical factors. This analysis was done for each horizon separately. Means were separated by Newman-Keuls post-hoc analysis (p < 0.05). The software used was STATISTICA (StatSoft, Inc.).
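As an illustrative sketch only (the authors used STATISTICA, and Newman-Keuls is not available in common Python libraries, so Tukey's HSD is shown as a stand-in post-hoc test; the column names are assumptions):

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_series(df: pd.DataFrame, horizon: str, prop: str):
    """One-way ANOVA of a soil property across soil series, per horizon."""
    sub = df[df["horizon"] == horizon]
    groups = [g[prop].values for _, g in sub.groupby("series")]
    f, p = stats.f_oneway(*groups)                       # overall F-test
    posthoc = pairwise_tukeyhsd(sub[prop], sub["series"], alpha=0.05)
    return f, p, posthoc

# Hypothetical usage: clay content in the Ap horizon across series.
# f, p, posthoc = compare_series(soils, horizon="Ap", prop="clay_pct")
```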
RESULTS AND DISCUSSION
In this study, a wide range of soil forming processes was identified in the vineyard soils of Catalonia, which is reflected in their classification. The studied soils belong to the Entisol, Inceptisol and Alfisol orders (SSS, 2006), according to a wide variety of soil forming processes and their resulting diagnostic horizons and characteristics (Table 1). Entisols are characterized by little or no evidence of soil formation, so that no diagnostic horizons are developed, except for an ochric epipedon. Within that order, the suborders found are Orthents, Fluvents, Psamments and Arents. Orthents are formed on recent erosional surfaces, and most of them are shallow soils with a root-limiting layer (lithic or paralithic contact). Fluvents are formed in alluvial and colluvial parent materials, and are characterized by being deep soils, which are rich in organic matter at depth. Psamments are characterized by being sandy. Arents are anthropogenic soils, deeply mixed by human earth-moving; they should present fragments of diagnostic horizons not arranged in any discernible order. Inceptisols are characterized by being in the early stages of soil formation. These soils may undergo distinct accumulation processes of carbonates and gypsum, or simply show evidence of physicochemical transformations or removals. Soils with well-developed carbonate accumulations (calcic horizon) or cementations (petrocalcic horizon) are classified as Calcixerepts, and soils with gypsum accumulations (gypsic horizon) are classified as Gypsic Haploxerepts. The Haploxerept group is also used when accumulation processes are too incipient to form calcic or gypsic horizons, or when a change of colour occurs; in this case, the diagnostic horizon described is cambic. Finally, Alfisols are characterized by silicate clay illuviation (argillic horizon). In the study area, these soils may present carbonate accumulations (calcic horizon) covering the clay accumulations. The presence of carbonates in the parent material determines the carbonate accumulation processes identified in the Penedès area, which are much more intense than those of Priorat and Conca de Barberà. Calcium carbonate accumulations in soils are possible thanks to a Mediterranean climate, which is responsible for seasonal soil water deficits. However, some processes, such as clay illuviation in calcareous soils, can only be explained by a wetter relict climate, which would have allowed substantial base leaching and slight acidification. The time effect can be observed in the Conca de Barberà soils, which are developed from colluvial deposits of different ages but the same origin (IGME, 1975a). In modern colluvial deposits, the most developed soil forming process is in situ clay neoformation. However, a process of clay illuviation and then a process of secondary carbonate accumulation could take place in the old colluvial deposits. Obviously, variations in climate over time have strongly influenced these processes. Regarding the relief factor, Priorat soils on hillslopes or Penedès soils on valley bottoms were more exposed to processes of soil rejuvenation than Conca de Barberà soils on more stable positions. The main effects of biological activity are related to bioturbation; however, biogenic carbonate accumulations are described in the Penedès region. Finally, human activity has a strong influence on the formation of some soils of the study area. The most aggressive activities are related to land levelling and terracing. Soil tillage and the application of fertilizers and manures also affect surface horizons.
Soil forming processes in Priorat
The selected Priorat soils are developed in the Priorat Massif, which is composed mainly of Paleozoic slates intruded by granodiorite veins in some areas. Generally, these soils are poorly developed, that is, they show little evidence of soil formation. This is because they formed on recent erosional surfaces (hillslopes), with shallow parent materials that greatly affect soil properties. Moreover, the properties of the parent materials are not particularly favourable for the development of soil structure. The slates are highly exfoliated, favouring high rock fragment contents, and the weathering product of the granodiorites is granitic sand, named 'sauló' in the study area, which greatly hinders the aggregation of particles (soil structure formation).
As mentioned above, soils developed from granodiorites are characterized by being shallow and having a very high sand content, in accordance with the parent material composition. The parent material is a granitic regolith up to 2-5 m thick, a product of in situ alteration of the granodiorite, corresponding to a sandstone formation with a small proportion of clay and silt (IGME, 1978). This sandstone can be broken up with a shovel, but it is too compact to permit root development. The parent material is composed of eye-visible crystals of quartz, feldspar (plagioclase and orthoclase) and mica (biotite) (Fig. 2). These minerals are generally unaltered, but locally some biotite crystals are transformed to chlorite and vermiculite. Generally, this regolith is light-coloured, but in some cases it is strongly rubefacted. This red colour is related to clay accumulations, whose origin is mainly biotite alteration, which resulted in pseudomorphic units of oriented clay (Fig. 3). However, some clay could have an illuvial origin, as suggested by McKeague (1983) in similar soils. The clay pedofeatures are pure microlaminated coatings on sand grains (0.05-0.1 mm width).
On the other hand, soils developed from slates are shallow and have high rock fragment contents, representing a strong limitation to root development. However, the parent material, composed of iron and magnesium silicates, presents a planar exfoliation that roots can use for their development. In addition, clay accumulation processes are found in some cracks, creating intercalations of clayey material in the rock (Fig. 4). These intercalations can account for up to 15% of the total slate volume. The described pedofeatures are coatings and infillings of clay in the pores and cracks of coarse components. In all these types of accumulation, the clay is pure, that is, it does not contain other particle sizes (silt). The accumulations show a microlaminated internal fabric, sometimes hard to see. The origin of the clay is probably illuvial, as it meets the characteristics of an ideal argillic horizon (McKeague, 1983): continuous coatings on both sides of the pores, strongly oriented, with microlamination, without sand grains, and clearly different from the matrix, which does not contain any fragments of oriented clay.
In all soils, redoximorphic mottles of iron and manganese are described in relation to the clay accumulations. The pedofeatures are impregnative nodules associated with pores and coarse components. These nodules are dark, with a gradual boundary, an irregular shape and a diameter between 0.1 and 0.4 mm. They indicate incipient hydromorphy of limited extent, caused by perched water tables made possible by the high clay content.
The Priorat soils are classified as Entisols, since the soil forming processes are not sufficiently developed to produce any diagnostic horizon other than an ochric epipedon. In general, soils developed from granodiorites are classified as Xeropsamments, which are characterized by a texture coarser than loamy fine sand and less than 35% rock fragments (Table 2). However, soils developed from rubefacted granitic regolith are classified as Typic Xerorthents. These soils cannot be classified as Alfisols, since evidence of illuvial clay is required for an argillic horizon, and in this case the clay originates from biotite alteration. Moreover, these soils cannot be classified as Inceptisols, because the deep horizons maintain the rock structure, and consequently the criteria for a cambic horizon are not met. With respect to soils developed from slates, they are classified as Lithic Xerorthents, in spite of presenting an exfoliated rock with intercalations of material enriched in illuvial clay. There is a subgroup in the Alfisols, named Lithic ruptic-inceptic Haploxeralfs, defined by a lithic contact and a horizontally discontinuous argillic horizon. However, in the studied soils, the thickness of the material with illuvial clay is generally less than 7.5 cm, so the criteria for an argillic horizon are not met.
In the Priorat soils, clay formation improves the soil water reservoir for the vineyard, which is especially important in a stressful environment, given a Mediterranean climate with a dry, warm growing season, soil shallowness, and high contents of gravel or sand, which confer very rapid internal soil drainage. In soils developed from slates, the available water capacity (AWC) of the plough horizons is moderate (42.4 mm between 0 and 45 cm depth), so the water retained by the clay-rich materials among the rock cracks is worth considering (23.3 mm between 45 and 138 cm depth) (Table 2). Moreover, the presence of redoximorphic features related to the clay features indicates that clay accumulation is altering the soil moisture regime. In soils formed from granodiorites, processes with major implications for grapevine cultivation are also identified. These soils, in addition to being shallow, are composed almost entirely of sand (Table 2), so there are no silt or clay particles to retain water. As a result, they impose greater water stress than soils formed from slates, because they have a very low AWC (12.4 mm between 0 and 42 cm depth). The existence of rubefacted granodiorites with neoformed clay (Typic Xerorthents) results in soils with finer textures and a significant increase in AWC (45.8 mm between 0 and 37 cm depth) in comparison with the non-rubefacted Xeropsamments. Another soil property improved by clay accumulation is the cation exchange capacity (CEC) of the surface and deep horizons. In surface horizons, the CEC increases significantly from 4.4 to 9.9 cmolc/kg. This increase represents a substantial improvement in nutrient availability for the vine and in the potential for developing soil structure and stable aggregates, which is especially important in these soils, which are poor in organic matter (contents below 0.5%). In short, clay accumulations significantly improve the AWC and CEC, although they do not always involve major changes in soil classification.
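As a rough illustration of how such AWC figures are assembled (the formula and the example values are standard soil-physics assumptions, not data taken from the paper):

```python
def profile_awc(horizons):
    """Sum available water capacity (mm) over horizons.

    Each horizon is (thickness_mm, theta_fc, theta_wp, rock_frac):
    volumetric water content at field capacity and wilting point,
    and the stone volume fraction, which holds no available water.
    """
    return sum(t * (fc - wp) * (1.0 - rf) for t, fc, wp, rf in horizons)

# Hypothetical 0-45 cm plough layer on slates: 450 mm thick,
# theta_fc = 0.22, theta_wp = 0.10, 20% rock fragments.
print(profile_awc([(450, 0.22, 0.10, 0.20)]))  # ~43.2 mm, near the reported 42.4
```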
Soil forming processes in Penedès
The Penedès soils differ from the soils of the other areas in their parent materials, which are richer in calcium carbonate, so carbonate-related soil forming processes are better represented. Much of the carbonate accumulation is due to the precipitation of calcite from saturated solutions, leached from upper horizons or supplied by lateral water flow caused by an impervious horizon. However, some carbonate accumulations come from biological activity, which causes a carbonate microdistribution around biopores (Boixadera et al., 2000). The features of biological accumulation are infillings of citomorphic calcite (quesparite) in pores (Fig. 5). The features of carbonate illuviation are representative of different degrees of calcification. First, a process of crystallization produces acicular crystals and a few hypocoatings of micrite and microsparite (Fig. 6). Then, a process of recrystallization produces abundant coatings and well-developed hypocoatings, pendants, nodules and infillings of sparite and microsparite (Fig. 7). Later, carbonates (micrite) begin to occupy the micromass. In this stage, processes of displacement and replacement of grains or clay coatings by carbonates can occur. The most evolved stage corresponds to carbonate cementation (petrocalcic horizons).
Besides carbonate accumulation, processes of gypsum accumulation are found in the Upper Penedès soils. The gypsum-related features are coatings of lenticular crystals. In addition, mixed silt and clay hypocoatings around pores and coarse components are common in clayey soils. These features correspond to whole-soil hypocoatings (Fitzpatrick, 1990; 1993), originated by the downward flow of a suspension of fine material, which may disperse after a single rain. It is a characteristic feature of clayey, continuously cultivated soils, which lose their structure, crack, and form wide planar vertical pores.
Most of the Penedès soils are classified as Inceptisols, because the carbonate or gypsum accumulations are sufficiently expressed to identify calcic, petrocalcic or gypsic horizons. Generally, they are classified as Typic Calcixerepts, Petrocalcic Calcixerepts and Gypsic Haploxerepts, respectively. However, not all soils with carbonate accumulations can be classified as Calcixerepts, since they do not meet the criteria for a calcic horizon. A calcic horizon requires a minimum thickness of 15 cm, a minimum CaCO3 content of 15% and identifiable secondary calcium carbonate, with some exceptions. Some of the described soils show incipient accumulations or too low a CaCO3 content. Generally, these accumulations lead to cambic horizons, and the soils are classified as Typic Haploxerepts. In some cases, where carbonate accumulations are not visible to the naked eye, a cambic horizon cannot even be determined, and the soils are classified as Entisols. Moreover, accumulations of mixed silt and clay (whole-soil hypocoatings) have no bearing on soil classification. Table 3 shows the analytical properties of a soil with a well-developed calcic horizon (Typic Calcixerept), a soil with incipient carbonate accumulations and accumulations of mixed silt and clay (Typic Xerofluvent), as well as a soil with a gypsic horizon (Gypsic Haploxerept).
The soil forming processes in Penedès are marked by the accumulation of secondary carbonates, which can be highly evolved, as indicated by the types of accumulation and their morphology. This evolution is reflected in the calcium carbonate content, with mean values near 60% (Table 3), and in carbonate cementations. The evolution of carbonates in these soils may be a limiting factor for grapevine cultivation. High calcium carbonate contents can weaken non-resistant vines through iron chlorosis: the carbonates increase the concentration of the HCO3- anion in the soil solution, which blocks the absorption of iron by plants. The main consequences are stunted growth, foliage destruction, reduced production and even the death of the plant. These problems may be mitigated by the choice of resistant rootstocks, such as 41B and 140R. Furthermore, very intense processes of carbonate accumulation, which lead to micromass cementation, may constitute a limitation for the development of the root system. Moreover, carbonate accumulations in the form of nodules increase the coarse fragment content and thus reduce the available water capacity (AWC). In the deep horizons of a Typic Calcixerept, a loss of 11 mm of AWC can be quantified (between 50 and 100 cm depth), considering a volume of 20% of carbonate accumulations. However, the main implications of carbonate accumulations for vineyard management are related to rootstock selection and ploughing, which should not be too deep, to prevent mixing of the calcic horizons with the surface horizons.
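The quoted 11 mm loss is consistent with simple displacement arithmetic if one assumes the fine earth holds about 0.11 mm of available water per mm of soil (an assumed figure, not given in the paper):

```python
layer_mm = 500        # the 50-100 cm depth slice
nodule_frac = 0.20    # volume occupied by carbonate nodules
awc_per_mm = 0.11     # assumed available water per mm of fine earth

loss_mm = layer_mm * nodule_frac * awc_per_mm
print(loss_mm)        # 11.0 mm of AWC displaced by the nodules
```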
Soil forming processes in Conca de Barberà
The selected Conca de Barberà soils are developed on gravel deposits coming from the 'Serra de Prades' massif, mainly Carboniferous slates and sandstones with granodiorite intrusions. During the Quaternary, these deposits covered the Ebro basin margin in the form of alluvial fans, which left two types of gravel deposit: ancient deposits, perched at a considerable height above the current river bed, and modern deposits, slightly above the current river bed and connected with the fluvial terraces.
The modern deposits correspond to extensive, flattened alluvial cones, merged with each other, which are formed of Paleozoic materials (mainly slates) with little matrix. In soils developed from these deposits, a process of clay accumulation is identified, in the form of coatings (<0.05 mm) and infillings (<0.25 mm) of clay covering the pores and the sides of coarse components (Fig. 8). These coatings are quite impure, with embedded silt and clay. The clay probably originates from neoformation through mica alteration, since many coatings with embedded altered mica crystals can be observed. Many of these coatings are fragmented and incorporated into the micromass. Other authors have found that under these conditions the clay probably originates from neoformation from mica (Mermut and Jongerius, 1980; McKeague, 1983).
The old deposits were formerly much more extensive, so that only some vestiges are now preserved. These deposits have a thickness of 3-4 m and are formed of highly weathered polygenic gravels (granodiorites, sandstones and slates) and a reddish cement composed of clay and sand. In these soils, processes of clay and carbonate illuviation are identified. The textural features are coatings and infillings of microlaminated pure clay, up to 0.8 mm wide, covering the cracks and the sides of coarse components and pores (Fig. 9). Many of these coatings are fragmented and incorporated into the micromass, so few of them are related to present-day pores. The carbonate-related features described in the field are carbonate pendants (up to 15 mm wide); microscopically, the described features are microsparite coatings and infillings in pores, sometimes on clay coatings, sparite pendants, micritic nodules and fragments of laminar petrocalcic horizons. Both soils present redoximorphic features in the form of suborthic manganese nodules, between 0.1 and 1 mm in diameter, with rounded shapes and clear limits (not impregnative). The nodules in the old deposits are more frequent and more altered than those in the modern deposits. In general, the presence of these nodules indicates incipient hydromorphy; however, in the old deposits a paleohydromorphy seems more probable.
The soils of the modern deposits are classified as Inceptisols, since the rubefaction process associated with the clay accumulation is sufficiently developed to identify a cambic horizon. The soils of the old deposits are classified as Alfisols, which are characterized by illuvial clay accumulation. The classification at subgroup level is Calcic Palexeralfs, since the carbonate accumulations are sufficiently developed to define a calcic horizon. Thus, it is evident that a longer period of soil formation allows a greater number of soil forming processes in these deposits. Comparing the two, the soils of the old deposits are redder, with significantly higher clay content, available water capacity (in deep horizons) and cation exchange capacity (Table 4). However, their deep horizons are significantly more compacted, which constitutes a major limitation to root development.
The soil forming processes of these soils have a direct influence on their physical properties. In the modern gravel deposits, which a priori could have very rapid drainage because of the gravels, clay neoformation makes more balanced textures possible, allowing a moderate to high available water capacity (AWC). These soils have very favourable properties for grapevine cultivation, as the balanced textures ensure a minimum of water retention and the gravels facilitate the drainage of water surpluses. In addition, these soils favour the development of a deep root system, so that in drought years the roots can take water from deep water tables. In the old gravel deposits, the properties are less favourable, because the roots have more difficulty exploring the deep horizons. This is due to greater compactness, related to the higher clay content. Moreover, the presence of fragments of laminar petrocalcic horizons and other forms of accumulation, representative of a long genetic process, is indicative of possible problems related to micromass cementation. However, these soils have a moderate to high AWC, conferred by the clayey matrix and the relatively porous rock fragments, which are capable of retaining water.
Soils affected by human activities
In the study area, some soils are found where human activity has caused major changes in soil composition. The main changes are related to topography and horizon arrangement. The main effects described as a result of these changes are the burial of fertile surface horizons and horizons not arranged in any discernible order. One of the features used to identify human activity is the presence of very abrupt boundaries between horizons. Thus, despite the drastic change in profile composition, the anthropic origin of the soil is not reflected in its classification.
Soils formed by land levelling may have serious erosion problems and often show negative effects on the productivity and vigour of the vines, and also on grape quality, especially in white varieties, due to a decrease in acidity and aromatic potential (Bazzofi et al., 2009). However, soils deeply modified by man cannot always be considered worse than unaltered soils, because grape quality is sometimes better in less fertile soils, especially for red varieties. Bazzofi et al. (2009) found a significant increase in the anthocyanin and total polyphenol content of grape berries on soils affected by land levelling, improving grape quality for red wines. Moreover, in the table grape production areas of Sicily (Italy), extensive earthworks are conducted to bury fertile surface horizons under calcareous materials in order to improve grape quality (Dazzi, 2008).
CONCLUSIONS
In the region of the Catalan Coastal Range (Catalonia, Spain), a wide variety of soil forming processes has been identified, in relation to the existing differences in soil forming factors. In this study, we found that the soil forming processes, identified through morphological and micromorphological analyses, have significant effects on soil properties. The different processes of clay accumulation in soils developed from granodiorites in Priorat or gravel deposits in Conca de Barberà are primarily responsible for significant differences in clay content, available water capacity and cation exchange capacity. Similarly, carbonate accumulation in the Penedès soils has significant effects on calcium carbonate content and also on available water capacity. These soil properties, especially those related to the soil moisture regime, available water capacity and calcium carbonate content, have a direct influence on the type of management and the quality of grapevine production, according to different authors. Especially important are the effects of drastic earthworks on profile characteristics. However, soil classification does not always reflect these important pedogenic processes, which have remarkable implications for vineyard soil management.
This is the case, for instance, for clay accumulations in soils developed from slates in Priorat, incipient carbonate accumulations in Penedès, or drastic changes in the arrangement of horizons, with a decrease in soil fertility, in soils modified by man. The main conclusion of this study is that parent material or climate alone cannot be used in viticultural zoning at very detailed scale, unless soil forming processes are taken into account.
Figure 2. Mineral composition of granitic regolith (quartz in pure white, feldspars in impure white, mica in dark), with mica alteration in the centre of the picture (3.36 mm width, PPL).
Table 1. Classification and main characteristics of vineyard soils in the Catalan Coastal Range.
Table 2. Analytical properties of representative vineyard soils in the Priorat region.
Table 3. Analytical properties of representative vineyard soils in the Penedès region. Numbers in brackets are standard errors; different letters indicate significant differences (p < 0.05) among the same horizons of different soil series, according to the Newman-Keuls test (n = 3).
Table 4. Analytical properties of representative vineyard soils in the Conca de Barberà region. Numbers in brackets are standard errors; different letters indicate significant differences (p < 0.05) among the same horizons of different soil series, according to the Newman-Keuls test (n = 3).
A Constituent Picture of Hadrons from Light-Front QCD
It may be possible to derive a constituent approximation for bound states in QCD using hamiltonian light-front field theory. Cutoffs that violate explicit gauge invariance and Lorentz covariance must be employed. A similarity renormalization group and coupling coherence are used to compute the effective hamiltonian as an expansion in powers of the canonical QCD running coupling constant. At second order the QCD hamiltonian contains a confining interaction, which is being studied using bound state perturbation theory. Explicit constituent masses appear because of symmetry violations, and confinement also produces mass gaps, leading to the possibility of an accurate non-perturbative constituent approximation emerging in light-front QCD.
Introduction
The solution of Quantum Chromodynamics in the non-perturbative domain remains one of the most important and interesting unsolved problems in physics. QCD is believed to be the fundamental theory of the strong interaction, but even its definition in the non-perturbative domain is problematic. There are many sources of difficulty, but they can all be traced to the fact that QCD is formulated as a theory of an infinite number of degrees of freedom that span an infinite number of energy scales.
The basic assumption upon which all of our work is based is that it is possible to derive a constituent picture for hadrons from QCD [1][2][3][4][5]. If this is possible, non-perturbative bound state problems in QCD are approximated as coupled, few-body Schrödinger equations. For a meson, we then have

$$P^- \,|\Psi\rangle \;=\; \frac{P_\perp^2 + M^2}{P^+}\,|\Psi\rangle, \qquad (1)$$

where

$$|\Psi\rangle \;=\; \phi_{q\bar q}\,|q\bar q\rangle \;+\; \phi_{q\bar q g}\,|q\bar q g\rangle \;+\; \cdots. \qquad (2)$$

Here $P^-$ is the light-front hamiltonian, $P_\perp$ is the total transverse momentum, $P^+$ is the total longitudinal momentum, and $M$ is the invariant mass of the state. We assume that to 'leading order' a low-lying meson can be approximated as a quark/antiquark pair, with additional quarks and gluons producing 'perturbative' corrections that can be systematically computed. Many severe problems must be overcome to arrive at this formulation of the bound state problem; however, the final advantages are huge. The result is a formulation of the non-perturbative problem in a form directly accessible to physical intuition, which has proven essential for guiding approximations in atomic calculations. Variational methods and large matrix diagonalization are powerful numerical tools that can be used after the hamiltonian is determined.
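As a generic toy illustration of that last point (not the authors' code, and the matrix here is random rather than derived from QCD), once an effective few-body hamiltonian has been discretized in a basis, the low-lying spectrum follows from standard diagonalization:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(200, 200))
# Toy hermitian 'effective hamiltonian': free energies on the
# diagonal plus weak random mixing; entirely illustrative.
H = (A + A.T) / 2 + np.diag(np.linspace(1.0, 10.0, 200))

eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues in ascending order
print(eigvals[:5])                     # lowest-lying 'bound state' levels
```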
I must emphasize that it is not our intent to simply force the constituent approximation on the theory by employing a Tamm-Dancoff truncation on the number of particles ab initio. We worked on such an approach initially [6], and gained valuable insights; but it became clear that we have no good method of controlling the nonlocalities resulting from particle number truncation without a dynamical mechanism that naturally limits the number of particles in a state.
Any student of field theory should immediately be suspicious of the possibility that a constituent approximation can arise, although QED provides an important accepted example that guides much of our work [2]. How can a constituent approximation arise in any field theory?
Fock space is extremely large, an infinite sum of cross products of infinite dimensional Hilbert spaces. It is not obvious that the low-lying eigenstates should have significant support only in the few-body sectors of Fock space. In fact, this simply does not happen in perturbation theory. In perturbation theory high-energy many-body states do not decouple from low-energy few-body states. Consider an electron mixing with high-energy electron/photon states. The error made by simply throwing away the high energy components of the state is infinite. Moreover, there are an infinite number of scales and both the electron and photon that 'dress' the low-energy bare electron are in turn dressed by additional pairs, ad infinitum.
The lesson here is quite old. Without regularization and renormalization a constituent picture is impossible. Renormalization may allow us to move the dynamical effects of high-energy, many-body states from the eigenstate to effective interactions between effective quarks and gluons.
Low-energy many-body states also do not decouple from low-energy few-body states. In fact, it is common lore that hadrons are excitations on an extremely complicated vacuum. Students of QCD expect the infinite-body vacuum to be an integral part of every hadron eigenstate. This is the problem that leads us to use light-front coordinates, just as it motivated the use of the infinite momentum frame for the formulation of the parton model. In light-front coordinates, every physical particle trajectory carries non-negative longitudinal momentum $p^+$, because all velocities are equal to or less than the velocity of light. Since longitudinal momentum is conserved, the only states that can mix with the zero-momentum bare vacuum are those in which every bare parton has identically zero longitudinal momentum. For a free particle of mass $m$, the light-front energy is

$$p^- \;=\; \frac{p_\perp^2 + m^2}{p^+}.$$

This energy diverges as $p^+$ approaches zero, which must happen as the number of particles grows for fixed total longitudinal momentum. Thus, in light-front coordinates all many-body states become high energy states, leading us back to the original problem of replacing the effects of high energy states with effective interactions. This argument is naive, but there is little profit in elaborating further at this point. Finally, manifest gauge invariance and manifest covariance apparently require all states to contain an infinite number of particles. This is most easily seen, for example, by considering rotation operators. Rotations are dynamical in light-front coordinates and the generators contain interactions that change particle number. No state with a finite number of particles transforms correctly under rotations. We use cutoffs that violate these symmetries, which must then be repaired by effective interactions that remove all cutoff dependence. The constituent approximation is possible only if these symmetries are also treated approximately. Proposing the violation of manifest gauge invariance is heresy in the QCD community, but heresy sometimes leads to progress in science.
There is a long list of questions concerning how a constituent approximation can arise in QCD, but I mention only one: how can confinement emerge without a complicated vacuum? Since we have a hamiltonian, we can use a variational calculation to study what happens as a quark/antiquark pair is separated to infinity. Since the addition of gluons can only lower the energy, we must find that

$$\lim_{R \to \infty}\; \langle\, q\bar q\,(R)\,|\, P^- \,|\, q\bar q\,(R)\,\rangle \;\longrightarrow\; \infty.$$

Here $R$ is the quark separation, and the only way this matrix element can diverge is if the hamiltonian contains a two-body interaction that diverges. We will see below that this apparently happens. This discussion is not intended to convince the skeptical reader that a constituent approximation is valid. However, the assumption that a constituent picture emerges from QCD provides strong guidance. A hamiltonian approach is indicated. Cutoffs that limit the mixing of high and low energy states are required, and they must violate explicit rotational covariance and gauge invariance. All non-perturbative effects attributed to the vacuum in other approaches must appear directly in few-body effective interactions.
Light-Front Renormalization Group
The renormalization of the hamiltonian and all other dynamical observables begins with the observation that no physical result can depend on the cutoff. In the Schrödinger equation, the eigenvalue, $M$, cannot depend on the cutoff. The hamiltonian, $P^-$, must depend on the cutoff, as does the eigenstate. Wilson's renormalization group was formulated starting with the observation that physical matrix elements cannot depend on the cutoff, and we have adapted his approach to the light-front problems we face [7]. It is not possible to discuss cutoff-independence if the cutoff is fixed, so the central operator in Wilson's renormalization group is a transformation that lowers the cutoff. Given a transformation, $T$, that lowers the cutoff by a factor of 1/2, for example, we can define a renormalized hamiltonian to be one which has a finite cutoff but results from an infinite number of transformations. The transformation determines what operators must be precisely controlled for this limit to exist. Near a fixed point (i.e., a hamiltonian that does not change under the transformation), these operators can be classified as relevant and marginal.
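To make this limit concrete, it can be written in one line; the notation ($T$ for the cutoff-halving transformation, $H^B$ for the bare hamiltonian, $H^R$ for the renormalized one) is ours rather than the author's:

```latex
% Renormalized hamiltonian at cutoff \Lambda: start from a bare hamiltonian
% at the much higher cutoff 2^n \Lambda, apply the cutoff-halving
% transformation T a total of n times, and take n to infinity while the
% bare couplings are retuned at each starting scale so the limit exists.
H^{R}_{\Lambda} \;=\; \lim_{n \to \infty} \, T^{\,n}\!\left[ H^{B}_{\,2^{n}\Lambda} \right]
```

The operators that must be 'precisely controlled' are exactly those bare couplings whose retuning is needed for this limit to exist.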
In the perturbative regime relevant and marginal operators are determined by their naive engineering dimension. In light-front field theory there is no longitudinal locality, only transverse locality, so it is the transverse dimension of an operator that determines its classification. However, while there are a finite number of relevant and marginal operators in equal-time field theory, the violation of longitudinal locality in light-front field theory implies that ratios of longitudinal momenta can appear, allowing entire functions of longitudinal momentum fractions to appear in each relevant and marginal operator. At first sight this appears to be a disaster; however, one paradox we faced above was how complicated interactions associated with non-perturbative effects such as confinement could arise in few-body operators. This is possible because of the violation of longitudinal locality.
To develop a light-front renormalization group we must decide what cutoff to implement and then derive a transformation that runs this cutoff. It is possible to use a cutoff on the total invariant mass of states, as is commonly done in DLCQ calculations for example; however, such cutoffs lead to strong spectator dependence and to small energy denominators in the resultant effective interactions. We use a cutoff on the change in free energy. If the hamiltonian is viewed as a matrix, such a cutoff limits how far off the diagonal nonzero matrix elements can appear.
There is not enough space to elaborate the transformation that runs this cutoff, so I refer the reader to the literature [1,2,4,8,9]. The transformation is unitary, leading to what Głazek and Wilson call a similarity renormalization group [8,9].
Suppose the hamiltonian with cutoff Λ has matrix elements

$$H^{\Lambda}_{ij} = E_{0i}\,\delta_{ij} + v_{ij}\,, \qquad v_{ij} = 0 \;\;\text{if}\;\; |E_{0i} - E_{0j}| > \Lambda \,.$$

If this cutoff is lowered to Λ′, the new hamiltonian matrix elements are, to second order,

$$H^{\Lambda'}_{ij} = H^{\Lambda}_{ij}\,\theta\big(\Lambda' - |E_{0i} - E_{0j}|\big) + \frac{1}{2} \sum_k v_{ik}\, v_{kj} \left[ \frac{1}{E_{0i} - E_{0k}} + \frac{1}{E_{0j} - E_{0k}} \right], \qquad (9)$$

where the sum runs only over intermediate states $k$ whose couplings to $i$ or $j$ have free energy differences between Λ′ and Λ. To follow the details of the discussion it is important to remember that there are implicit cutoffs in this expression, because the matrix elements of $v$ have already been cut off, so $v_{ik} = 0$ whenever $|E_{0i} - E_{0k}| > \Lambda$. It is rather easy to understand this result qualitatively. We have removed the coupling between degrees of freedom whose free energy difference is between Λ′ and Λ, so the effects of these couplings are forced to appear in the new hamiltonian as direct interactions. To first order, the new hamiltonian is the same as the old hamiltonian, except that couplings of states with energy differences between Λ′ and Λ are now zero. To second order, the new hamiltonian contains a new interaction which sums over the second-order effects of the couplings that have been removed. The second-order term in the new hamiltonian resembles the expression found in second-order perturbation theory, which is not surprising since the new hamiltonian must produce the same perturbative expansion for eigenvalues, cross sections, etc. as the original hamiltonian.
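A minimal numerical sketch of one such cutoff-lowering step may help. Everything in it (the matrix size, the random couplings, the cutoff values, and the symmetrized second-order denominators) is our illustrative choice rather than the paper's calculation; the point is only that the low-lying spectrum survives to second order in $v$ even after the far-off-diagonal couplings are removed.

```python
import numpy as np

# Toy similarity-renormalization step on a random hermitian "hamiltonian".
rng = np.random.default_rng(0)
N = 40
E0 = np.sort(rng.uniform(0.0, 10.0, N))      # free energies E_{0i}
v = rng.normal(0.0, 0.05, (N, N))
v = 0.5 * (v + v.T)                          # weak hermitian coupling v_{ij}

LAM, LAM_P = 4.0, 2.0                        # old and new cutoffs
dE = np.abs(E0[:, None] - E0[None, :])       # free energy differences
v[dE >= LAM] = 0.0                           # implicit cutoff already on v

# Keep couplings below the new cutoff and add the second-order interaction
# that sums the effects of the couplings removed between LAM_P and LAM.
H_new = np.diag(E0) + np.where(dE < LAM_P, v, 0.0)
for i in range(N):
    for j in range(N):
        if dE[i, j] >= LAM_P:
            continue                          # element stays zeroed
        for k in range(N):
            if LAM_P <= dE[i, k] < LAM:       # coupling i-k was removed
                H_new[i, j] += 0.5 * v[i, k] * v[k, j] / (E0[i] - E0[k])
            if LAM_P <= dE[j, k] < LAM:       # coupling j-k was removed
                H_new[i, j] += 0.5 * v[i, k] * v[k, j] / (E0[j] - E0[k])

# Low eigenvalues agree to O(v^2) although H_new is band diagonal.
print(np.round(np.linalg.eigvalsh(np.diag(E0) + v)[:5], 4))
print(np.round(np.linalg.eigvalsh(H_new)[:5], 4))
```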
Equation (9) shows how the hamiltonian changes when the cutoff is lowered, and the next step is to determine from this change what hamiltonians can emerge from an infinite number of transformations. The simplest result is a fixed point hamiltonian, one which does not change under the transformation. In 3+1 dimensions the only known fixed points are free field theories. Coupling coherence is a generalization of the fixed point idea [2,7,10]. A coupling coherent hamiltonian reproduces itself in form, but one or more couplings run while all additional couplings are invariant functions of these running couplings. For example, in QCD the canonical coupling runs at third order. To second order in this coupling, all interactions must reproduce themselves exactly, with Λ → Λ′. It is not trivial to implement this simple-sounding constraint, but at each order it determines the hamiltonian. In all calculations to date the resultant hamiltonian is unique, and all broken symmetries are restored to the order at which the hamiltonian is fixed.
To second order, a generic coupling coherent hamiltonian that contains $v$ must also contain a second-order interaction built from $v$, of the schematic form

$$\frac{1}{2} \sum_k v_{ik}\, v_{kj} \left[ \frac{1}{E_{0i} - E_{0k}} + \frac{1}{E_{0j} - E_{0k}} \right],$$

with the sum running over intermediate states whose couplings lie above the cutoff. Note that $v$ in these expressions is the same as that above only to first order. The coupling coherent interaction in $H$ is written as a power series in $v$ which reproduces itself under the transformation, except that the cutoff changes. In higher orders the canonical variables also run. The light-front similarity renormalization group and coupling coherence fix the QCD hamiltonian as an expansion in powers of the running canonical coupling.
QCD: A Strategy for Bound State Calculations and Confinement
While realistic calculations will no doubt require a more elaborate procedure, a relatively simple strategy for doing bound state calculations can now be outlined [2,4].

i) Start with the canonical hamiltonian, $H_{can}$, and use the similarity renormalization group and coupling coherence to compute

$$H^{\Lambda} = H^{\Lambda}_{free} + g_\Lambda\, H^{\Lambda}_{(1)} + g^2_\Lambda\, H^{\Lambda}_{(2)} + \cdots \,.$$

Truncate this series at a fixed order.

ii) Choose an approximate hamiltonian that can be treated nonperturbatively,

$$H^{\Lambda} = H^{\Lambda}_0 + V^{\Lambda} \,.$$

You must choose $\Lambda$ and $H^{\Lambda}_0$ to minimize errors.

iii) Accurately solve $H^{\Lambda}_0$ as the leading approximation.

iv) Compute higher order corrections from $V^{\Lambda}$ using bound state perturbation theory.

v) To improve the calculation further, return to step (i) and compute the hamiltonian to higher order.
There are two principal reasons that this strategy may fail for QCD. First, the hamiltonian is computed perturbatively, so that errors in the strengths of all operators are at least as large as a power of α. Small errors in the strengths of irrelevant operators tend to produce even smaller errors in results. However, errors in marginal operators tend to produce errors of the same order in results, and small errors in relevant operators tend to produce exponentially large errors in results. At a minimum we expect that we will have to fine tune relevant operators, which means tuning a finite number of functions of longitudinal momenta. Second, chiral symmetry breaking operators (where light-front chiral symmetry should be distinguished from equal-time chiral symmetry [1]) will not arise at any order in an expansion in powers of the strong coupling constant. We must work in the broken symmetry phase of QCD ab initio and insert chiral symmetry breaking operators. Simple arguments lead us to expect that only relevant operators need to be considered if transverse locality is maintained, but there are no strong arguments for transverse locality in these operators.
Despite these limitations, this strategy may be applied to the study of bound states containing at least one heavy quark [5], as discussed by Martina Brisudová in these proceedings; although even here masses should be tuned, as expected. The strategy is conceptually simple and there are no ad hoc assumptions.
The first step is to compute the effective QCD hamiltonian to order α. I refer the reader to the literature for details on the canonical hamiltonian [11]. The first applications of the approach are to mesons [5], and we assume that for sufficiently small cutoffs we can choose $H^{\Lambda}_0$ to contain only interactions in $H^{\Lambda}$ that do not involve particle production or annihilation, as dictated by our initial assumption that a constituent picture will arise. This means we can first focus on operators that act in the quark/antiquark sector. I emphasize that all operators must be computed, and without the confining interactions in sectors containing gluons the entire approach would make no sense.
First consider the second-order correction to the quark self-energy. This results from the quark mixing with quark-gluon states whose energy is above the cutoff. If we assume that the light-front energy transfer through the quark-gluon vertex must be less than $\Lambda^2/P^+$, the coupling coherent self-energy for quarks with zero current mass behaves, up to finite parts, like

$$\delta\Sigma(p) \;\propto\; g^2\, C_F\, \frac{\Lambda^2}{p^+}\, \ln\!\left(\frac{p^+}{\epsilon\, P^+}\right).$$

Here the quark has longitudinal momentum $p^+$, while the longitudinal momentum scale in the cutoff is $P^+$. The first and most interesting feature of this result is that I have been forced to introduce a second cutoff, which restricts how small the longitudinal momenta of any particle can become: in units of $P^+$, momenta below $\epsilon P^+$ are excluded. Without this second cutoff on the loop momenta, the self-energy is infinite, even with the vertex cutoff. This second cutoff should be thought of as a longitudinal resolution. As we let ε → 0 we resolve more and more wee partons, and in the process we should confront effects normally ascribed to the vacuum. In this case the wee gluons are responsible for giving the quark a mass that is literally infinite. Theorists who insist on deriving intuition from manifestly gauge invariant calculations may find this interpretation repugnant, but within the framework of a light-front hamiltonian calculation it is quite natural. This second, infrared cutoff poses a problem. If we introduce a second cutoff, shouldn't we introduce a second renormalization group transformation to run this cutoff and find the new counterterms required by it? No. I will insist that all divergences associated with ε → 0 cancel exactly in all physical results for color singlet states. The important question is how these divergences can cancel so that mesons have a finite mass, and the answer to this question leads to confinement.
A nearly identical calculation leads to the second-order self-energy of gluons, and the dominant term goes like

$$\delta\Sigma_g(p) \;\propto\; g^2\, C_A\, \frac{\Lambda^2}{p^+}\, \ln\!\left(\frac{p^+}{\epsilon\, P^+}\right),$$

with $C_A$ the adjoint Casimir. The quark and gluon masses are infinite, which is half of the confinement mechanism. In addition to one-body operators we find quark-quark, quark-gluon, and gluon-gluon interactions. As we lower the cutoff, we remove gluon exchange interactions, and these are replaced by direct interactions. The analysis of all of these interactions is nearly identical, and I consider only the quark-antiquark interaction. This interaction includes two pieces: instantaneous gluon exchange, which is in the canonical hamiltonian, and an effective interaction resulting from high-energy gluon exchange. To study confinement we need to examine the longest range part of the total interaction, which is a piece that diverges as the longitudinal momentum exchange goes to zero. I outline the calculation [2,4].
High-energy gluon exchange cancels part of the instantaneous gluon exchange interaction, leaving a singular piece that at small longitudinal momentum exchange has the instantaneous form

$$V_{\text{sing}} \;\propto\; -\,g^2\, \frac{1}{(q^+)^2}\,,$$

surviving only where the exchanged gluon's free energy lies above the cutoff. Here the initial and final quark (antiquark) momenta are $p_1$ and $p_2$ ($k_1$ and $k_2$), and the exchanged gluon momentum is $q$. The energies are all determined by the momenta, $p^-_1 = p^2_{\perp 1}/p^+_1$, etc. This part of the interaction is independent of the spins. If Λ ≈ Λ_QCD, we expect further gluon exchange to be suppressed, and we are left with this singular interaction between the quark and antiquark.
The next step in the analysis is to take the expectation value of this interaction between arbitrary quark-antiquark states, $\langle \Psi_2 | V | \Psi_1 \rangle$. If we write each state in terms of a relative-motion wave function $\phi$ and expand the wave functions about $q = 0$, we find a divergence in the expectation value proportional to

$$\left( \int_{\epsilon P^+} \frac{dq^+}{(q^+)^2}\, \cdots \right) \langle \phi_2 | \phi_1 \rangle \,.$$

Unless $\phi_1$ and $\phi_2$ are the same, this vanishes by orthogonality. If they are the same, this is exactly the same expression we obtain for the expectation value of the quark plus antiquark divergent mass operators, except with the opposite sign. Therefore, there is a divergence in the quark-antiquark interaction that is independent of their relative motion and which exactly cancels the divergent masses! These cancellations occur only for color singlets, and they occur for any color singlet state with an arbitrary number of quarks and gluons. Moreover, these cancellations appear directly in the hamiltonian matrix elements, so we can take the ε → 0 limit before diagonalizing the matrix. This is half of the simple confinement mechanism. At this point it is possible to obtain finite mass hadrons even though the parton masses diverge. However, since the cancellations are independent of the relative parton motion, we must study the residual interactions to see if they are confining.
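The color bookkeeping behind this cancellation can be checked with a few lines of arithmetic. The toy below is ours: it assumes, as the discussion indicates, that each parton's divergent mass term carries its quadratic Casimir times ln(1/ε), and that the divergent two-body interaction carries 2 T_i·T_j per pair with the opposite sign, so the net coefficient is fixed by the Casimir of the whole state.

```python
NC = 3.0
CF = (NC**2 - 1.0) / (2.0 * NC)   # fundamental Casimir (quarks, antiquarks)
CA = NC                           # adjoint Casimir (gluons)

def divergence_coefficient(casimirs, c_total):
    """Net coefficient of ln(1/eps) for a multi-parton color state.

    Uses C_total = sum_i C_i + 2 * sum_{i<j} T_i.T_j, so the pairwise
    interaction piece is (c_total - sum_i C_i); adding the divergent
    masses (+sum_i C_i) leaves just c_total.
    """
    self_energies = sum(casimirs)
    interactions = c_total - sum(casimirs)   # = 2 * sum_{i<j} T_i.T_j
    return self_energies + interactions

print(divergence_coefficient([CF, CF], c_total=0.0))       # qqbar singlet: 0
print(divergence_coefficient([CF, CF], c_total=CA))        # qqbar octet: 3
print(divergence_coefficient([CF, CF, CF], c_total=0.0))   # 3-quark singlet: 0
print(divergence_coefficient([CF, CF, CA], c_total=0.0))   # qqbar-g singlet: 0
```

In this normalization the net divergence is simply the Casimir of the whole state: it vanishes exactly for color singlets with any number of quarks and gluons, and is strictly positive otherwise.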
Since I am interested in the long-range interaction, I will study the Fourier transform of the potential and compute V(r) − V(0), so that the divergent constant in which we are no longer interested is canceled. One finds that the subtracted potential grows logarithmically in both extreme directions,

$$V(x_\perp, x^-) - V(0) \;\sim\; c_{\parallel} \ln |x^-| \quad (x_\perp = 0,\ |x^-| \to \infty)\,, \qquad V(x_\perp, x^-) - V(0) \;\sim\; c_{\perp} \ln |x_\perp| \quad (|x_\perp| \to \infty,\ x^- = 0)\,,$$

with different positive coefficients $c_{\parallel}$ and $c_{\perp}$. This potential is not rotationally symmetric, but it diverges logarithmically in all directions. If the potential is not rotationally symmetric, how can rotational symmetry be restored? In light-front field theory rotations are dynamical. While it may be possible for rotational symmetry to be realized approximately in low-lying quark-antiquark states, exact symmetry requires additional explicit partons, and even approximate rotational symmetry will require additional partons if we study highly excited states. We expect excited physical states in which a quark and antiquark are separated by a large distance to contain gluons. There is no reason to assume that the gluon content of these states is the same when the state is rotated, so rotational symmetry will be restored in highly excited states only if we allow additional partons. This complicates our attempt to derive a constituent picture, but we only need the constituent picture to work well for low-lying states. The intermediate range part of the potential is rotationally symmetric, and we may expect the ground state hadrons to be dominated by the valence configuration.
Isn't the confining potential supposed to be linear and not logarithmic? There is no conclusive evidence that the long-range potential is linear, and heavy quark phenomenology shows that a logarithmic potential can work quite well; lattice calculations, however, provide strong evidence for a linear potential. But low-lying states are not sensitive to the longest-range part of the interaction, and light quark-antiquark pairs prevent even excited states from being sensitive to the longest-range part of the interaction. In any case, I do not want to argue that these calculations show that the long-range potential in light-front QCD is logarithmic. Higher order corrections could produce powers of logarithms that add up to produce a linear potential.
At first sight the above argument seems to apply directly to QED. Is QED confining? There is a confining interaction between charged particles in the hamiltonian, but there is no strong interaction between charged particles and photons. To see whether confinement survives in QED we should include the confining interaction in $H_0$ and then compute corrections in bound state perturbation theory. In QED the second-order correction from photon exchange below the cutoff exactly cancels the confining interaction. This implies that if confinement is included in $H_0$, higher order corrections are large. If the Coulomb interaction, which also appears in $H$, is included in $H_0$, bound state perturbation theory appears to converge rapidly. In QCD, on the other hand, gluons also experience a confining interaction. When second-order bound state perturbation theory is used to study the effect of the exchange of confined gluons, one finds that gluon exchange does not cancel the confining interaction in $H_0$; so this picture of confinement is at least self-consistent.
The important point is that $H$ contains a confining interaction that we are free to include in $H_0$, giving us some hope of finding a reasonable bound state perturbation theory for hadrons that resembles the one successfully applied to the study of atoms.
Summary
A constituent picture of hadrons may emerge in QCD if we use:

• hamiltonian light-front field theory

• a cutoff of order Λ_QCD on energy changes, which violates manifest covariance and gauge invariance

• a similarity renormalization group and coupling coherence

Bound states can be studied using bound state perturbation theory. The effective hamiltonian is computed as an expansion in the strong coupling and then divided into $H = H_0 + V$, with $V$ treated perturbatively. $H_0$ must include all essential interactions. We have found that $H$ contains an order α logarithmically confining two-body interaction between all colored partons, and we have begun studies of bound states using this confining interaction, as discussed in the talk by Martina Brisudová.
|
2014-10-01T00:00:00.000Z
|
1996-04-01T00:00:00.000
|
{
"year": 1996,
"sha1": "c0c8519afef8fd523e777e86c72c6fbd1d03f5ca",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c0c8519afef8fd523e777e86c72c6fbd1d03f5ca",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
36164469
|
pes2o/s2orc
|
v3-fos-license
|
Why every office needs a tennis ball: a new approach to assessing the clumsy child
The Case: A 7-year-old boy is brought to your office by his mother at the urging of his school. Although he reads extremely well and seems to understand material that is taught, he has great difficulty producing written work, initiating and finishing tasks and participating in gym class. He has also been disruptive in the classroom.
Motor coordination problems in otherwise healthy children of normal intelligence are common. Such children are often noted by parents, caregivers and teachers to have problems with daily tasks such as dressing themselves, to trip when they run, to spill things frequently and to have messy handwriting and drawing. They may be labelled as "clumsy," "awkward" and "lazy." Research has shown that children with these motor coordination problems often end up with serious academic and social impairments and problems with self-esteem. Developmental coordination disorder (DCD) is the term used when a child's delayed motor skill development affects his or her ability to perform age-appropriate daily activities (Box 1).
A total of 5%-6% of children meet the criteria for DCD. 1 This means that, on average, at least 1 child in every primary school classroom is affected. Children with DCD are more likely than their peers to experience learning, emotional and behavioural problems (including learning disabilities, anxiety and attention-deficit hyperactivity disorder). Further, the deficits of DCD usually persist through adolescence and into adulthood. Early recognition of the condition by primary care providers may reduce its ultimate academic, emotional and behavioural impact.
Epidemiology and natural history
DCD is commonly diagnosed after age 5, when minor motor problems (often noted when the child was young) are highlighted by the structured demands of a school environment. 2 The ratio of boys to girls varies from 2:1 to 5:1, depending on the group studied. The cause of DCD is poorly understood, since the results of genetic studies, imaging tests and other laboratory investigations are all inconclusive.
Children with DCD may appear to be inattentive because they have difficulty stabilizing their bodies and joints, so they look restless. They may also actively avoid tasks that require motor skills and become anxious in social situations. DCD and attention-deficit hyperactivity disorder frequently occur together, but the contribution of the motor difficulties to children's academic and social problems is often overlooked.
Although the pathophysiology is unknown, affected children appear to have underlying difficulties in motor planning (planning movements such as sitting down on a chair or figuring out how to jump), the timing and amount of force needed during movement (e.g., using too much or too little force to pick things up, being late reaching to catch a ball), and the integration of information from sensory and motor systems (e.g., relying heavily on visual information to climb stairs or fasten buttons). 3 Children may also show poor balance, slow reaction and movement times, and difficulty executing fine motor skills needed for performing self-care activities, handwriting and drawing. 2

The natural history of DCD is of concern, not because of the motor coordination problem itself but because of its impact on everyday activities and participation. Parents express concern about coordination difficulties when the child is young, but by early school age, these concerns are more evident as problems with self-care and academic activities. By the end of elementary school, social isolation, poor self-image and victimization are evident. Physical health concerns (childhood obesity and reduced physical fitness) and mental health problems (anxiety and depression) are commonly noted by early adolescence (Fig. 1).

Box 1: Diagnostic criteria for developmental coordination disorder

A. Performance in daily activities that require motor coordination is substantially below that expected, given the person's chronological age and measured intelligence. This may be manifested by:

• Marked delays in achieving motor milestones (e.g., walking, crawling, sitting)

• Dropping things

• Clumsiness

• Poor performance in sports

• Poor handwriting

B. The disturbance in criterion A significantly interferes with academic achievement or activities of daily living

C. The disturbance is not due to a general medical condition (e.g., cerebral palsy, hemiplegia or muscular dystrophy) and does not meet criteria for a pervasive developmental disorder

D. If mental retardation is present, the motor difficulties are in excess of those usually associated with it
Screening
Annual health examinations are ideal times to screen for DCD. Parents can be asked to complete a self-administered questionnaire (see example in Appendix 1, available at www.cmaj.ca/cgi/content/full/175/5/471/DC1), or the physician can conduct a structured interview, listening for difficulties commonly associated with DCD. In addition, the physician can assess the child using simple screening activities administered in his or her office (see Appendix 2, available at www.cmaj.ca/cgi/content/full/175/5/471/DC1). Children with symptoms or signs of a motor coordination disorder require further evaluation. An assessment that takes into account the differential diagnosis of DCD (Box 2) is necessary, since DCD is a diagnosis of exclusion. Elements of the child's history, physical examination and laboratory test profile that would make alternate diagnoses more likely are indicated.
Referral and treatment
Early referral to an occupational therapist or pediatric multidisciplinary team can help confirm the diagnosis and rule out comorbid conditions such as speech or language difficulties, attentional problems, learning difficulties and mental health problems. This type of team can also help devise early management plans that may improve the child's developmental outcomes. Successful treatment approaches involve various allied health professionals, and the child's parents, physician and teachers. 2,3 Armed with a diagnosis of DCD, parents are in a position to advocate for their child and to adapt their child's environment to encourage independence and self-esteem. Children with DCD lack confidence in situations where motor skills are required. Simple changes such as Velcro fasteners instead of buttons and laces can speed up dressing. Physical activities that naturally incorporate repetition and a constant environment, such as swimming, can be encouraged rather than team games. Teachers can reduce a child's stress and encourage academic progress by "matching" the child's abilities to the task. For example, reducing writing requirements, giving more time to complete tasks and encouraging different roles in physical education class can be helpful. 4 Resources containing teaching tips and strategies for parents and educators can be found at the CanChild Centre for Childhood Disability Research (www.canchild.ca).
The case revisited
Physical examination reveals that the patient has normal hearing and vision, is slightly overweight and has low muscle tone (he slouches and has unstable posture in sitting and standing positions). Administration of the screening activities shows that the boy's one-legged balance is poor. His pencil grasp is awkward, he uses excessive pressure, and his printing is slow. His sitting posture at the desk is "floppy" and he props his head upright by leaning on his other hand. The patient is unable to bounce and catch a tennis ball (see video clip, available at www.cmaj.ca/cgi/content/full/175/5/471/DC2).
In the parent questionnaire, the mother indicates that her son has great difficulty with many motor-based activities, is slow to learn new motor skills and becomes easily frustrated. Further questions about his disruptive behaviour in the classroom reveal that he misbehaves only when written work is required; he is not otherwise inattentive.
DCD is diagnosed. The physician provides the boy's parents with a variety of educational materials and suggests a referral to an occupational therapist and a review in 3 months.
|
2017-08-08T19:36:12.287Z
|
2006-08-29T00:00:00.000
|
{
"year": 2006,
"sha1": "8720db926d89927bf8a4fa69bd514c040248f72a",
"oa_license": null,
"oa_url": "http://www.cmaj.ca/content/175/5/471.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "81c80bb39320a6aedf7a7f7e3d5924c155bafb53",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
257331143
|
pes2o/s2orc
|
v3-fos-license
|
The effects of online tourism information quality on conative destination image: The mediating role of resonance
With the increasing popularity of mobile applications, people enjoy browsing online tourism information on social media. This information may cause psychological resonance, which in turn stimulates travel intentions. This study examined the relationship between online tourism information quality (OTIQ), resonance, and conative destination image. A partial least squares structural equation model was used to analyze the survey data of 426 users who had recently used social media to browse online tourism information. The results show that four dimensions of OTIQ (value-added, relevancy, completeness, and design) affect cognitive resonance, and three dimensions of OTIQ (interestingness, design, and amount of information) affect emotional resonance. Both cognitive resonance and emotional resonance directly affect the conative destination image. This study contributes to online tourism marketing research by identifying the factors of OTIQ that arouse tourists' resonance. It also contributes to destination image research by extending the application of resonance theory and examining the role of cognitive resonance and emotional resonance in forming a conative destination image. Understanding how OTIQ builds a destination image can help destinations improve the quality of online tourism information to attract potential tourists. This study also provides recommendations to destination marketers to formulate appropriate marketing strategies in the age of innovative technology.
Introduction
As social media has become a new marketing tool for tourism destinations, the mechanism by which online tourism information influences destination image formation has been receiving attention from the tourism industry and academia (Lam et al., 2020). By July 2022, the number of social media users worldwide had surpassed 4.7 billion, and they used social media for more than 2 h a day on average (Kepios, 2022). The valuable information provided by online social media not only makes new marketing of commodities, services, and communication possible but also changes the way people make travel decisions (Khan, 2017). According to U.S. statistics, 97% of millennials post photos on social media while on vacation, and about 52% of travelers decide to go to a specific destination after seeing photos/videos of friends, family, or peers on social media (Statistics, 2022). Every week, over 1 million travel-related hashtags are searched on Instagram (a worldwide popular social media app; Statistics, 2022). Social media has become the first-choice channel for getting tourism information (Wang et al., 2022). Social media users watch travel pictures, opinions, reviews, comments, and travel experiences shared by others, either pushed automatically by apps or found through deliberate search (Majeed et al., 2020). Although we know that user-generated tourism information shapes the image of a destination in people's minds and thereby influences their travel decision-making (Fan et al., 2020), we do not know what types of information influence people most in choosing a destination. Therefore, a study is needed to understand the various kinds of tourism information on social media and their impact on online tourism marketing.
Recent studies have indicated that online tourism information quality (OTIQ) is an important indicator in shaping the online tourism market and contributing to the formation of destination image (Kim et al., 2017). Researchers have examined the effects of influential online tourism information on online users' travel purchase intentions (Hateftabar, 2022), visit intentions (Cheng et al., 2020), and word-of-mouth recommendations (Majeed et al., 2020), which are the components of the destination conative image. Although previous studies have indicated that online information content is the key factor influencing tourists' conative behavior (Lam et al., 2020; Majeed et al., 2020), there is a lack of studies explaining the mechanism by which OTIQ influences tourists' conative behaviors in the formation of the conative destination image.
The resonance theory originates from sociological research and emphasizes positive cognitive and emotional consequences (Giorgi, 2017). Researchers have applied the resonance theory to explain how customers match received online information with their internal worldview (Camilleri and Kozak, 2022). The resonance theory states that adequate information will arouse the audience's cognitive resonance and emotional resonance, which can influence customers' further behavioral intentions (Cheng et al., 2020). A literature search of mainstream databases (EBSCOhost, Scopus, and ScienceDirect) revealed only two empirical tourism studies on resonance. These two studies tested the effect of users' resonance with travel blogs on word-of-mouth and travel behavioral intentions (Cheng et al., 2020; Mohanty et al., 2022). In this case, it is possible that high-quality online information can arouse tourists' resonance responses, ultimately influencing tourists' perceptions of the destination conative image. However, no research has been found on the psychological resonance arising from receiving high-quality tourism information in tourism research.
This study aims to examine the relationship between online tourism information quality (OTIQ), resonance, and conative destination image. Firstly, this study contributes to online tourism marketing research by identifying the factors of OTIQ that arouse tourists' resonance, thereby prompting them to form a conative destination image. Understanding this mechanism helps destinations improve the quality of online tourism information to attract potential social media users to become destination tourists. Secondly, this study extends the application of resonance theory in destination image research by examining the role of cognitive resonance and emotional resonance in forming a conative destination image. Finally, the results of this study help destination marketers grasp accurate online tourism information and effectively carry out online tourism marketing promotion.
The study is organized as follows. A brief literature review of online tourism information quality, resonance theory, and conative destination image is provided in the next section. Then, the research hypotheses and the research methods are presented in Sections "Research hypotheses" and "Research methods", respectively. The results are reported in Section "Findings". Finally, conclusions are drawn, the theoretical and practical implications are discussed, and the limitations of this study and recommendations for further research are presented.
Conative destination image
Previous studies on destination image have shown that destination image can affect a destination's competitiveness in the market (Liang and Lai, 2022); thus, how a destination can stand out through a favorable destination image is one of the most explored questions in tourism research. Hunt (1975) defined destination image as an individual's beliefs, impressions, and perceptions of a specific place. Initially, researchers proposed a two-dimensional tourism destination image model, in which the tourism destination image contains both a cognitive image and an affective image (Gartner, 1994). The cognitive image relates to a person's knowledge and beliefs about a tourism destination, and the affective image relates to how they feel about the destination (Baloglu and McCleary, 1999). Later on, researchers developed a conative destination image construct, which involves tourists' behavioral intentions (Ryan and Ninov, 2011). A conative destination image is a collection of future actions, and it contains three test items, i.e., "intention to recommend, positive word of mouth, intention to revisit" (Woosnam et al., 2020). Lojo et al. (2020) stated that to effectively market a destination and form a market position, it is most important to identify the factors that influence the formation of the conative image of the destination. Besides cognitive image and emotional image, previous articles have shown that many factors (e.g., travel experience, positive emotions) can influence recommendation intention, word-of-mouth, and revisit intention (Hosany et al., 2017; Wang et al., 2023). However, most studies examined the factors for the conative image based on travel experience after visiting a destination. For tourists who have not yet visited a destination, Wang et al. (2020) pointed out that the conative destination image is highly susceptible to user-generated tourism content on social media; however, there is a lack of studies linking how online tourism information affects tourists' psychological status and, in turn, their conative image.
Online tourism information quality
At the beginning of the 21st century, tourism information began to appear on the internet and gradually evolved into a channel for tourism organizations to communicate with tourists (Xiang and Gretzel, 2010). The emergence of Web 2.0 made it easy for tourists to create and share information with other tourists on social media (Aghaei, 2012). Therefore, social media have become the most influential marketing tool for the public, businesses, and government organizations (Hays et al., 2013), because tourists consider social media to be the most trusted source of information about a destination (Fotis et al., 2012). Tourists can consult travel information on social media to support their travel decisions pre-trip, during a trip, and post-trip (Wang et al., 2022).
As information recipients, tourists obtain online travel information on social media published by information providers (Lu et al., 2016). However, tourists may be confused by the vast amount of online travel information pushed by information providers (Dharmasena and Jayathilaka, 2021); therefore, they have to pay attention to the quality of online travel information. Previous studies mainly focused on the content of travel information rather than its quality until Kim et al. (2017) introduced the concept of OTIQ. Based on Chaiken's (1980) heuristic system model, Kim et al. (2017) developed a multilevel OTIQ model for social media that includes content qualities (i.e., value-added, relevancy, timeliness, completeness, and interestingness) and non-content qualities (i.e., web page design and amount of information). Researchers have applied the multidimensional OTIQ scale to examine the impact of online travel information on the formation of the destination image in different scenarios (Rodríguez et al., 2019; Guo and Pesonen, 2022). However, these studies only tested tourists' behavioral intentions when viewing online information on social media without investigating their psychological changes. There is a lack of studies on how online information affects travelers' psychological status.
The role of resonance theory
Resonance is one of the most widespread sociological concepts and was initially used to explain an individual's understanding of an organizational framework (Snow et al., 1986). Researchers have applied it to describe the fit between information and audience worldviews (McDonnell et al., 2017). There are two main types of resonance: cognitive resonance and emotional resonance. Cognitive resonance is based on the audience's beliefs and understanding, and emotional resonance is based on the audience's feelings, passions, and desires (Giorgi, 2017). These two types of resonance lead to positive consequences (Snow et al., 1986; Su et al., 2019). As for their antecedents, researchers found that cognitive resonance can be achieved when people can interpret their understanding of information in a way that matches their expectations (Shang et al., 2017), and emotional resonance can be achieved when acquired information arouses people's curiosity and desire (Kang et al., 2020). Emotional resonance usually interpenetrates with cognitive resonance and eventually leads to a strong resonance that lasts for a period of time (McDonnell et al., 2017). Therefore, if tourists find that the travel information on social media meets their expectations and generates desires, they will have cognitive resonance and emotional resonance. However, what types of online travel information (content qualities and non-content qualities) can arouse tourists' cognitive resonance and emotional resonance is still a question that should be investigated.
Research hypotheses
In recent years, social media has been recognized as the most important source of tourism information (Lee et al., 2019). People not only read travel information on social media but also engage in discussing travel information in the comment area (Camilleri and Kozak, 2022). According to the multi-level OTIQ model for social media (Kim et al., 2017), tourists perceive both content qualities and non-content qualities (value-added, relevancy, timeliness, completeness, interestingness, design, and amount of information) of online information when they browse destination travel information on social media. When the information people receive is consistent with their expectations and meets their demands, they are likely to have cognitive resonance with it (McDonnell et al., 2017). On the other hand, when the information people receive is interesting and amazing, it is more likely to facilitate conversations and emotional exchanges with the audience (Mangold and Faulds, 2009); therefore, they are more likely to respond emotionally to the information content (Shang et al., 2017). In general, the information posted online includes attractions and accommodations in a destination (Wang et al., 2022), which helps tourists plan their trips. Furthermore, online information about a destination also includes tourism activities and entertainment (Wong et al., 2020), which stimulates tourists' interests. Then, if tourists find the online travel information sufficient, useful, and relevant, they will have a cognitive resonance with the online travel information; if tourists find the online travel information interesting and amazing, they will have an emotional resonance with the online travel information. Therefore, the following two sets of hypotheses are proposed.

H1: The OTIQ dimensions of (a) value-added, (b) relevancy, (c) timeliness, (d) completeness, (e) interestingness, (f) design, and (g) amount of information positively influence tourists' cognitive resonance.

H2: The OTIQ dimensions of (a) value-added, (b) relevancy, (c) timeliness, (d) completeness, (e) interestingness, (f) design, and (g) amount of information positively influence tourists' emotional resonance.
Cognitive resonance and emotional resonance are not mutually independent; the two interpenetrate in a complex process, as people focus not only on the factual content of an event but also on the emotional touch it brings (Giorgi, 2017). Since people's search for information is driven by emotions (McDonnell et al., 2017), tourists' emotions will be affected by the cognitive resonance arising from the online travel information they obtain from social media. If tourists cannot get cognitive resonance from online travel information, they will not have a feeling of emotional resonance. That is, tourists find online travel information interesting only after they find it useful. Therefore, cognitive resonance is a necessary condition for tourists to have emotional resonance from online travel information.
H3: Tourists' cognitive resonance positively influences their emotional resonance.

Giorgi (2017) suggested that resonance is an antecedent of the audience's intentional output. Previous studies have shown a positive relationship between engagement with travel information on social media and tourists' behavioral intention (Tran, 2020). Mohanty et al. (2022) found that cognitive resonance and emotional resonance can facilitate social media engagement behaviors. Therefore, the cognitive resonance and emotional resonance obtained by engaged tourists from online travel information may contribute to the formation of the destination image. Cheng et al. (2020) stated that when browsing travel information on social media, tourists are emotionally inspired by the received information, which in turn forms the destination image or a visit intention. Wu and Lai (2022) pointed out that tourists are influenced by media information and are keen to go to destinations that impress them and conform to their self-congruity. This means that tourists' cognitive resonance and emotional resonance to online travel information may directly influence their perception of the conative image of a destination. Therefore, the following hypotheses are proposed.
H4: Tourists' (a) cognitive resonance and (b) emotional resonance positively influence their conative image of a destination.
This study aims to examine the relationship between OTIQ, cognitive resonance, emotional resonance, and conative image. The hypothesized model is shown in Figure 1.
Measurement
The measurable items for the seven factors (amount of information, completeness, design, interestingness, relevancy, timeliness, and value-added) of OTIQ used in this study are inspired by Kim et al. (2017). The measurable items of cognitive resonance and emotional resonance are borrowed from Cheng et al. (2020). The measurable items of the conative image are borrowed from Afshardoost and Eshaghi (2020). All the measurement scales have been well validated in previous studies. To suit the research setting, an expert meeting was conducted to slightly adjust the original measures according to the content of this case. The panel of experts included two scholars in tourism, a social media company executive, a member of the tourism board, and three tourists who had used social media for planning travel. The experts recommended some modifications, such as removing two items referring to video quality and to no specific location in measuring conative image.
Questionnaire design and data collection
This study divided the questionnaire into three sections. The first section consisted of screening questions to ensure that respondents met the criteria for the study. Respondents were screened based on three criteria: (i) being at least 18 years old; (ii) having used social media in daily life; and (iii) having browsed tourism destination information on social media within a week. The second section consisted of the seven dimensions of OTIQ, cognitive resonance and emotional resonance, and conative destination image. Respondents were asked to answer the questionnaire based on their last experience browsing online travel information. The questionnaire was measured using a 7-point Likert scale (where "1" = strongly disagree, "7" = strongly agree). The third section was about the respondents' background information. The English questionnaire was translated into Chinese, and the Chinese questionnaire was back-translated into English by two English-Chinese translators to eliminate translation bias. Expert consultation with five tourism scholars was conducted to validate the content of the questionnaire. A pilot test was conducted with 30 social media users to further ensure the accuracy and readability of the questionnaire. Some adjustments were made to the wording of some items, such as adding explanations for specific social media platforms. An online survey was conducted from the 1st of September to the 31st of October 2022. The online survey is a non-probability sampling method widely used in tourism research (Cho et al., 2020). Using an online survey in this study made it possible to reach more potential participants with experience in browsing online travel information. A total of 480 samples were collected. Fifty-four questionnaires gave identical scores on most of the questions and were excluded, so 426 samples were valid for data analysis. The effective rate of the questionnaire was 88.75%.
Findings
The partial least squares structural equation modeling (PLS-SEM) was used to evaluate the research model. Compared with CB-SEM, PLS-SEM has fewer restrictions on the normal distribution of the data (Hair et al., 2017). This study used SmartPLS (v.3.3.9) for data analysis. The sample size satisfied a power analysis based on the part of the model with the largest number of predictors (Hair et al., 2021). Table 1 shows the characteristics of the 426 valid respondents. Among the participants, 63.1% were female. The 18-20, 21-30, and 31-40 age groups accounted for 18.8, 42.0, and 19.5%, respectively. 61.2% of participants had a bachelor's degree. 91.8% of participants used social media 2-3 times or more a day.

Reliability and validity

Table 2 shows the means, standard deviations (SD), and factor loadings for the 43 measurable items. All factor loadings were above 0.7, ranging from 0.701 to 0.917. As shown in Table 3, the Cronbach's alpha coefficients for each construct exceeded 0.7, the values of composite reliability and rho_A for each construct also exceeded 0.7, and the average variance extracted (AVE) scores were greater than 0.50; therefore, substantial reliability and convergent validity were achieved (Hair et al., 2019).
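For readers who want to reproduce these reliability statistics, the standard formulas take only a few lines. The sketch below is ours, with made-up loadings rather than the values from Table 2; composite reliability and AVE are computed from standardized loadings, and Cronbach's alpha from raw item scores.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2                  # item error variances
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam**2)

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of raw item scores."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1.0 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

lam = [0.78, 0.83, 0.88, 0.74]             # hypothetical construct loadings
print(round(composite_reliability(lam), 3))       # should exceed 0.70
print(round(average_variance_extracted(lam), 3))  # should exceed 0.50
```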
The correlation between constructs was less than the square root of the AVE score (numbers on the diagonal), and all values of the heterotrait-monotrait (HTMT) ratio were below the 0.9 threshold (Henseler et al., 2015). Table 4 shows the HTMT ratios. Because the HTMT ratios for the pairs 'Design' and 'Amount of information' (0.857) and 'Timeliness' and 'Design' (0.855) were high, a bootstrapping method was used to assess the inference of HTMT. The upper bounds of the confidence intervals (97.5%) for both pairs were less than 1; thus, the assumption of discriminant validity was not violated, and discriminant validity was confirmed (Hair et al., 2021).
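The HTMT ratio itself is straightforward to compute from raw item scores. Below is our illustrative implementation of the Henseler et al. (2015) definition (mean between-construct item correlation divided by the geometric mean of the mean within-construct item correlations), run on simulated data rather than the study's items.

```python
import numpy as np

def htmt(X_a, X_b):
    """Heterotrait-monotrait ratio for two constructs' item score matrices."""
    Xa, Xb = np.asarray(X_a, float), np.asarray(X_b, float)
    R = np.corrcoef(np.hstack([Xa, Xb]), rowvar=False)
    ka = Xa.shape[1]
    hetero = R[:ka, ka:].mean()            # between-construct correlations

    def mean_within(block):                # mean of upper-triangle entries
        iu = np.triu_indices_from(block, k=1)
        return block[iu].mean()

    mono = mean_within(R[:ka, :ka]) * mean_within(R[ka:, ka:])
    return hetero / np.sqrt(mono)

# Two correlated latent factors, three items each (all numbers made up).
rng = np.random.default_rng(1)
f1 = rng.normal(size=500)
f2 = 0.6 * f1 + 0.8 * rng.normal(size=500)
Xa = np.column_stack([0.8 * f1 + 0.6 * rng.normal(size=500) for _ in range(3)])
Xb = np.column_stack([0.8 * f2 + 0.6 * rng.normal(size=500) for _ in range(3)])
print(round(htmt(Xa, Xb), 3))              # near 0.6, below the 0.9 threshold
```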
This study used Harman's single-factor test to check for common method variance. The results indicated that the first factor explained 39.079% of the variance; as such, common method variance was not an issue in this study (Podsakoff et al., 2003). In addition, the values of all variance inflation factors (VIF) were below 3 (as shown in Table 2), indicating that there was no collinearity problem in this study (Hair et al., 2019).
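Both checks in this paragraph are easy to script. A minimal sketch, using simulated standardized items in place of the survey data and approximating Harman's single-factor test by the variance explained by the first principal component:

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical (n_respondents, n_items) matrix of standardized item scores;
# in practice this would be the 426 x 43 survey data.
rng = np.random.default_rng(2)
X = rng.normal(size=(426, 10))

# Harman's single-factor test, approximated by an unrotated first component.
first_factor = PCA().fit(X).explained_variance_ratio_[0]
print(f"first factor explains {first_factor:.1%} of variance")  # want < 50%

# Variance inflation factor for each predictor column.
vifs = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
print(np.round(vifs, 2))                   # want all values below 3
```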
Results of partial least squares analysis
This study used a bootstrapping technique of 5,000 re-samples to examine the significance of the hypothesized model (Hair et al., 2017). The results of PLS-SEM are shown in Figure 2 and Table 5. The R-squared values of the three endogenous latent variables (cognitive resonance, emotional resonance, and conative image) were 0.520, 0.601, and 0.479, respectively. Four dimensions of online tourism information quality (value-added, relevancy, completeness, and design) significantly influence cognitive resonance (β(value-added) = 0.188, p = 0.005; β(relevancy) = 0.195, p = 0.043; β(completeness) = 0.212, p < 0.001; β(design) = 0.181, p = 0.003), supporting hypotheses H1 (a), H1 (b), H1 (d), and H1 (f). Three dimensions of online tourism information quality (interestingness, design, and amount of information) significantly influence emotional resonance (β(interestingness) = 0.186, p < 0.001; β(design) = 0.254, p < 0.001; β(amount of information) = 0.161, p = 0.012), supporting hypotheses H2 (e), H2 (f), and H2 (g). Cognitive resonance significantly influences emotional resonance (β = 0.285, p < 0.001) and conative image (β = 0.426, p < 0.001); emotional resonance significantly influences conative image (β = 0.335, p < 0.001). Thus, hypotheses H3 and H4 were supported. To assess whether omitted constructs have a substantial effect on endogenous constructs, effect size f² values were calculated (Hair et al., 2017). The f² values for the significant paths above ranged from 0.027 to 0.201, all above Cohen's (2013) criterion of 0.02 for a small effect size.
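The bootstrap logic behind these significance tests can be sketched independently of SmartPLS. The toy below is ours: it bootstraps a single standardized path coefficient with a percentile confidence interval (1,000 resamples instead of the paper's 5,000, for speed) on simulated construct scores.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 426                                   # same sample size as the survey
x = rng.normal(size=n)                    # e.g., cognitive resonance scores
y = 0.4 * x + rng.normal(size=n)          # e.g., conative image, true path 0.4

def path_beta(x, y):
    # standardized OLS slope equals the correlation for a single predictor
    return np.corrcoef(x, y)[0, 1]

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)           # resample respondents with replacement
    boot.append(path_beta(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"beta = {path_beta(x, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# the path is deemed significant when the interval excludes zero
```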
Conclusion
Online travel information quality has a significant effect on cognitive resonance in four dimensions (value-added, relevancy, completeness, and design); among the four, completeness has the greatest impact, and relevancy ranks second. Three dimensions (interestingness, design, and amount of information) of OTIQ have a significant effect on emotional resonance; among the three, design has the greatest impact. This means that complete and relevant information is what audiences are searching for, and design (interface layout and attractive headings) can trigger their emotions. This study also found that cognitive resonance and emotional resonance positively influence the conative destination image. These results are consistent with Mohanty et al.'s (2022) finding that the cognitive resonance and emotional resonance stimulated by information in travel vlogs have an effect on travel intention.
Theoretical implications
Although researchers have classified OTIQ into content cues and non-content cues (Kim et al., 2017; Rodríguez et al., 2019), they did not distinguish how content cues and non-content cues affect audiences' psychological status. This study indicates that content cues (e.g., relevancy, completeness) strongly stimulate tourists' cognitive resonance, and non-content cues (e.g., design) strongly stimulate tourists' emotional resonance. Tourists search for travel information on social media and experience cognitive resonance once they find the content relevant and useful for planning their trips, and they have an emotional response once they are impressed by the design of the social media interface. This study contributes to online tourism marketing research by explaining how online travel information influences tourists' psychological resonance.
Previous studies have applied the concept of resonance in destination image research (e.g., Cheng et al., 2020), but no studies have evaluated the relationship between cognitive resonance and emotional resonance, and no study has compared which resonance has a stronger effect on the formation of the destination image. This study provides evidence that cognitive resonance is more important than emotional resonance in forming a conative image of a destination. Cognitive resonance not only directly affects the formation of a conative image but also indirectly affects it through emotional resonance. This implies that after customers receive online tourism information, they may first filter it, retaining information that matches their cognition, which then shapes emotional resonance. In addition, by applying the concept of resonance, this study provides a complete picture of how OTIQ can create the conative image of a destination. This study contributes to destination image research by extending the application of resonance theory and examining the role of cognitive resonance and emotional resonance in forming a conative destination image.
Practical implications
This study provides marketers with important insights into online tourism marketing. DMOs should carefully design the information posted on social media to effectively market the destination. The information provided should be value-added, relevant, and complete. For example, DMOs can cooperate with influential KOLs to publish the cultural customs of tourist destinations, Instagram-worthy locations, niche attractions, trending stores, travel itineraries, and other information in a thematic manner on social media. In addition, the interface design is also important. For example, a well-designed virtual appearance interface (subcultural beauty, color scheme, layout of image and video modules) can make a destination stand out from the crowd of online information (Jamshidi et al., 2021). These designs can inspire not only cognitive resonance but also direct emotional resonance.
Social media is one of the strongest marketing media in the world, so the results of this study are applicable not only to destination marketing but also to other online tourism marketing areas. For example, hotels can push a large amount of esthetically pleasing product and brand information tailored to target customers on social media to attract their attention, while monitoring customers' reactions to each piece of product information in order to refine further online marketing. Travel agents should use precision marketing to post complete and valuable travel information on social media, design different packages for multiple user groups, and use attractive interfaces to appeal to customers' emotional resonance. High-quality online information helps to gain customers' recognition and earn a corporate reputation.
Limitations and future research
This study targeted tourists before they visited a destination. Further research can compare the differences in the paths between two groups of tourists: those who have and those who have not visited the destination before. The data were collected in China. Further research is recommended to collect data in different countries to compare any differences in forming resonance and the conative image of a destination. This research model consists only of OTIQ, resonance, and conative image; researchers can extend it with other constructs to understand the impact of OTIQ and resonance on destination image in a comprehensive way.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants to participate in this study was not required in accordance with the national legislation and the institutional requirements.
Author contributions
XuW: idea inception, data collection, data analysis, and writing - original draft. XiW: idea inception and refinement, writing - review and editing, and evolution of research goals and aims. IL: experiment design, data curation, writing - review and editing, and evolution of research goals and aims. All authors contributed to the article and approved the submitted version.
|
2023-03-04T16:18:35.452Z
|
2023-03-02T00:00:00.000
|
{
"year": 2023,
"sha1": "82e94dfc4dfd02ed3250cc5f8f377cdec54e346d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1140519/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f161c5b6faad412c7909917830e8623862003fa",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
267164421
|
pes2o/s2orc
|
v3-fos-license
|
Photoacoustic viscoelasticity assessment of prefrontal cortex and cerebellum in normal and prenatal valproic acid-exposed rats
Mechanical properties of brain tissues are principal features from several points of view: diagnosis, brain performance, and neurological disorders. The viscoelastic properties of brain tissue are particularly determinative. In this study, based on a proposed accurate and non-invasive method, we measured the viscoelastic properties of the prefrontal cortex and cerebellum, two important brain regions involved in motor learning and the pathophysiology of autism spectrum disorder (ASD). Using photoacoustic systems, the viscoelastic properties of tissues from the cerebellum and prefrontal cortex of normal and prenatal VPA (valproic acid)-exposed (i.e., autistic-like) offspring rats were measured. The results of our study show that the cerebellums of normal tissues are stiffer than the tissue obtained from autistic-like rats, while the viscoelasticity of the prefrontal cortex of normal tissues is higher than that of autistic ones. The proposed method for measuring the viscoelastic properties of brain tissue has potential not only for fundamental studies but also as a diagnostic technique.
Introduction
Autism spectrum disorder (ASD) is a group of the most frequently diagnosed neurodevelopmental disabilities, characterized by impairments in social skills, a range of deficits in cognitive function, and altered motor learning [1,2]. The prevalence of ASD has been reported as 1-1.7%, with an increasing trend over time [3]. Although the etiology of ASD is poorly understood, which makes diagnosis and treatment challenging, many studies have focused on the genetic and epigenetic factors contributing to autism pathophysiology [4]. However, mechanical factors that may severely affect brain pathology have not yet been thoroughly assessed in ASD. Structural and mechanical properties of the brain are interlinked with brain composition [5] and gray and white matter properties [6]. Furthermore, it is generally agreed that the brain tissue's mechanical properties not only strongly influence normal brain function and development but can also alter the progression of neurological disorders [5]. Several studies have shown a direct correlation between abnormal mechanical properties and neurodegenerative conditions, such as Alzheimer's disease, encephalomyelitis, and multiple sclerosis [7]. Currently, some ASD-related quantitative differences in brain morphometry in various regions (particularly in the prefrontal cortex and cerebellum) have been reported [5]. Abnormalities in white matter and disorganized neuronal connectivity have also been previously shown in ASD brains [7]. However, whether any mechanical changes can be detected in brain areas and regions in an autistic-like model (such as maternal exposure of rats to VPA) has not been reported yet.
Examining the local mechanical properties of brain tissue in pathological conditions would enable scientists to shed light on a new area of exploration for treatment targets or diagnostic markers in ASD that may affect brain structure. Moreover, the brain is mechanically more compliant than other biological tissues and can exhibit viscoelastic deformations [7]. A quantitative evaluation of the viscoelasticity characteristics of multiple brain areas might pave an appealing new way toward finding the underlying mechanisms contributing to the functioning of the normal brain and to neurodevelopmental disorders. There are several studies assessing various mechanical properties of brain tissue, including elasticity [8,9] and viscoelasticity [6,10], in different brain areas and under different conditions such as ageing. Although numerous studies have been conducted on the viscoelasticity of brain tissue, there is still a lack of information regarding how brain diseases may affect these mechanical properties.
With the recent acknowledgement of the brain areas that are crucial for social and cognitive functions, cerebellum and prefrontal cortex abnormalities are associated with autistic symptoms [11]. Accumulating evidence indicates that the cerebellum is involved not only in motor coordination but also in cognitive and social functions [12]. Structural and functional cerebellar abnormalities have been frequently reported in patients diagnosed with autism [11][12][13]. Recent studies have suggested a pivotal implication of the prefrontal cortex in autism to explain some symptoms [14,15]. The prefrontal cortex is crucial for social interaction, emotional behaviors, and the higher-order cognitive, language, and executive functions that are disrupted in ASD [16]. Studies investigating structure and function in ASD patients have shown a positive and direct correlation between prefrontal cortex abnormalities and autism traits [11]. In addition, cerebellar-prefrontal cortex functional connectivity changes have also been identified in human and mouse models of ASD [17]. In the present study, we attempted to examine whether induction of autism may affect the viscoelastic properties of these brain tissues. Mechanical properties of brain tissue play an important role in modulating the function and dysfunction of the brain [18]; therefore, characterizing the mechanical properties of brain tissue may help better understand the pathological changes that occur in brain diseases.
There have been several methods for studying the viscoelasticity of brain tissue, such as rheology, in which an externally controlled deformation is applied to the brain sample and the resulting strain and stress are measured [19,20]; ultrasound elastography (USE), which is composed of an acoustic actuation for disturbance introduction and ultrasound imaging followed by fitting to a rheological model [21]; indentation, which involves applying a small force to the surface of brain tissue followed by measuring its resulting deformation [6,22]; magnetic resonance elastography (MRE), which uses magnetic resonance imaging (MRI) to capture images of the tissue and calculate the displacement caused by an external vibration [23,24]; and, on the smaller scale, atomic force microscopy (AFM), which has been used for evaluating the mechanical properties of biological samples including cells, biomolecules, and tissue. AFM viscoelasticity measurement is performed by applying a small indentation to the tissue surface while recording the deflection of the cantilever [25,26]. There is a high risk of damaging the tissue in rheology, indentation, and AFM because of the application of external forces. Although MRE is a non-invasive method, a bulky, expensive unit is required for this system. Ultrasound-based techniques are non-invasive and relatively widespread, with clinical applications for liver fibrosis and breast cancer providing whole-body imaging depth, but they suffer from a low resolution of around 500 µm [27].
Photoacoustic (PA) imaging is a rapidly expanding technique that has emerged in the past decade, enabling imaging of deep tissues with high resolution [28]. This method relies on laser excitation, usually a pulsed laser, which triggers a thermoelastic expansion of tissue, producing high-frequency ultrasound waves that can be detected by ultrasound transducers [29,30] or optical methods such as interferometry [31,32]. PA imaging (PAI) has numerous applications, ranging from small organelle to whole organ imaging [33,34]. PA microscopy (PAM) can provide high-resolution images, offering insights into cellular morphology [35], functional status [36][37][38], and molecular composition [39]. PA tomography can be used to generate 3D images of deep tissues, up to several centimeters deep, by employing a wide laser beam with an array of transducers for detection [40]. PA endoscopy has also been developed for imaging internal organs such as the gastrointestinal tract [41,42]. Given that conventional PAI is based on optical absorption, it has been demonstrated that mechanical properties, such as viscoelasticity, can also be measured using this technique [43]. The viscosity-elasticity ratio can be determined from the phase delay between the PA signal and the laser excitation signal in the photoacoustic viscoelasticity (PAVE) system, in which laser modulation is mostly in the kHz range. PAVE has been employed for the detection of liver diseases such as hepatitis and cirrhosis [44,45], distinguishing tumors from surrounding tissue [43,46], identification of atherosclerosis [47][48][49] and esophageal disease [50], mapping mechanocellular properties [51,52], and measuring mechanical parameters of gray and white matter of the mouse brain [53]. As mentioned above, various techniques exist for tissue differentiation based on viscoelasticity, each with its own benefits and applications, but PAVE offers a non-destructive method, like MRE and USE. The most important advantage of the PAVE technique compared to ultrasound or MRE methods is a higher spatial resolution (around 10 times), at the cost of a lower imaging depth (a few cm). Moreover, those systems are large and expensive [27].
In the present study, the viscoelasticity of various brain areas, including the cerebellum and prefrontal cortex of 5 control rats and 5 autistic-like ones, has been measured by the proposed PAVE system as a reliable, low-cost, non-invasive method. Moreover, to strengthen the deductions, the percentage of water content of these brain regions has been investigated in 4 control rats as well as 4 rats with autism.
Animals
A total of 18 male offspring of Wistar rats were used for the water content assessment and PAVE studies. Animals were kept at a room temperature of 23 ± 2 °C on a 12:12 h light/dark cycle with food and water ad libitum. In the present study, two groups, a control group and a maternal VPA-treated group referred to as autistic-like offspring, were employed, and two sets of experiments were performed. The first set was carried out to assess the alterations in viscoelasticity, and the second set was done to determine the water content of the cerebellum and prefrontal cortex. Experimental protocols and animal care were done according to the guidelines approved by the Ethics Committee of Shahid Beheshti University of Medical Sciences in line with the NIH Guide for the Care and Use of Laboratory Animals (IR.SBMU.MSP.REC.1397.335).
To induce autistic-like behavior in rats, 4 female rats (200-250 g) were mated, and pregnancy was determined by the presence of spermatozoa in vaginal smears. On embryonic day 12.5, dams received a single intraperitoneal injection of sodium valproate (NaVPA, 500 mg/kg, 150 ml/kg) or saline (150 ml/kg) [54][55][56][57][58]. After weaning, offspring were separated from their dams and housed in standard cages. The 6-week-old male offspring were divided into separate groups for the studies (Fig. 1).
Principle of PAVE
The working principle of the PAVE system is depicted in Fig. 2(a). An intensity-modulated laser beam with a sinusoidal intensity of the form I = 0.5 I0 (1 + cos ωt) is required for the sample irradiation, where ω and I0 represent the modulation frequency and time-average light intensity, respectively. Absorption of light by the tissue changes the temperature of the tissue in a periodic form, and thermal stress is generated accordingly. This sinusoidal thermal stress leads to the production of a periodic strain with the same frequency as the light modulation but with a phase lag related to the viscoelastic damping effect of the tissue [59]. By employing the Kelvin-Voigt (KV) model for viscoelasticity, which is a parallel connection of a spring and a damper as shown in Fig. 2(b), the stress (σ)-strain (ε) relation can be calculated based on Eq. 1, in which E and η are Young's modulus and the coefficient of viscosity, respectively:

σ(t) = E ε(t) + η dε(t)/dt (1)
After taking the Fourier transform of Eq. 1, the phase lag δ between the stress and strain, or between the laser excitation and the generated PA signal, can be found as expressed in Eq. 2:

tan δ = ωη/E, i.e., δ = arctan(ωη/E) (2)
Therefore, the ratio of viscosity to elasticity has a direct relation to the amount of the phase delay between the laser and PA signal and can be used for comparing the viscoelasticity properties of different samples.
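Since Eq. 2 gives tan δ = ωη/E, the viscosity-to-elasticity ratio follows directly from the measured phase delay and the modulation frequency. The following is a minimal Python sketch of that rearrangement, assuming an ideal KV response with no instrument phase offset; the numerical example reuses the control-cerebellum phase delay reported later in this paper and the 50 kHz modulation used in this study.

    import numpy as np

    def viscosity_elasticity_ratio(delta_deg: float, f_mod_hz: float) -> float:
        """eta/E (in seconds) for a Kelvin-Voigt material.

        Rearranging Eq. 2, tan(delta) = omega * eta / E, gives
        eta/E = tan(delta) / omega, with omega = 2 * pi * f_mod.
        """
        omega = 2.0 * np.pi * f_mod_hz
        return np.tan(np.radians(delta_deg)) / omega

    # Control cerebellum: 48.24 degrees at 50 kHz modulation.
    print(f"eta/E = {viscosity_elasticity_ratio(48.24, 50e3):.3e} s")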
PAVE experimental setup
Fig. 3 illustrates the system employed for measuring viscoelasticity based on the PA effect. A fibre-coupled continuous-wave laser with a wavelength of 808 nm and a maximum output power of 1 W, modulated at a 50 kHz frequency, was chosen as the excitation source (the laser spot diameter was 0.8 mm, ensuring the ANSI limit for medical applications). The sample was placed on a thin layer of ultrasound gel, working as the coupling medium, on top of the ultrasound transducer (UT). Both the produced PA wave, collected by a 50 kHz UT (DYW-50 kHz, Dayu Electric) with a 63 mm diameter and 10% fractional bandwidth, and the laser excitation signal are fed to a homemade lock-in amplifier for measuring the phase delay between these two signals. Finally, the signals of both channels of the lock-in amplifier were digitized by a DAQ card (USB-4716, Advantech) and transferred to the computer for further processing. The MATLAB platform was employed for controlling the system, generating a driving signal for the laser, and further processing the raw data. Given the ultra-low level of the PA signal and its known periodicity, lock-in amplification was used. In this regard, a two-channel homemade lock-in amplifier was used to measure both the amplitude and the phase of the generated signal (with reference to the excitation). For this purpose, the PA signal and the reference signal were multiplied and then filtered to find the amplitude and the phase. In contrast to most previously reported PAVE systems, the primary distinction lies in the modulation method of the excitation laser. We modulated the laser beam electronically, offering compactness, cost-effectiveness, versatile modulation formats, and faster modulation compared to electro-optic and acousto-optic modulators.
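The multiply-and-filter operation of the lock-in can be illustrated in software. The sketch below is a minimal Python illustration of quadrature demodulation, not the actual homemade two-channel hardware (nor the MATLAB processing used here); the sampling rate, cutoff, and synthetic signal are hypothetical values chosen only so the 50 kHz reference is resolvable.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def lockin(sig, fs_hz, f_ref_hz, cutoff_hz=100.0):
        """Digital two-phase lock-in: mix the signal with quadrature
        references at f_ref, low-pass filter, and recover the amplitude
        and phase of the component locked to the reference."""
        t = np.arange(len(sig)) / fs_hz
        i_mix = sig * np.cos(2 * np.pi * f_ref_hz * t)  # in-phase mix
        q_mix = sig * np.sin(2 * np.pi * f_ref_hz * t)  # quadrature mix
        sos = butter(4, cutoff_hz, fs=fs_hz, output="sos")
        i_dc = sosfiltfilt(sos, i_mix).mean()
        q_dc = sosfiltfilt(sos, q_mix).mean()
        amp = 2.0 * np.hypot(i_dc, q_dc)
        phase_deg = np.degrees(np.arctan2(q_dc, i_dc))
        return amp, phase_deg

    # Synthetic PA signal: a 50 kHz tone delayed by 47 degrees, in noise.
    fs, f0 = 1.0e6, 50.0e3
    t = np.arange(100_000) / fs
    sig = 0.01 * np.cos(2 * np.pi * f0 * t - np.radians(47.0))
    sig += 0.05 * np.random.default_rng(0).standard_normal(t.size)
    print(lockin(sig, fs, f0))  # approximately (0.01, 47.0)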
Phantom and sample preparation for PAVE
To verify the system performance, gelatin phantoms mimicking the mechanical properties of biological tissues were prepared at two different concentrations of 10% and 20%. First, the gelatin powders were stirred into the water at room temperature, followed by heating in a water bath for complete dissolution and generation of a uniform solution. By pouring the solution into the molds and waiting for a few hours, the phantoms were ready for the viscoelasticity tests. In addition, to check the system capability, the liver, fat, and muscle of a chicken were considered as biological samples and their viscoelasticity was measured.
In order to assess the mechanical properties of the brain tissues in normal and autistic-like conditions, rats were sacrificed following anesthesia with 100 mg/kg ketamine and 10 mg/kg xylazine; the brains were then removed, and 3 mm-thick slices of the cerebellum and prefrontal cortex were prepared. To slow down tissue degradation and prevent tissue dehydration, acute slices were submerged in carbogenated (95% O2 / 5% CO2) artificial cerebrospinal fluid composed of (in mM): 125 NaCl, 2.5 KCl, 1.5 CaCl2, 1.25 NaH2PO4, 25 NaHCO3, and 10 D-glucose.
Measurement of the brain water content
The wet/dry weight technique was used to calculate the water content of the brain. After sacrifice, the brains were carefully removed from the skull. Then, 100 mg of tissue from the cerebellum and prefrontal cortex was incubated for 24 h in a 60 °C oven, and the initial and final weights were measured. The water content of the various rat brain region samples was expressed according to Eq. 3 [61]:

Water percentage (%) = [(wet weight - dry weight) / wet weight] × 100 (3)
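Eq. 3 translates directly into code; the following one-function Python sketch is included only to make the arithmetic explicit, and the example weights are hypothetical (chosen to land near the cerebellar value reported later).

    def water_percentage(wet_mg: float, dry_mg: float) -> float:
        """Eq. 3: percentage water = (wet - dry) / wet * 100."""
        return (wet_mg - dry_mg) / wet_mg * 100.0

    # Hypothetical example: a 100 mg wet sample drying to 7.5 mg.
    print(water_percentage(100.0, 7.5))  # -> 92.5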
Statistical analysis
The results, depicting water content and PA phase delay, are presented as the mean ± SEM (standard error of the mean). Analysis was conducted using an unpaired t-test, employing GraphPad Prism software. Differences with a significance level of p < 0.05 were considered statistically significant.
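The same comparison can be reproduced outside GraphPad Prism; here is a minimal Python sketch of the unpaired t-test and the mean ± SEM summary. The per-animal water percentages below are hypothetical (the paper reports only group summaries), so the printed numbers are illustrative only.

    import numpy as np
    from scipy import stats

    # Hypothetical per-animal water percentages (n = 4 per group).
    control = np.array([78.2, 79.5, 79.0, 79.7])
    vpa = np.array([92.1, 93.4, 92.0, 92.6])

    t_stat, p_value = stats.ttest_ind(control, vpa)  # unpaired t-test
    print(f"control: {control.mean():.2f} ± {stats.sem(control):.2f} (mean ± SEM)")
    print(f"t = {t_stat:.2f}, p = {p_value:.3g}, significant: {p_value < 0.05}")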
PAVE of phantoms
The viscoelasticity of the gelatin-water mixtures with 10% and 20% concentrations was measured by the proposed PAVE system, and the results are presented in Fig. 4(a). The blue bars show that, by doubling the concentration of the gelatin samples, their PA phase delay dropped significantly from 46.44 ± 0.15 to 36.65 ± 0.23 degrees (a reduction of 21.1%), while the PA amplitudes of the measured signals, illustrated as purple bars, show only a slight variation (24.97 ± 0.07 mV for the sample with 10% concentration vs 25.73 ± 0.07 mV for the 20% concentration gelatin). Fig. 4(b) shows the evaluation of the PA phase delay (blue bars) and PA amplitude (purple bars) for the liver, fat, and muscle of a chicken as biological tissues. It can be seen that, although the PA phase delay is a good parameter for distinguishing these tissues, the variation of the PA amplitude was small and remained at approximately the same level (13.6 ± 0.02 mV for liver, 13.7 ± 0.02 for fat, and 14.2 ± 0.01 for muscle). The liver ranks first with the highest phase delay of 59.22 ± 0.08 degrees, followed by the fat with 54.65 ± 0.07 degrees in second place. The minimum phase delay belongs to the muscle, the stiffest tissue, with 47.32 ± 0.05 degrees, as expected.
PAVE of the rat brain
To assess the possible impacts of autism on the mechanical properties of different regions of the brain, the PA phase delays of the cerebellum and prefrontal cortex of control and autistic-like rats were measured, and the results are depicted in Fig. 5(a). The phase delay of the cerebellum for the autistic-like group is noticeably higher than the value for the control group (48.98 ± 0.11 degrees vs 48.24 ± 0.11 degrees). However, for the prefrontal cortex, the phase delay of the rats with autism is lower than the measured value for the control samples (46.83 ± 0.06 vs 47.35 ± 0.07).
To explore the viscoelasticity properties of the various regions of the brain, mean values of the PA phase delay of the cerebellum and prefrontal cortex were measured for the control and autistic-like groups (Fig. 5(b)). The results show that in both groups the cerebellum has a higher viscoelasticity ratio than the prefrontal cortex, but the difference is not the same. The cerebellum shows a higher phase delay (48.24 ± 0.11 degrees) compared to the prefrontal cortex (47.35 ± 0.07 degrees) in control rats. While the trend is the same for rats with autism, the phase delay of the prefrontal cortex is markedly lower than the value for the cerebellum (46.83 ± 0.06 degrees vs 48.98 ± 0.11 degrees). In other words, the percentage variation of the phase delay from the cerebellum to the prefrontal cortex increased from 1.87% in the control group to 4.39% in the autistic-like group.
Rat brain water content
Prenatal exposure to VPA led to a significant elevation in the water percentage in the 100 mg cerebellum samples (92.54 ± 0.38 vs. 79.1 ± 0.67, p < 0.001) compared to the control. However, for a 100 mg sample of the prefrontal cortex, the water percentage did not change considerably (81.28 ± 1.65 vs. 83.77 ± 2.37, p = 0.4228) (Fig. 6(a)). Next, to investigate whether the water content differs between these two brain areas, the water percentages of the cerebellum and prefrontal cortex within rats of the same group were compared (Fig. 6(b)). In control rats, the prefrontal cortex had a slightly higher water percentage than the cerebellum, with mean values of 83.77 ± 2.37 in the prefrontal cortex vs. 79.1 ± 0.67 in the cerebellum. However, statistical analysis revealed that this was not significant (p > 0.05).
Verification of the system performance
To evaluate the system performance, the PA phase delays of gelatin samples with 10% and 20% concentrations were measured and compared to similar studies in the literature. According to the values provided by Zhao et al., by increasing the gelatin concentration from 4% to 7%, the percentage of change in PA phase delay was around 22% [62]; the results of our setup showed a 21.1% change in PA phase delay upon doubling the gelatin concentration, indicating good agreement with the reported values. Furthermore, the PA phase delays of a chicken's liver, fat, and muscle were assessed, and a change of 12.9% was measured between the PA phase delays of the fat and muscle. Gao et al. employed the PAVE system to investigate the viscoelasticity of the liver, fat, and muscle of a pig [63]; the trend of the PA phase delay of these tissues was similar to our findings. Moreover, the pig's fat had an 8.2% higher phase delay in comparison to the pig's muscle. Given that the tissue composition of chicken and pig, as well as the sample preparation in these two studies, might differ, this small difference seems reasonable.
Water content of brain tissue
The brain regions crucially implicated in ASD pathogenesis include the prefrontal cortex, amygdala, hippocampus, and essentially the cerebellum. Here, we selected the prefrontal cortex and cerebellum, brain regions that are involved in motor learning [64], which is disrupted in ASD [65]. We demonstrated that prenatal exposure to VPA significantly increased the cerebellar water content in VPA-induced autistic-like offspring. In agreement with our results, Deckmann et al. revealed a significant increase in the whole-brain water percentage in the VPA animal model of autism [66]. Furthermore, Kumar et al. have recently shown greatly increased cerebellar permeability [67] and increased water content in the cerebellum [68] of rats treated with prenatal VPA.
As a rule of thumb, there is a direct relationship between water content and tissue viscoelasticity [8]. Interestingly, here we provide the first evidence that the increased water content in the cerebellum of the autistic-like model is accompanied by decreased elasticity.
From another perspective, previous studies demonstrated that white matter stiffness and myelin content exhibit a strong correlation [7], and the brain water level changes during the myelination process. As the brain matures, increasing myelination shows a concomitant decrease in brain water content, as also shown in the cerebellum [69,70]. Moreover, as mentioned previously, there is an interlink between the mechanical properties of the brain, such as viscoelasticity, and brain composition [7] and gray and white matter (myelinated axon) characteristics [6]. However, there is evidence demonstrating delayed myelination in animal models of ASD during development [71]. It can be suggested that the possible delayed myelination in autism would have resulted in the higher water content in the cerebellum of autistic-like offspring.
Since the higher water content reflects the presence of a higher liquid volume in the brain of autistic-like animals, it may be proposed that brain oedema in ASD is possibly a factor leading to larger brain region volumes in autistic patients. In keeping with this claim, MRI findings in ASD individuals have well described the volumetric differences in the total brain [72,73] and numerous subcortical structures, especially the cerebellum [7,8], as compared to typically developing children. In accordance, greater brain volume [73] and enlarged cerebellar volume have been detected in ASD patients.
We also examined the water content in the PFC region. Although the changes were not significant, a reduction in water percentage appeared in the PFC of offspring affected by prenatal exposure to VPA compared to the control group. Bolduc et al. and Limperopoulos et al. have shown that prenatal cerebellar malformation led to a relative reduction in the volume of the prefrontal cortex at age two, possibly the age of onset of ASD [74][75][76]. Given the critical association of the cerebellum and prefrontal cortex and the structural and functional connection between them, these findings support the importance of cerebellum and prefrontal cortex mechanical abnormalities in autism development.
PA viscoelasticity of the brain tissue
As tissue structure and constituents mainly determine its mechanical properties, such as viscoelasticity, assessing the water content can help interpret the results. Mchedlishvili et al. investigated the effect of oedema development in the rabbit's brain tissue, in which they showed that by increasing the water content of the tissue, as a marker of oedema formation, tissue compliance increased considerably [77], meaning a decline in the tissue elastic modulus and therefore a growth in the viscosity-to-elasticity coefficient. Benjamin et al. studied the effect of age on the mechanical properties of various brain regions, including the hippocampus and cortex. They reported that there is a negative correlation between water content and stiffness (elastic modulus) [78], or a direct relationship between water content and viscoelasticity. According to Fig. 6(a), the water content of the cerebellum in the autistic rats is significantly higher than in the control ones, which confirms the larger phase delay for the rats with autism (Fig. 5(a)). The water content of the prefrontal cortex and cerebellum in autistic-like samples (Fig. 6(b)) indicates that the prefrontal cortex is remarkably stiffer than the cerebellum, which justifies the lower viscosity-to-elasticity coefficient for the prefrontal cortex (Fig. 5(b)). The water content of the prefrontal cortex in the control and autistic-like groups, as well as of the prefrontal cortex and cerebellum in control tissues, is not statistically different, while there is a noticeable change in their viscoelasticity properties. A possible explanation is that differences in cellular composition, water, protein, and lipid contents may also lead to variation in mechanical properties, particularly the viscosity-elasticity ratio [79]. MacManus et al. evaluated the shear modulus of various brain regions, including the cerebellum and cortex, of different species such as mouse, rat, and pig, for adolescents and young adults, with the indentation method. They observed that the shear modulus of the rat's cortex was significantly higher than the value for the cerebellum at all ages [79]. Considering the structural similarity between the prefrontal cortex and other cortices, the findings of MacManus et al. can be extended to the shear modulus of the prefrontal cortex and cerebellum. On the other hand, Young's modulus can be calculated from Eq. 4, which represents the relation between shear modulus and Young's modulus [80]:
E = 2G(1 + ν) (4)

where G and ν are the shear modulus and Poisson's ratio, respectively. As the tissue can be considered an incompressible medium due to its high water content, the Poisson's ratio is around 0.5 and the elastic modulus can be approximated as three times the shear modulus. Therefore, it is expected that in control samples Young's modulus for the cerebellum would be lower than for the prefrontal cortex. Fig. 5(b) shows that the cerebellum has a higher phase delay, meaning viscoelasticity, compared to the prefrontal cortex. These results are in good correspondence with the literature, since there is an inverse relation between Young's modulus and the viscosity-to-elasticity coefficient.
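A short Python sketch of this conversion, under the stated incompressibility assumption (ν ≈ 0.5); the example shear modulus is hypothetical.

    def youngs_from_shear(g_pa: float, nu: float = 0.5) -> float:
        """Eq. 4 for isotropic materials: E = 2 * G * (1 + nu).

        With nu ~ 0.5 for nearly incompressible, water-rich tissue,
        E is approximately 3 * G."""
        return 2.0 * g_pa * (1.0 + nu)

    # Hypothetical example: G = 1 kPa implies E ~ 3 kPa.
    print(youngs_from_shear(1000.0))  # -> 3000.0 (Pa)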
Conclusions
To the best of our knowledge, there has been no study investigating the impact of autism induction on the viscoelasticity properties of different brain regions measured by a PAVE system. In the present study, the application of PAVE for acquiring the viscoelasticity of various brain regions, including the cerebellum and prefrontal cortex, under control and autistic-like conditions has been demonstrated. The results of our study reveal that the cerebellum in control samples is noticeably stiffer than in autistic tissues. On the contrary, the viscosity-to-elasticity coefficient of the prefrontal cortex in the control groups was higher than the value measured for the rats with autism. In addition, a comparison between the viscoelasticity of the brain regions showed that the prefrontal cortex had a lower viscoelasticity ratio than the cerebellum for both control and autism samples. Since the mechanical properties of tissues are completely dependent on their composition and structure, the percentage of water content was calculated for the different conditions to support the deduction. There are some requirements that should be considered for sample preparation and that might limit the applications of the PAVE system: the samples should have the same thickness, and their surfaces must be flat to avoid undesired phase deviations due to thickness differences or uneven surfaces. Our study's findings present a novel avenue for investigating autism using a trustworthy, economical, and non-intrusive methodology based on mechanical characteristics. In light of the significance of the mechanical properties of brain tissues in neuroscience research, our suggested approach for quantifying brain tissue viscoelasticity holds the potential for effective utilization.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 4. Bar chart of the PA phase delay and amplitude for (a) gelatin phantoms and (b) chicken tissues.
Fig. 5. (a) PA phase delay of the cerebellum and prefrontal cortex in the control and autistic-like groups. (b) PA phase delay of different regions of the control and autistic-like samples.
Fig. 6. (a) Percentage of water in the prefrontal cortex (PFC) and cerebellum via the wet/dry weight difference. (b) Percentage of water in various rat brain regions in the control and autistic-like groups.
SKIP counteracts p53-mediated apoptosis via selective regulation of p21Cip1 mRNA splicing.
The Ski-interacting protein SKIP/SNW1 functions as both a splicing factor and a transcriptional coactivator for induced genes. We showed previously that transcription elongation factors such as SKIP are dispensable in cells subjected to DNA damage stress. However, we report here that SKIP is critical for both basal and stress-induced expression of the cell cycle arrest factor p21(Cip1). RNAi chromatin immunoprecipitation (RNAi-ChIP) and RNA immunoprecipitation (RNA-IP) experiments indicate that SKIP is not required for transcription elongation of the gene under stress, but instead is critical for splicing and p21(Cip1) protein expression. SKIP interacts with the 3' splice site recognition factor U2AF65 and recruits it to the p21(Cip1) gene and mRNA. Remarkably, SKIP is not required for splicing or loading of U2AF65 at other investigated p53-induced targets, including the proapoptotic gene PUMA. Consequently, depletion of SKIP induces a rapid down-regulation of p21(Cip1) and predisposes cells to undergo p53-mediated apoptosis, which is greatly enhanced by chemotherapeutic DNA damage agents. ChIP experiments reveal that SKIP is recruited to the p21(Cip1), and not PUMA, gene promoters, indicating that p21(Cip1) gene-specific splicing is predominantly cotranscriptional. The SKIP-associated factors DHX8 and Prp19 are also selectively required for p21(Cip1) expression under stress. Together, these studies define a new step that controls cancer cell apoptosis.
Factors that regulate the elongation phase of RNA polymerase II (RNAPII) transcription also play an important role in protecting cells from DNA damage and environmental stress. Global inhibition of transcription elongation activates the p53 tumor suppressor through formation of long single-stranded regions of DNA that recruit RPA and ATR to signal a stress response, even in the absence of DNA damage (Derheimer et al. 2007; Gartel 2008). Transcription elongation is tightly regulated at many induced genes by the positive elongation factor P-TEFb (CycT1:CDK9) (Price 2008; Fuda et al. 2009; Hargreaves et al. 2009). P-TEFb counteracts proteins responsible for RNAPII promoter-proximal pausing (Chiba et al. 2010). As a consequence, p53 is strongly induced in cells treated with P-TEFb/CDK9 inhibitors such as flavopiridol (FP). FP promotes apoptosis through induction of p53 and inhibition of short-lived anti-apoptotic proteins, and is currently in clinical trials as an anti-cancer agent for leukemia and solid tumors (Canduri et al. 2008; Wang et al. 2009). Thus, RNAPII is a genome-wide sensor for DNA damage, through its ability to activate p53 and initiate programmed cell death upon encountering significant blocks to elongation.
The Ski-interacting protein SKIP (Snw1 and NCoA62) is a required transcriptional coactivator for many newly induced genes (Leong et al. 2001, 2004; Zhang et al. 2003; Folk et al. 2004; MacDonald et al. 2004) and counteracts transcriptional repression by retinoblastoma (Prathapam et al. 2002). The SKIP homologs in Saccharomyces cerevisiae (Prp45) and Drosophila (BX42) are essential for cell viability, splicing (Makarov et al. 2002; Gahura et al. 2009), and nuclear export of spliced mRNAs (Farny et al. 2008). Although elongation factors can affect splicing indirectly through changes in the rate of elongation, and defects in cotranscriptional splicing can reduce RNAPII elongation rates in vivo (Kornblihtt 2007; Muñoz et al. 2009; Pirngruber et al. 2009), SKIP is recruited to promoters as well as transcribed regions and appears to play a direct role in each process. We reported previously that SKIP associates with P-TEFb and stimulates HIV-1 Tat transcription elongation in vivo and in vitro (Brès et al. 2005). At the HIV-1 promoter, SKIP recruits c-Myc and also interacts with the MLL1:Menin histone methyltransferase to promote H3K4 methylation (Brès et al. 2009). Previous studies found that SKIP also binds U2AF35, the PPIL1 peptidyl-prolyl isomerase (Xu et al. 2006), and the DExH RNA helicase Prp22 (Gahura et al. 2009), which helps release mRNA from the spliceosome (Schwer 2008). SKIP is required for cell survival and stress resistance in plants (Hou et al. 2009), and depletion of human SKIP or hPrp22 results in mitotic spindle defects and accumulation in prometaphase (Kittler et al. 2004, 2005), indicating an important role in cell cycle progression. We reported previously that neither SKIP nor P-TEFb is needed for stress-induced HIV-1 transcription in vivo (Brès et al. 2009). It is unclear why P-TEFb is dispensable under stress, but it could reflect a loss of RNAPII pause factors or promoter histone modifications, or even locus-wide nucleosome depletion, as observed at heat-shock genes (Petesch and Lis 2008). Similarly, an earlier study found that P-TEFb is not required for p53-induced p21 Cip1 (henceforth called p21) gene transcription in cells subjected to DNA damage (Gomes et al. 2006). These studies suggest that a widespread loss of elongation control may accompany environmental or genotoxic stress, such as that leading to G2/M arrest. In contrast, p21 gene transcription is selectively blocked at the level of elongation in cells exposed to the S-phase arrest agent hydroxyurea (Mattia et al. 2007), indicating that different types of stress have distinct effects on elongation in vivo. Different subsets of p53 target genes specify whether cells will arrest to repair DNA damage or undergo apoptosis (Vazquez et al. 2008; Vousden and Prives 2009). Key p53 target genes in these opposing pathways are the anti-apoptotic G1 cell cycle arrest factor p21 (Abbas and Dutta 2009) and the proapoptotic BH3-only Bcl-2 protein PUMA. The relative levels of these two proteins help to determine the extent of cell survival in response to DNA damage (Iyer et al. 2004). Known transcription factors that impact this balance include c-Myc, which represses p21 without affecting PUMA expression (Seoane et al. 2002; Jung and Hermeking 2009), and the bromodomain protein Brd7, which promotes p53 binding to the p21, but not PUMA, gene (Drost et al. 2010). As a consequence, inhibition of p21 or expression of c-Myc predisposes tumor cells to undergo apoptosis in response to DNA damage.
Interestingly, the pro- and anti-apoptotic p53 target genes contain different types of core promoters and are therefore regulated by different transcription factors (Gomes and Espinosa 2010). In particular, the p21 genes contain high levels of preloaded (poised) RNAPII at the promoter in the absence of DNA damage, which allows for the rapid induction of these genes following p53 activation (Espinosa et al. 2003; Gomes et al. 2006; Morachis et al. 2010). In contrast, RNAPII elongation complexes must assemble de novo at PUMA and other proapoptotic p53 target genes, which delays their expression. Cell growth arrest arising from rapid p21 induction is an initial protective response to DNA damage or oncogene expression. Although the p21 gene is predominantly regulated at the level of transcription, additional factors control its translation, as well as protein and mRNA stability (Abbas and Dutta 2009).
Here we describe an unusual mechanism for p21 gene expression that involves gene-specific splicing by SKIP and is essential for cancer cell survival under stress. In particular, we found that SKIP is critical for splicing and expression of p21, but not for PUMA or other investigated p53 target genes, in human HCT116 (colon cancer) and U2OS (osteosarcoma) cells. SKIP associates with the 3′ splice site recognition factor U2AF65, but not U2AF35, and recruits it to the p21 gene and mRNA in vivo. In contrast, U2AF65 recruitment and splicing at the PUMA gene is independent of SKIP. As a consequence, siRNA-mediated depletion of SKIP induces p53-dependent apoptosis, which is most pronounced in cells subjected to DNA damage. The regulated binding of 3′ splice site recognition factors we observe here is reminiscent of a central feature of alternative splicing, which controls the expression of different isoforms of cell death pathway proteins (e.g., BCL-X and Caspase-9) with distinct or opposing roles in apoptosis (Schwerk and Schulze-Osthoff 2005). Consequently, alternative splicing factors are well-known regulators of p53-dependent and p53-independent apoptosis (Merdzhanova et al. 2008; Kleinridders et al. 2009; Legerski 2009; Moore et al. 2010). Our results reveal that cancer cell survival upon DNA damage also depends on SKIP and associated factors (DHX8 and Prp19), which function as gene-specific regulators of p21 mRNA splicing.
Results
SKIP is essential for p53 stress-induced expression of the p21, but not PUMA, genes

As is observed for many essential proteins, ablation of SKIP by siRNA increases endogenous p53 levels. Immunoblot analysis of extracts from SKIP-depleted U2OS cells revealed a significant increase in the steady-state level of p53, which was phosphorylated at Ser15, a modification that stabilizes the protein (Supplemental Fig. S1A). Levels of the PUMA protein were also elevated, indicating that the induced p53 protein is transcriptionally active. However, we noticed that p21 protein levels were markedly reduced in SKIP knockdown cells compared with cells expressing a control siRNA. To assess whether SKIP plays a role in the normal p53 stress response, endogenous p53 was induced in two human cancer cell lines, U2OS (osteosarcoma) and HCT116 (colon cancer), using the chemotherapeutic DNA damage agents etoposide (U2OS cells) or doxorubicin (HCT116 cells). As expected, DNA damage-induced accumulation of p53 and two of its target genes, p21 and PUMA, was observed in both U2OS (Fig. 1A) and HCT116 (Fig. 1B) cells. Interestingly, p53 levels increased in siRNA-mediated SKIP knockdown cells, and rose further upon exposure of these cells to etoposide or doxorubicin. Consequently, PUMA expression was elevated in SKIP-depleted cells, and increased further with DNA damage (Fig. 1; Supplemental Fig. S1B). In contrast, both basal and stress-induced p21 mRNA levels decreased in SKIP-depleted HCT116 or U2OS cells, compared with cells treated with a control siRNA, accompanied by a strong block to p21 protein expression, as detected by immunoblot (Fig. 1A,B, cf. lanes 1-3 and 4-6). Similar results were obtained using two different SKIP siRNAs (Supplemental Fig. S1B). Taken together, these data suggest that SKIP is critical for p53 induction of the anti-apoptotic gene target p21, but not for the proapoptotic PUMA gene.
SKIP is dispensable for stress-induced transcription of the p21 gene

Numerous transcription and chromatin factors, including c-Myc (Seoane et al. 2002) and p300 (Iyer et al. 2004), are known to differentially affect p53 transactivation of the p21 and PUMA genes in vivo. However, it was surprising to find a role for SKIP in the p53 pathway, because other elongation factors, including P-TEFb and FACT, are dispensable for p21 expression under conditions of stress (Gomes et al. 2006; Gomes and Espinosa 2010). Consequently, we used RNAi chromatin immunoprecipitation (RNAi-ChIP) experiments to examine the block to p21 expression in SKIP-depleted U2OS cells, before and after exposure to etoposide, at the promoter and throughout the coding region (Fig. 2A). The ChIP experiments revealed increased p53 binding to the p21 promoter in SKIP knockdown cells, consistent with the observation that p53 is induced in these cells, and p53 occupancy at the gene increased further following etoposide treatment (Fig. 2B). ChIP analysis of the PUMA gene revealed a similar increase in p53 binding in cells treated with SKIP siRNA, which increased further upon stress induction (Supplemental Fig. S2B). Therefore, the loss of p21 protein expression in SKIP-depleted cells is not due to impaired binding of p53 to its target genes.
Further ChIP analysis revealed that SKIP is present at the p21 gene in the absence of stress, with the highest levels at the promoter and proximal downstream region, but it is also present at lower levels in the coding region, following a pattern similar to that observed for P-TEFb/CDK9. SKIP binding was slightly enhanced by stress and greatly reduced in SKIP knockdown cells (Fig. 2B), consistent with the overall loss of SKIP protein (Fig. 1A). In contrast, only background levels of SKIP were detected at the PUMA gene, and this signal did not change upon SKIP knockdown (Supplemental Fig. S2B). Thus, SKIP associates specifically with the p21, and not PUMA, gene promoters. As reported previously (Espinosa et al. 2003), we detected high levels of RNAPII at the p21 core promoter in the absence of stress, indicative of a paused RNAPII complex, whereas RNAPII occupancy was low at the PUMA promoter but increased strongly following etoposide treatment (Fig. 2B; Supplemental Fig. S2B). Knockdown of SKIP did not affect recruitment of RNAPII, CDK9, or Spt5 at the stress-induced p21 or PUMA genes. Moreover, Ser2-phosphorylated RNAPII levels were unaffected in SKIP knockdown cells, indicating that SKIP is not required for accumulation of active RNAPII elongation complexes within the transcribed region of the p21 (Fig. 2B) or PUMA genes (Supplemental Fig. S2B). Together, the RNAi-ChIP studies indicate that SKIP is selectively recruited to the basal p21 promoter, but is not required for binding of p53 or transcription elongation at the stress-induced p21 gene in vivo.
To confirm that SKIP is not required for transcription under stress conditions, we asked whether nascent unspliced p21 transcripts accumulate in SKIP knockdown cells. Total RNA was isolated from U2OS cells in the presence or absence of etoposide, and was amplified using intron-specific primers for nascent p21 and PUMA transcripts (see the Materials and Methods). Interestingly, primary transcripts derived from the p21 (+540 and +6990 primers) and PUMA (+2460 and +6803 primers) genes increased significantly in SKIP knockdown U2OS cells in the absence of stress, and even more dramatically upon addition of etoposide (Fig. 2C, top row). Virtually identical results were observed in HCT116 cells following doxorubicin treatment (Fig. 2C, bottom row). No significant signals were detected from control PCR reactions programmed with RNA but lacking reverse transcriptase (Supplemental Fig. S2C), indicating that the RNA samples were effectively free of contaminating genomic DNA. We conclude that SKIP is dispensable for stress-induced nascent p21 transcription in vivo.
SKIP is required for pre-mRNA splicing of p21, but not PUMA, transcripts

To determine whether SKIP is required for splicing of p21 mRNA, quantitative RT-PCR (qRT-PCR) reactions using intron-exon and exon-exon junction-specific primers were carried out to measure spliced and unspliced mRNA levels, and the ratio of spliced:unspliced transcripts was then used to gauge splicing efficiency. As shown in Figure 3A, splicing at either the first or second p21 intron was relatively unchanged upon etoposide treatment in cells treated with a control siRNA, but declined significantly (eightfold and 3.5-fold to fourfold, respectively) in SKIP knockdown cells. The drop in splicing efficiency in SKIP knockdown cells was evident in both the presence and absence of DNA damage. Importantly, loss of SKIP did not affect splicing at the PUMA, NOXA, and GADD45 genes, all of which are direct targets of p53 (Fig. 3B). Virtually identical results were obtained in HCT116 cells exposed to doxorubicin (Supplemental Fig. S3A). We conclude that SKIP is important for efficient splicing of both p21 mRNA introns, but does not affect splicing of PUMA or other tested p53-induced transcripts.
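As a rough illustration of how such a spliced:unspliced ratio can be derived from junction-specific qRT-PCR, here is a minimal Python sketch using the standard 2^-ΔCt method; the Ct values and the GAPDH normalizer are hypothetical, and the paper does not spell out its exact quantification scheme.

    def rel_quantity(ct: float, ct_ref: float) -> float:
        """Relative abundance by the 2^-dCt method, normalized
        to a reference transcript (here assumed to be GAPDH)."""
        return 2.0 ** -(ct - ct_ref)

    def splicing_efficiency(ct_spliced: float, ct_unspliced: float,
                            ct_ref: float) -> float:
        """Spliced:unspliced ratio from exon-exon junction and
        intron-exon junction primer pairs, respectively."""
        return rel_quantity(ct_spliced, ct_ref) / rel_quantity(ct_unspliced, ct_ref)

    # Hypothetical Ct values for one sample.
    print(splicing_efficiency(22.0, 26.5, 18.0))  # ~22.6-fold more spliced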
To examine the effects of SKIP knockdown on p21 mRNA stability, qRT-PCR was performed to measure the p21 mRNA half-life in U2OS cells that were transfected with SKIP or control siRNAs for 48 h, followed by treatment with the transcriptional inhibitor actinomycin D for 0, 2, 4, or 6 h (Supplemental Fig. S3B). The results indicate that SKIP has no significant effect on p21 mRNA stability. The SKIP homolog in Drosophila has been shown to promote the export of spliced mRNAs (Farny et al. 2008). To test whether SKIP affects the export of p21 mRNA in human cells, SKIP or control siRNAs were transfected into U2OS cells for 48 h, followed by treatment with etoposide for 18 h. Cells were fractionated into nuclear and cytoplasmic fractions, and mRNA levels were monitored. As shown in Supplemental Figure S3C, depletion of SKIP did not significantly affect export of either p21 or GAPDH mRNAs, indicating that the mRNA export pathway used in mammalian cells under DNA damage conditions is not dependent on SKIP.
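An actinomycin D time course like this one is typically reduced to a half-life by fitting first-order decay to the qRT-PCR signal. The Python sketch below shows that fit; the fraction-remaining values are invented for illustration and are not the paper's data.

    import numpy as np

    # Hypothetical qRT-PCR time course after actinomycin D:
    # time in hours, fraction of the t = 0 mRNA level remaining.
    t = np.array([0.0, 2.0, 4.0, 6.0])
    frac = np.array([1.00, 0.72, 0.50, 0.37])

    # First-order decay: ln(frac) = -k * t; fit k by least squares.
    k = -np.polyfit(t, np.log(frac), 1)[0]
    print(f"k = {k:.3f} /h, half-life = {np.log(2) / k:.2f} h")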
To determine whether SKIP might also affect p21 protein stability, the rate of p21 protein turnover was measured in SKIP knockdown cells in the absence of stress. Forty-eight hours after transfection with control or SKIP siRNA, U2OS cells were treated with cycloheximide (CHX) to prevent new protein synthesis, and the decay of endogenous p21 protein was measured (Supplemental Fig. S3D). The results indicate that SKIP has no significant effect on p21 stability in the absence of stress. The proteasome inhibitor MG132 elevated p21 protein levels in both control and SKIP siRNA-transfected cells (Supplemental Fig. S3E), indicating that it is a short-lived protein subject to active proteolytic degradation under both conditions. Based on these findings, we reasoned that a cDNA encoding p21 should be expressed independently of SKIP in these cells. To assess this possibility, a Flag-tagged p21 cDNA encoding the full-length p21 protein, expressed from a heterologous (CMV) promoter and lacking both introns as well as 5′ untranslated region (UTR) and 3′UTR sequences, was transfected into U2OS cells, and, after 24 h, either control or SKIP siRNAs were transfected into the cells for a further 48 h, followed by treatment with or without etoposide for 18 h. As shown in Figure 3C, the basal and stress-induced endogenous p21 protein levels decreased in SKIP-depleted cells, whereas expression of the larger Flag-p21 hybrid protein was unaffected. Similar results were obtained from p53-null H1299 cells in the absence of stress (Supplemental Fig. S3F). Importantly, the decrease of p21 mRNA and protein levels in SKIP knockdown cells was effectively rescued upon expression of a vector encoding an siRNA-resistant form of SKIP, but not the wild-type (siRNA-sensitive) SKIP (Fig. 3D), indicating that these results are not due to off-target effects. Together, these data indicate that SKIP regulates p21 expression through a unique gene-specific splicing mechanism.
SKIP interacts with and recruits U2AF65 to the p21 gene and mRNA

Although SKIP is required for splicing, the steps it regulates are unclear. SKIP is a component of the activated spliceosome complex; however, the fission yeast homolog of SKIP was shown previously to bind U2AF35, the small subunit of the U2AF 3′ splice site recognition complex, indicating that it might also function at an early step in splicing. To determine whether the human SKIP protein also associates with the U2AF complex, recombinant full-length glutathione-S-transferase (GST)-SKIP was purified and coupled to glutathione-S-sepharose beads for GST pull-down experiments using nuclear extracts from HCT116 cells. Relatively low levels of U2AF35 were recovered in the GST-SKIP pull-down fractions, and this association was disrupted when the beads were treated with RNase A (V Brès and K Jones, unpubl.), indicating that this interaction may be indirect. Interestingly, we observed much stronger binding of the endogenous U2AF65 protein to the GST-SKIP beads (Fig. 4A, left panel), and this association was unaffected by RNase A (V Brès and K Jones, unpubl.). No U2AF65 was recovered in the control GST-bead fraction, indicating that the interaction is specific for SKIP. In reciprocal pull-down experiments, GST-U2AF65 bound avidly to nuclear U2AF35 and SKIP, whereas none of these factors bound to GST alone (Fig. 4A, middle panel, cf. lanes 7 and 8). Interestingly, SKIP was not detected in GST-U2AF35 pull-down fractions (Fig. 4A, right panel), which otherwise contained high levels of nuclear U2AF65. To examine this association further, reciprocal coimmunoprecipitation experiments were performed with U2OS whole-cell lysates. As shown in Figure 4B, both SKIP and U2AF35 coimmunoprecipitated with U2AF65 (left panel), whereas U2AF65, but not U2AF35, was recovered in the SKIP immunoprecipitate (right panel). These results indicate that SKIP interacts with U2AF65 independently of U2AF35.
Based on these findings, we next used RNAi-ChIP experiments to analyze whether SKIP is responsible for cotranscriptional recruitment of mRNA splicing factors at the p21 gene. Interestingly, U2AF65 occupancy within the coding region of the p21 gene decreased significantly in SKIP knockdown cells (Fig. 4C, left panel). In contrast, loss of SKIP had no effect on binding of U2AF65 to the PUMA gene (Fig. 4C, center panel). Steady-state U2AF65 protein levels were unaffected in SKIP-depleted cells, as measured by immunoblot (Fig. 4C, right panel). Unfortunately, we were unable to monitor U2AF35 occupancy at the p21 gene due to lack of a suitable antibody. We conclude that SKIP is required for stable binding of U2AF65 at the p21, but not PUMA, genes in vivo.
These data strongly suggest that SKIP regulates cotranscriptional loading of U2AF65 and splicing at both introns of the p21 gene, and that spliceosomal complexes formed in the absence of SKIP may be unable to splice p21 mRNAs whether on or off of the gene. To examine U2AF65 binding to p21 mRNA directly, RNA immunoprecipitation (RNA-IP) experiments were carried out in U2OS cell extracts. As shown in Figure 4D, high levels of the p21 transcript were recovered in SKIP antibody, and not control immunoglobulin G (IgG), immunoprecipitates. Importantly, the SKIP immunoprecipitation (SKIP-IP) fractions contained significantly higher levels of unspliced (detected with primers III-VI) than spliced (detected with primers I-II) transcripts. Furthermore, the level of unspliced transcript bound to SKIP declined greatly in SKIP knockdown cells, whereas the low background level of spliced mRNA in the SKIP-IP fraction was unaffected, indicating that this latter signal is nonspecific. Because the mRNA in these experiments was not sonicated, it was not possible to localize the position of SKIP binding in these experiments. Thus, the higher signal detected with primer III likely reflects an increased efficiency in binding to p21 mRNA. Interestingly, SKIP also bound to PUMA mRNA introns. Thus, SKIP binds preferentially to introns, presumably as part of the spliceosome complex, but does not discriminate between the p21 and PUMA mRNAs. Therefore, SKIP selectivity in splicing is likely conferred by its ability to bind to the core promoter and recruit U2AF65 cotranscriptionally to the p21 gene and mRNA.
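RNA-IP enrichment of this kind is conventionally expressed as percent of input, as in the Figure 4 legend. A minimal Python sketch of that standard calculation follows; the 10% input fraction and the example Ct values are assumptions, since the actual input fraction is not stated here.

    import math

    def percent_input(ct_ip: float, ct_input: float,
                      input_fraction: float = 0.10) -> float:
        """Percent-of-input from qRT-PCR Ct values: adjust the input
        Ct for the fraction of lysate saved as input, then convert
        the Ct difference to a linear ratio (PCR doubles per cycle)."""
        ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
        return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

    # Hypothetical: IP Ct of 28 vs a 10%-input Ct of 24.
    print(f"{percent_input(28.0, 24.0):.2f} % of input")  # ~0.63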
Promoter-proximal intron splicing is strongly influenced by 5′-mRNA capping (Lewis et al. 1996), and, consequently, we asked whether SKIP affects loading of the mRNA cap-binding protein CBP80. As shown in Figure 4E, p21, PUMA, and GADD45 mRNAs were efficiently recovered in CBP80 immunoprecipitates, and ablation of SKIP had no effect on CBP80 binding to either the spliced or unspliced mRNAs. In contrast, U2AF65 bound preferentially to the unspliced mRNAs. Most interestingly, the binding of U2AF65 to unspliced p21 mRNA was largely abolished in si-SKIP-treated cells (Fig. 4E), consistent with the ChIP results, whereas U2AF65 binding to the PUMA or GADD45 mRNAs was only modestly affected in SKIP knockdown cells. To assess whether U2AF65 is required for expression of the p21 and PUMA genes, mRNA and protein levels for these genes were analyzed in cells transfected with si-U2AF65. Knockdown of U2AF65 significantly reduced pre-mRNA splicing (Supplemental Fig. S4A) and protein expression (Supplemental Fig. S4B) of both p21 and PUMA mRNAs in control and DNA-damaged cells, confirming its role as a general splicing factor. These data indicate that SKIP binds to introns at both target and nontarget mRNAs, and is required for binding of U2AF65 to p21 mRNA. Although U2AF65 is also required for splicing of PUMA mRNA, it is recruited to the gene and mRNA independently of SKIP.
SKIP is also required for p21 induction by Nutlin3a or TGF-β signaling

[Figure 4 legend, continued: (D) RNA-IP analysis of binding of the SKIP protein to p21 unspliced or spliced mRNA. U2OS cells were transfected with control or SKIP siRNA for 48 h. RNA samples were purified from nonprecipitated cellular lysates (input), or extracts precipitated with control IgG or SKIP antibody. Immunoprecipitated p21 mRNA was detected using qRT-PCR with the indicated primers. Values were expressed as percentage of input RNA. Error bars represent the standard deviation obtained from three independent experiments. (E) RNA-IP analysis of binding of CBP80 or U2AF65 to p21, PUMA, or GADD45 unspliced or spliced mRNA. Experiments were performed as in D. The primers used for detecting p21 transcripts were primer IV (unspliced) and primer I (spliced) as in D. The primers used for detecting PUMA or GADD45 transcripts were the same as in Figure 3B.]

To assess whether SKIP-regulated p21 gene expression is restricted to conditions of stress, we used the nongenotoxic drug Nutlin3 to activate p53 and induce p21 gene expression in U2OS cells. Nutlin3 disrupts binding of p53 to the HDM2 ubiquitin ligase, and therefore can stabilize p53 in the absence of stress. As shown in Figure 5A, Nutlin3 induced p53 activation of several downstream target genes, including p21, PUMA, and HDM2. Nutlin3-induced expression of PUMA and HDM2 was further increased in si-SKIP cells, while the induction of p21 was strongly suppressed. Thus, SKIP is required for p53-induced p21 expression, irrespective of DNA damage. To address whether SKIP regulation depends on the activator, p21 induction was studied in the human breast cancer cell line MDA-MB-231, which expresses a mutant p53 protein, treated with the anti-mitogenic cytokine transforming growth factor-β (TGF-β). In these cells, p21 mRNA was induced rapidly in response to TGF-β signaling, and mRNA levels peaked 4 h after induction (Fig. 5B). Addition of TGF-β did not affect the mutant p53 protein levels (Fig. 5C). Strikingly, this increase of p21 mRNA and protein was completely abolished in SKIP knockdown cells (Fig. 5B,C). These findings in H1299 (p53-null) cells were compared with two cell lines that are deficient for p53 signaling: HeLa (p53 inactivated by the E6 protein of HPV-18) and HCT116 p53−/− (p53 gene deleted by homologous recombination). In all of these cells, loss of SKIP gave rise to a strong inhibition of endogenous p21 mRNA and protein expression (Fig. 5D). In the absence of stress, SKIP likely affects both p21 transcription elongation and splicing. Together, these findings highlight the general role for SKIP as a critical regulator of p21 expression.
SKIP is an essential cancer cell survival factor that counteracts DNA damage-induced apoptosis

The observation that SKIP is critical for p21, but not PUMA, gene expression indicates that loss of SKIP should predispose cells to undergo p53-dependent apoptosis. To test this directly, HCT116 cells were transfected with SKIP siRNA or control siRNA for 48, 72, and 96 h. The cells were collected, and the percentage of cells in each phase of the cell cycle was quantified by flow cytometric analyses. As shown in Figure 6A, knockdown of SKIP did not lead to cell cycle arrest at the G1, S, or G2/M phase of the cell cycle. Rather, the SKIP-depleted cells were subjected to massive DNA fragmentation and apoptosis, as measured by the sub-G1 DNA content, with >70% cell death at 96 h following transfection of SKIP siRNA. Next, we asked whether SKIP depletion can induce apoptosis in the isogenic HCT116 p53−/− cell line. As observed in the HCT116 parental cells, the cell cycle progression of the SKIP-depleted cells was similar to that of the cells transfected with control siRNA. However, cell death triggered by knockdown of SKIP was largely attenuated, but not absent, in the HCT116 p53−/− cells, with the percentage of cells in the sub-G1 fraction reduced to 25% after 96 h of treatment with SKIP siRNA (Fig. 6A, bottom panel). The expression of endogenous SKIP was identical in these two cell lines, whereas both p21 and PUMA protein levels were higher in the HCT116 parental cells compared with the p53-null cells (Fig. 6A, right panel). Detailed quantification of the effect of si-SKIP on the cell cycle is presented in Supplemental Table 1. We conclude that SKIP is required for cancer cell survival through its role in p21 expression, which counteracts p53-mediated apoptosis.
The observation that SKIP remains essential for p21 protein expression even under conditions of stress led us to ask whether loss of SKIP sensitizes cells to apoptosis induced by chemotherapeutic DNA damage agents. Therefore, HCT116 cells were treated either with si-control or si-SKIP RNA, and, 48 h after transfection, the cells were treated with UVC or 5-FU for a further 24 h. FACS analysis of these cells revealed that apoptosis induced by UVC or 5-FU treatment was much higher in cells containing reduced levels of SKIP (Fig. 6B, left panel). Immunoblots were also used to monitor the protein levels of SKIP, p53, PUMA, and p21 in these experiments (Fig. 6B, right panel), and confirmed that p21 expression remains SKIP-dependent under UVC and 5-FU stress conditions. These findings indicate that SKIP loss strongly augments chemotherapy-induced cell killing.
Conversely, we also asked whether ectopic expression of SKIP would render cells resistant to p53-mediated apoptosis. To address this question, HCT116 cells were engineered to stably express a V5-tagged SKIP protein (HCT116-SKIP). HCT116 and HCT116-SKIP cells were treated with either UVC or 5-FU for 48 h, and apoptosis was monitored by FACS sorting. Strikingly, HCT116-SKIP cells were much more resistant to DNA damage-induced cell death (Supplemental Fig. S5A). However, immunoblot analysis of protein expression indicates that activation of p53 is significantly impaired in these cells, and, consequently, the mechanism is distinct from that observed in SKIP knockdown cells. Similar results were observed in HCT116 cells that overexpress SKIP through transient expression (Supplemental Fig. S5B). Thus, excessively high levels of SKIP may inactivate factors that are normally required for p53 activation. Taken together, these results suggest that SKIP is critical for cell viability, and that changes in SKIP expression can strongly modulate the cell response to DNA damage.
The anti-apoptotic function of SKIP is primarily due to its ability to regulate p21 expression

Together, these findings suggest that SKIP depletion sensitizes cells to undergo apoptosis through its ability to prevent p21 expression. To test this model, we asked whether knockdown of SKIP affects apoptosis in HCT116 p21−/− cells, which lack the p21 protein and are more prone to undergo apoptosis in response to DNA damage. Although p53 was induced more strongly in 5-FU-treated HCT116 cells, levels of the anti-apoptotic p21 protein were also much higher in these cells than in the SKIP knockdown cells (Fig. 6C, cf. lanes 2 and 3), and consequently, the overall extent of apoptosis was comparable in 5-FU-treated and SKIP-depleted cells. Knockdown of SKIP in the 5-FU-treated cells resulted in high levels of p53 and low levels of p21, further enhancing apoptosis (Fig. 6C, lane 4). In contrast, in the HCT116 p21−/− cells, basal p53 levels are higher (Fig. 6C, lane 5), and increase further upon exposure to 5-FU (Fig. 6C, lane 6), but only modestly, if at all, in the si-SKIP-treated cells (Fig. 6C, lane 7). Consequently, 5-FU treatment increases apoptosis more readily in HCT116 p21−/− cells (Fig. 6C, lane 6), whereas apoptosis is only modestly increased upon SKIP depletion (Fig. 6C, lane 7), consistent with the fact that p53 levels are only marginally higher in these cells. Moreover, 5-FU-mediated apoptosis was not enhanced further by SKIP knockdown in the HCT116 p21−/− cells (Fig. 6C, lane 8). Thus, the enhanced apoptosis seen in SKIP-depleted cells is predominantly linked to down-regulation of p21 expression, which appears to be a major target for SKIP in HCT116 cells, whereas 5-FU-induced cell death is linked to the strong induction of p53.
In addition, we asked whether overexpression of Flag-p21 could block the apoptotic effect of SKIP knockdown in HCT116 cells. As shown in Figure 6D, expression of the Flag-p21 protein significantly reduced cell death induced by depletion of SKIP (cf. lanes 3 and 4) or treatment with 5-FU (cf. lanes 5 and 6), as well as the enhanced level of apoptosis observed in cells exposed to both 5-FU and SKIP-siRNA (cf. lanes 7 and 8). The expression of SKIP, p53, and PUMA under these different experimental conditions was monitored by immunoblot (Fig. 6D, bottom panel), and confirmed that ectopic p21 blocks apoptosis without influencing expression of any of these factors, presumably through induction of cell cycle arrest. Together, these findings indicate that the primary mechanism by which SKIP controls p53 apoptosis is through its ability to regulate p21 expression.
The SKIP-associated factors DHX8 and Prp19 are also selectively required for p21 splicing

Although SKIP has been shown to regulate the catalytic step in splicing as a component of the activated spliceosome, our findings indicate that it also functions at an earlier step to regulate loading of U2AF65 at the p21 gene. Consequently, we wondered whether other SKIP-interacting splicing factors also control p21 gene-specific splicing. Previous studies have shown that SKIP interacts with DHX8 (hPrp22), the human homolog of a yeast RNA helicase implicated in branch point recognition and removal of the spliceosome from the transcript (Gahura et al. 2009), and both proteins were detected in a genome-wide RNAi screen for factors required for mitotic progression through prometaphase (Kittler et al. 2004). Within the spliceosome, SKIP also associates with Prp19 complex proteins (Wahl et al. 2009). Interestingly, siRNA-mediated knockdown of human DHX8 or Prp19 led to a selective down-regulation of splicing of p21 transcripts, without affecting splicing of PUMA or NOXA mRNAs (Fig. 7A, left panel); a corresponding decline in p21 protein expression was also evident by immunoblot (Fig. 7A, right panel). Moreover, RNA-IP analysis established that U2AF65 loading on p21 mRNA is strongly reduced in the DHX8 knockdown cells (Fig. 7B). These findings indicate that other spliceosome components also function selectively in p21 expression, and contrast with siRNA knockdown of U2AF65, which disrupts splicing of both p21 and PUMA mRNAs. We conclude that a subset of SKIP-associated spliceosomal proteins is not universally required for splicing under stress, but rather functions in a gene-specific manner to regulate cotranscriptional p21 mRNA splicing.
Discussion
The CDK inhibitor p21 is a potent cell cycle arrest factor that counteracts p53-dependent apoptosis and predisposes cells to undergo differentiation or cellular senescence. Transcriptional induction of the p21 gene plays a central role in TGF-β/SMAD-mediated G1 cell cycle arrest, as well as in DNA damage/p53-induced inhibition of cell division. Conversely, the p21 gene is transcriptionally repressed by c-Myc to override the cell cycle checkpoint and promote proliferation. Here, we show that basal and stress-induced p21 expression requires the SKIP/SNW1 transcription elongation and splicing factor. These results were unexpected, given that p53 induction of p21 does not require the P-TEFb elongation factor (Gomes et al. 2006), and that neither P-TEFb nor SKIP is required for stress-induced HIV-1 transcription (Brès et al. 2009). RNAi-ChIP experiments revealed that SKIP is not required for p53 binding or accumulation of Ser2P-RNAPII in the body of the p21 gene, and qRT-PCR analysis with intron-specific primers confirmed that it is not needed for nascent p21 transcription. Thus, SKIP, like P-TEFb, is dispensable for transcription at the p21 gene in cells exposed to DNA damage, supporting the idea that elongation control is lost in cells subjected to DNA damage. At heat-shock genes, P-TEFb predominantly affects 3′-mRNA end processing, rather than promoter-proximal elongation (Ni et al. 2004). At the stress-induced HIV-1 promoter, transcription is accompanied by loss of typical histone modifications, including trimethylation of H3K4 (H3K4me3) and H2BUb, and activation of heat-shock genes is preceded by widespread nucleosome depletion (Petesch and Lis 2008). Thus, profound changes in chromatin structure may alleviate the need for elongation factors under DNA damage.
SKIP selectively regulates p21 pre-mRNA splicing under stress
To define the block to p21 expression, qRT-PCR experiments were carried out using intron-exon and exon-exon junction primers to monitor the level of spliced mRNA, which revealed a strong block to splicing at both p21 introns in SKIP knockdown cells. Following an earlier report that the fission yeast SKIP homolog associates with the U2AF recognition factor, we discovered that human SKIP interacts with U2AF65, the polypyrimidine tract-binding factor required for 3′ splice site recognition. Interestingly, SKIP appears to recognize U2AF65 independently of the small U2AF subunit U2AF35. ChIP studies revealed that SKIP recruits U2AF65 to the p21 gene, and RNA-IP experiments indicated that it is also required for U2AF65 binding to the mRNA. However, SKIP is not required for splicing or binding of U2AF65 to the PUMA gene and mRNA. The RNA-IP experiments further revealed that SKIP preferentially associates with introns rather than exons, presumably as part of the spliceosome; however, it is present at both target (p21) and nontarget (PUMA) mRNAs. In contrast, SKIP is present at the p21, but not PUMA, genes both before and after DNA damage, indicating that the specificity is determined by the core promoter.

[Figure 7 legend: DHX8 and Prp19 selectively regulate p21 mRNA splicing, and DHX8, like SKIP, is required for binding of U2AF65 to p21 unspliced mRNA. (A, left) qRT-PCR was used to monitor the ratio of unspliced to spliced p21, PUMA, or NOXA mRNAs. U2OS cells were transfected with control, SKIP, DHX8, or Prp19 siRNA for 48 h, and incubated in the presence or absence of etoposide (20 µM) for the indicated times. (Right panel, lanes 1-6) Protein lysates were subjected to immunoblot analysis. (B) RNA-IP analysis of binding of U2AF65 to p21 or PUMA unspliced or spliced mRNA. U2OS cells were transfected with control, SKIP, or DHX8 siRNA for 48 h. RNA samples were purified from nonprecipitated cellular lysates (input) or extracts precipitated with U2AF65 antibody. Immunoprecipitated p21 transcript was detected using qRT-PCR with the primers used in Figure 4E. Values are expressed as percentage of input RNA. Error bars represent the standard deviation obtained from three independent experiments. (C) Model for the role of SKIP in the regulation of p21 gene-specific splicing.]
Previous studies have shown that the p21 promoter, like the HIV-1 promoter, contains high levels of paused RNAPII prior to induction, whereas the PUMA promoter assembles the RNAPII transcription complex de novo upon gene activation (Gomes and Espinosa 2010). Many transcription factors discriminate between these two promoter types, including p300 and c-Myc, which activate and repress p21 expression, respectively, without affecting PUMA gene transcription (Seoane et al. 2002; Iyer et al. 2004). ChIP experiments show that SKIP binds to the p21 gene with a pattern similar to that observed for P-TEFb, peaking at the core promoter and proximal region. The absence of SKIP at the PUMA gene establishes that it is not recruited through p53. It is unclear how SKIP is recruited to the p21 gene; however, we showed previously that it is recruited to the basal HIV-1 promoter via the H2B ubiquitin ligase hRNF20 (Shema et al. 2008). It will be interesting to learn whether any other transcription or chromatin regulators at the p21 promoter also affect splicing and cotranscriptional loading of U2AF65. We did not observe any effect of SKIP on p21 mRNA or protein stability or mRNA export; however, it remains possible that it could affect p21 translation, which depends on cotranscriptional loading of the CUGBP1 5′UTR factor (Iakova et al. 2004). Translation could also be affected by mRNA-capping defects, although we did not detect any defect in binding of the cap-binding protein CBP80 to either p21 or PUMA mRNA.
Evidence that p21 splicing is cotranscriptional
Although it is widely recognized that elongation factors can indirectly affect mRNA splicing patterns through changes in the rate of nascent transcription, SKIP appears to directly affect each process. SKIP is an essential factor in many organisms (Folk et al. 2004), and studies of the S. cerevisiae (Prp45) or Drosophila (BX42) homologs have focused mainly on its roles in mRNA splicing, spliceosome assembly, and export of spliced mRNAs (Farny et al. 2008). Consequently, it was not surprising to find a role for SKIP in splicing of the p21 gene. What is remarkable is the gene-specific activity of SKIP under stress, where it is dispensable for splicing of many p53 target genes, including PUMA, GADD45, and NOXA. Moreover, SKIP differentially affects p21, and not PUMA, expression even in the absence of stress, and therefore appears not to be universally required for splicing in human cells. Our data indicate that SKIP is required to load U2AF65 onto the p21 gene and mRNA, and we found no evidence for selective binding of SKIP to the p21 mRNA, indicating that splicing is predominantly cotranscriptional in this case. This conclusion is consistent with recent studies showing that RNAPII undergoes pausing and release, accompanied by changes in RNAPII phosphorylation, at 3′ splice sites, and that cotranscriptional splicing may be widespread in yeast (Alexander et al. 2010; Oesterreich et al. 2010); it is also consistent with studies showing that the SC35 splicing factor can affect RNAPII elongation and pausing (Xiao et al. 2008). These studies also raise the question of whether 3′ splice site recognition factors might also play a role in promoter-proximal pausing at some genes. In addition, although the p21 intron sequences appear to conform to the consensus, it is possible that the intron also contributes to SKIP-dependent binding of U2AF65. Unfortunately, the p21 reporter genes that we tested are not responsive to stress, and therefore it is unclear whether the p21 promoter is sufficient to confer SKIP-dependent splicing on a heterologous intron. We also show that two SKIP-associated splicing factors, DHX8 (hPrp22) and Prp19, selectively regulate p21 splicing under stress conditions. The yeast homolog of DHX8, Prp22, promotes the second catalytic step of splicing at nonconsensus splice sites (Gahura et al. 2009), and is also involved in mRNA release from the spliceosome (Schwer 2008). Thus, a subset of splicing factors may function with SKIP to control cotranscriptional loading of U2AF65 at target genes.
Interestingly, we found that SKIP selectively associates with U2AF65, and not with its heterodimeric partner, U2AF35. In this respect, SKIP resembles certain other regulatory factors, including the Wilms' tumor protein, which binds selectively to U2AF65 and not U2AF35 (Davies et al. 1998). In contrast, the histone H3.3 chaperone and oncogene DEK (Sawatsubashi et al. 2010) regulates the 3′ splice site checkpoint through selective binding to U2AF35 (Soares et al. 2006) and not U2AF65. Binding of DEK to U2AF35 confers its specificity for the 3′-AG dinucleotide, and is required for U2AF35 binding at selected introns (Soares et al. 2006). In addition, the transcription coregulator SNIP1, which controls CycD1 expression and cell cycle progression (Bracken et al. 2008) as well as c-Myc stability and transactivation (Fujii et al. 2006), functions to recruit U2AF65 and other RNA processing factors to the 3′ end of the CycD1 gene and mRNA to control mRNA stability. Interestingly, substoichiometric amounts of SKIP were detected in the SNIP1 RNA processing complex (Bracken et al. 2008). SNIP1 is also required for p53 expression and ATR substrate phosphorylation (Roche et al. 2007). Because SNIP1 inhibits TGF-β signaling (Kim et al. 2000), opposite to the role of SKIP, it will be interesting to examine whether competition for U2AF65 might influence RNAPII pausing and elongation. SKIP also associates with the MLL1:Menin histone methyltransferase and is required for H3K4 methylation at the HIV-1 promoter (Brès et al. 2009), indicating that it may, like DEK, provide a link between splicing and chromatin. In this respect, it is interesting that chromatin modifications can directly impact splicing specificity (Luco et al. 2011). Cotranscriptional loading of the U2 snRNP complex has been shown to depend on H3K4me3 and the Chd1 chromatin remodeling factor (Sims et al. 2007), as well as on SAGA/Gcn5 acetylation of histone H3 (Gunderson and Johnson 2009), and it will be interesting to learn whether SKIP might also regulate splicing through changes in chromatin structure.
SKIP is an essential cancer cell survival factor
We show here that ablating SKIP expression results in p53-mediated apoptosis of HCT116 colon cancer or U2OS osteosarcoma cells. In contrast, SKIP knockdown in HeLa cells, which lack a functioning p53 pathway, results in G2/M arrest in prometaphase (Kittler et al. 2004, 2005). Thus, the p53 pathway appears to be a prime target for SKIP in colon cancer cells. Although SKIP may regulate splicing at many genes, the observation that HCT116 p21−/− cells are largely insensitive to apoptosis by si-SKIP, and that Flag-p21 overexpression is sufficient to block apoptosis in wild-type HCT116 cells, strongly indicates that p21 is a major target for SKIP in these cells. Numerous studies have also identified potent anti-apoptotic roles for various splicing factors in the control of alternative splicing, which commonly regulate splice site choice through effects on binding of the U2AF65:35 complex (Chen and Manley 2009). Alternative splicing of Bcl2 mRNAs regulates the balance of expression of pro- and anti-apoptotic family members, and also mediates the differential expression of various death receptors, death ligands, and caspases. Splicing factor activity is subject to inhibition by stress, with different effects on the constitutive and alternative splicing pathways (for review, see Giul and Cáceres 2007; Biamonti and Cáceres 2008). We observed previously that ectopic expression of the SKIP SNW domain strongly favors the use of the HIV-1 A3 splice acceptor site (Brès et al. 2005), indicating that it might also play a role in alternative splicing. Consequently, it will be important to determine whether SKIP regulates the alternative splicing pattern of genes involved in apoptosis, and, similarly, whether alternative splicing factors, including the SR proteins, regulate cotranscriptional p21 gene-specific splicing under stress conditions. Taken together, our findings indicate that inhibitors of SKIP could be of therapeutic benefit by augmenting DNA damage chemotherapy-induced apoptosis. As with certain other short-lived anti-apoptotic factors, SKIP levels decline in cells treated with the CDK inhibitor FP (V. Brès and K. Jones, unpubl.), which has shown clinical benefit in leukemia and as a combination chemotherapy for colon cancer. Importantly, we show here that apoptosis associated with SKIP ablation is greatly enhanced when combined with DNA damage agents that further induce p53 levels, such as 5-FU and UV (Fig. 7). Thus, SKIP and associated enzymes that control p21 splicing, such as DHX8 and Prp19, may be useful anti-cancer targets, as would be small molecule inhibitors that selectively block the protein-protein interactions needed to recruit U2AF65 to the p21 gene. Further studies on the mechanism of SKIP-regulated p21 mRNA splicing, and identification of other factors that control this step, may suggest new approaches to enhance chemotherapy-induced cell killing.
Materials and methods

Plasmids, siRNAs, drugs, and antibodies
Mammalian expression constructs of human pV5-SKIP and pFlag-SKIP were generated by subcloning SKIP cDNA into pcDNA6 (Invitrogen) and pCMV-Tag2 (Stratagene) vectors, respectively. Human Flag-p21 was obtained from Addgene (plasmid no. 16240). The bacterial expression construct encoding full-length SKIP was described previously (Brès et al. 2009). For rescue experiments, an siRNA-resistant vector was prepared by site-directed mutagenesis using the primer 5′-AATCTGGACAAGGACATGTATGGCGACGATCTCGAAGCCAGAATAAAGACCAACAG-3′ with substituted nucleotides (underlined). The resultant cDNA fragment replaces the original nucleotide sequence targeted by SKIP siRNA without changing the amino acid sequence, and was subcloned into the pCMV-Tag2 vector. The mutations were confirmed by sequence analysis. Synthetic dsRNA oligonucleotides targeting SKIP, U2AF65, and CDK9 were purchased from Ambion and are listed in Supplemental Table S2. Etoposide, doxorubicin, Nutlin3, 5-FU, CHX, actinomycin D, and MG132 were purchased from Sigma, and TGF-β was obtained from R&D Systems. The antibodies for Western blots, ChIP, and RNA-IP are listed in Supplemental Table S3.
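The essential check for such a rescue construct is that the substitutions are synonymous. The short Python sketch below (using Biopython, with hypothetical placeholder sequences rather than the actual SKIP cDNA or primer) illustrates that check: wild-type and mutant coding sequences must translate to the same protein.

```python
# Sketch: confirm that an siRNA-resistant rescue construct carries only
# synonymous (silent) substitutions. The sequences are hypothetical
# placeholders, not the real SKIP sequences.
from Bio.Seq import Seq

wt_cds  = Seq("ATGGACAAGGATATGTACGGAGATGAT")  # placeholder wild-type codons
mut_cds = Seq("ATGGACAAGGACATGTATGGCGACGAT")  # placeholder wobble-position changes

assert len(wt_cds) % 3 == 0 and len(mut_cds) % 3 == 0
if wt_cds.translate() == mut_cds.translate():
    print("silent substitutions only; protein:", wt_cds.translate())
else:
    print("WARNING: the substitutions alter the amino acid sequence")
```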
Cell cycle and apoptosis analysis
Cells were plated in 100-mm dishes and treated with the siRNAs, UV, or 5-FU. At the indicated time points, cells were trypsinized, washed with phosphate-buffered saline (PBS), and fixed in 70% ethanol overnight at 4°C. After being washed with PBS, cells were incubated with propidium iodide (PI)/RNase staining buffer (BD Bioscience) for 15 min at room temperature. Cell distribution across the cell cycle was analyzed with FACScan (Becton Dickinson) and CellQuest software.
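For readers reproducing this kind of analysis outside CellQuest, the sub-G1 (apoptotic) fraction is simply the proportion of events below the G1 peak in the PI histogram. A minimal Python sketch with mock data and assumed gate positions (real gates would be set from the measured G1/G2 peaks) follows.

```python
import numpy as np

# Sketch (assumed gates, not the authors' pipeline): cell-cycle fractions
# from a propidium-iodide (PI) fluorescence histogram. `pi` holds
# per-event PI intensities; gate boundaries are hypothetical channels.
rng = np.random.default_rng(0)
pi = np.concatenate([rng.normal(200, 15, 6000),    # G1 peak (mock data)
                     rng.normal(400, 25, 2000),    # G2/M peak
                     rng.uniform(50, 180, 1500)])  # sub-G1 (apoptotic debris)

sub_g1_gate = 170                  # below the G1 peak -> fragmented DNA
g1_gate, g2_gate = (170, 260), (340, 470)

frac = lambda lo, hi: np.mean((pi >= lo) & (pi < hi))
print(f"sub-G1: {np.mean(pi < sub_g1_gate):.1%}")
print(f"G1:     {frac(*g1_gate):.1%}")
print(f"G2/M:   {frac(*g2_gate):.1%}")
```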
GST pull-down experiments
GST fusion constructs were expressed in BL21 Escherichia coli cells, and crude bacterial lysates were prepared by sonication in GST lysis buffer (25 mM Tris at pH 7.5, 150 mM NaCl, 1 mM EDTA, protease inhibitor). Approximately 10 µg of the appropriate GST fusion protein was incubated with precleared HCT116 nuclear extract for 2 h at 4°C. Then 30 µL of glutathione-Sepharose beads was added to the binding reaction and mixed for another 1 h at 4°C. The beads were washed four times with the above GST lysis buffer, separated by 10% SDS-PAGE, and analyzed by Western blotting.
Subcellular fractionation, qRT-PCR, and ChIP
Cell fractionation was performed using the PARIS kit (Ambion) according to the manufacturer's instructions. Total RNAs were isolated using Trizol and subjected to DNaseI treatment prior to reverse transcription using random hexamers and SuperScript III reverse transcriptase (Invitrogen). The resulting cDNAs were subjected to qPCR with the indicated primer sets (Supplemental Table S4). Values were normalized to those of GAPDH. ChIP assays were performed essentially as described previously (Brès et al. 2009). Briefly, cells were fixed with 1% formaldehyde, and then whole-cell lysates were prepared. Protein lysate was subjected to ChIP with the indicated antibodies (Supplemental Table S3), followed by DNA purification. ChIP-enriched DNA was analyzed by qPCR with the indicated primer sets (Supplemental Table S5).
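The normalization to GAPDH follows the standard 2^(−ΔΔCt) relative-quantification scheme. A minimal Python sketch with hypothetical Ct values (not taken from the paper) is shown below.

```python
# Sketch of the standard 2^-ddCt step used when normalising a target
# transcript to GAPDH. All Ct values here are hypothetical.
ct = {
    "p21_ctrl": 27.8, "GAPDH_ctrl": 18.2,   # untreated cells
    "p21_etop": 24.1, "GAPDH_etop": 18.3,   # after DNA damage
}
dct_ctrl = ct["p21_ctrl"] - ct["GAPDH_ctrl"]
dct_etop = ct["p21_etop"] - ct["GAPDH_etop"]
fold_change = 2 ** -(dct_etop - dct_ctrl)    # 2^-ddCt
print(f"p21 induction after treatment: {fold_change:.1f}-fold")
```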
Coimmunoprecipitation and RNA-IP
Cells were lysed in cold lysis buffer (50 mM Tris-Cl at pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% NP-40, 0.25% sodium deoxycholate, protease inhibitor mixture). Cell extracts (500 µg) were incubated with the primary antibodies or control normal IgG on a rotator overnight at 4°C, followed by addition of protein A/G Sepharose CL-4B beads for 2 h at 4°C. Beads were then washed four times using the lysis buffer. The immune complexes were subjected to SDS-PAGE followed by immunoblotting. For RNA-IP experiments, cells were lysed in ice-cold NET-2 buffer (50 mM Tris-HCl at pH 7.4, 300 mM NaCl, 0.5% [vol/vol] Nonidet P-40, 1× complete protease inhibitors [Roche], 100 U/mL RNaseOUT [Invitrogen]). The lysate was incubated with the indicated antibodies (Supplemental Table S3) or control normal rabbit/mouse IgG on a rotator overnight at 4°C, followed by addition of protein A/G agarose (Invitrogen) for 2 h at 4°C. Beads were then washed four times using the NET-2 buffer. Immunoprecipitated RNA was then extracted using Trizol and reverse-transcribed with random hexamers. The resulting cDNA was analyzed with the indicated primer sets (Supplemental Table S6).
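The "percentage of input RNA" values reported for the RNA-IP experiments follow the standard percent-input calculation. The sketch below uses hypothetical Ct values and an assumed input fraction; it illustrates the arithmetic only, not the authors' script.

```python
import numpy as np

# Sketch of the percent-of-input calculation for RNA-IP signals
# (assumed Ct values and input dilution).
input_fraction = 0.10                 # e.g. 10% of the lysate kept as input
ct_input, ct_ip = 24.5, 29.0          # hypothetical qRT-PCR Ct values

# Adjust the input Ct to 100% of the material, then compare with the IP.
ct_input_adj = ct_input - np.log2(1.0 / input_fraction)
percent_input = 100 * 2 ** (ct_input_adj - ct_ip)
print(f"{percent_input:.2f}% of input RNA recovered in the IP")
```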
|
2018-04-03T04:06:09.115Z
|
2011-04-01T00:00:00.000
|
{
"year": 2011,
"sha1": "6fdd2dd422ed01be096f1a40540e4a2945c7469b",
"oa_license": null,
"oa_url": "http://genesdev.cshlp.org/content/25/7/701.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2dca5b6dd8356a22cd4b0829e32b2cd13665f2d1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
1427988
|
pes2o/s2orc
|
v3-fos-license
|
Sounds, Behaviour, and Auditory Receptors of the Armoured Ground Cricket, Acanthoplus longipes
The auditory sensory system of the taxon Hetrodinae has not been studied previously. Males of the African armoured ground cricket, Acanthoplus longipes (Orthoptera: Tettigoniidae: Hetrodinae), produce a calling song that lasts for minutes and consists of verses with two pulses. The first pulse contains about three impulses, and the second about five. In contrast, the disturbance stridulation consists of verses with about 14 impulses that are not separated into pulses. Furthermore, the inter-impulse intervals of the two types of sounds are different, whereas the verses have similar durations. This indicates that the neuronal networks for sound generation are not identical. The frequency spectrum peaks at about 15 kHz in both types of sounds, whereas the hearing threshold has the greatest sensitivity between 4 and 10 kHz. The auditory afferents project into the prothoracic ganglion. The foreleg contains about 27 sensory neurons in the crista acustica; the midleg has 18 sensory neurons, and the hindleg has 14. The auditory system is similar to those of other Tettigoniidae.
One main function of the intraspecific auditory communication between females and males is to assist pair formation (Robinson 1990). Therefore, these acoustic signals are stereotypical, with a distinct structure for a given species. The temporal pattern and frequency components of these songs are species specific and are widely used for taxonomy and ecological analysis (Heller 1988; Ragge and Reynolds 1998; Walker et al. 2003; Elliott and Hershberger 2007).
Another type of acoustic signal, used in many insect taxa (e.g. Coleoptera (Lewis and Cane 1990; Schilman et al. 2001) and Homoptera (Stölting et al. 2004)), is the disturbance sound. These alarm signals are made by insects disturbed in different manners, e.g. by touching. In contrast to the calling song, the disturbance sound has a simple and irregular temporal pattern (Masters 1980). Alexander (1967) reported that arthropods use sound production as a defensive mechanism more often than for any other acoustical communication.
The ear of Tettigoniidae is located in the proximal area of the foreleg tibia (Graber 1876; Schumacher 1979). The scolopidial cells, specialized for detecting mechanical forces, show a typical arrangement in the proximal tibia of Tettigoniidae (Schumacher 1973; Lakes and Schikorski 1990). These cells form a complex tibial organ, consisting of the subgenual organ, the intermediate organ, and the crista acustica; the latter perceives airborne sound (Stumpner 1996). The auditory fibres run from the tibial organ through nerve 5B1 into the prothoracic ganglion, where they terminate in the auditory neuropile (Römer et al. 1988).
The Hetrodinae are distributed all over Africa and neighbouring areas (Grzeschik 1969; Irish 1992) and are called armoured ground (or bush) crickets because of the spikes on their pronotum and legs. These bushcrickets are flightless, with rudimentary wings that are covered by the pronotum (Weidner 1955). Acanthoplus longipes (Orthoptera: Tettigoniidae: Hetrodinae) is a dark brown and ventrally green bushcricket with spines only on the pronotum. They are sexually dimorphic, and males use an elytro-elytral stridulatory mechanism, as is the case with most bushcrickets. A. longipes lives in the low grassland of Southwest Africa (Namibia, Angola, and Congo), where it can attain plague status in field crops when its population climaxes between March and May (Weidner 1955; Mbata 1992). Its importance for agricultural ecosystems led to investigations of the reproductive system of Acanthoplus spp. (Mbata 1992; Bateman and Ferguson 2004). The acoustic system of Tettigoniidae is an important part of the reproductive system. With respect to the auditory system, it has been shown that Acanthoplus spp. have a pulsed calling song (Conti and Viglianisi 2005), but the sensory organs have not been investigated. Therefore, the acoustic signals, as well as the anatomy and physiology of the sound receiver, are described here.
Materials and Methods

Bushcrickets
A. longipes (Figure 1) were collected as nymphs on roads near Keetmanshoop (26°32′ S, 18°6′ E), Namibia, in March 2008 and transferred to the University of Giessen. The species was identified based on the key from Irish (1992). Four female and seven male A. longipes were used for the experiments. The animals were sorted by sex and kept between 22°C and 30°C with a 12:12 light:dark cycle. They were fed wheat seedlings, dog and fish food, and water ad libitum.
Sound recordings and analysis
For the sound recordings, the bushcrickets were placed within a cage of fly-screen in an anechoic chamber (50 × 50 × 50 cm). Each of six males was recorded once.
The recordings of the calling song were made in the dark, while the recordings of the disturbance stridulation were made under light conditions. To evoke a disturbance sound, the resting insects (n = 2) were briefly touched with a stick. The songs were recorded at a temperature between 23°C and 27°C. An ultrasound microphone (Ultra Sound Gate CPVS, Avisoft Bioacoustics, www.avisoft.com) with a frequency range of 10 to 95 kHz connected to a digital recorder (Tascam HD-P2) with a sampling rate of 192 kHz was used. The microphone was placed 15 to 40 cm away from the bushcrickets. Sound pressure level was measured with a Voltcraft meter (DT-8820). Both the temporal structure and the frequency range of the recordings were analyzed on a computer with the AviSoft program; a minimal sketch of such an impulse-interval analysis is given below. For statistical analysis, Prism 4.03 (GraphPad Software, Inc., www.graphpad.com) was used. The following terminology was used for describing the insect sounds:

Impulse: A single sound impulse, probably caused by the movement of one tooth of the stridulatory file.
Pulse: A train of impulses produced by opening or closing the wings.
Verse: A group of impulses, which can contain one or two pulses.

Figure 1. Photograph of a male Acanthoplus longipes. Scale: 1 cm, relative to the pronotum.
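As a rough illustration of how impulse timing can be extracted from such recordings, the following Python sketch detects impulse onsets from a smoothed envelope of a synthetic waveform; the threshold, smoothing window, and mock impulse train are assumptions, not AviSoft settings.

```python
import numpy as np

# Sketch: detect impulses and inter-impulse intervals in a waveform.
fs = 192_000                                   # sampling rate [Hz], as in the recordings
t = np.arange(0, 0.05, 1 / fs)
wave = np.zeros_like(t)
for onset in (0.005, 0.0085, 0.012, 0.030, 0.0335):   # mock impulse onsets [s]
    idx = (t >= onset) & (t < onset + 0.001)
    wave[idx] = np.sin(2 * np.pi * 15_000 * t[idx])    # ~15 kHz carrier

win = int(0.0002 * fs)                         # 0.2 ms smoothing window (assumed)
envelope = np.convolve(np.abs(wave), np.ones(win) / win, mode="same")
above = envelope > 0.2                         # assumed amplitude threshold
onsets = t[1:][above[1:] & ~above[:-1]]        # rising edges mark impulse starts
print("inter-impulse intervals [ms]:", np.round(np.diff(onsets) * 1e3, 2))
```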
For the analysis of the courtship behaviour, four virgin female A. longipes were tested. For each test, one female and one male were put together into a terrarium.
Hearing threshold
For electrophysiological investigations, A. longipes (n = 5) were waxed onto a metal holder with the ventral side up, and the forelegs were fixed approximately in their natural position. The hindlegs were removed and the midlegs were fixed with wax. The prothorax was opened ventrally, and the prothoracic ganglion, the leg nerve, and the tympanal nerve were exposed. The recordings were made extracellularly from the tympanal nerve close to its bifurcation from the leg nerve. The tympanal nerve was placed on a silver wire electrode, and the indifferent electrode was inserted contralaterally in the thorax. The signals from the nerve were amplified 1,000× by a preamplifier (T122, Tektronix, Inc., www.tek.com), visualized on an oscilloscope, and connected to earphones. The sound signals were computer generated and amplified, and were made audible by a loudspeaker (SEAS 11 F-GX) positioned laterally 38 cm from the insect. The tested frequencies ranged from 3 to 40 kHz and were played back at sound pressure levels from 30 to 80 dB. Each sound intensity was tested five times. The lowest acoustic stimulus that elicited neuronal responses was defined as the auditory threshold.
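The threshold criterion lends itself to a simple tabulation: for each test frequency, take the lowest SPL at which all five presentations elicited a response. A Python sketch with a mock response function (illustrative only) is given below.

```python
import numpy as np

# Sketch of the threshold criterion: the lowest sound pressure level
# (SPL) eliciting a neuronal response in all five presentations.
freqs_khz = [3, 5, 10, 20, 40]
spls_db = np.arange(30, 85, 5)

def responded(f_khz, spl_db):
    """Mock nerve response, more sensitive between 4 and 10 kHz."""
    return spl_db >= (42 if 4 <= f_khz <= 10 else 58)

thresholds = {}
for f in freqs_khz:
    hits = [s for s in spls_db if all(responded(f, s) for _ in range(5))]
    thresholds[f] = int(min(hits)) if hits else None
print({f: f"{v} dB SPL" for f, v in thresholds.items()})
```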
Neuroanatomy
For the anatomical studies of the periphery, all legs (of 7 A. longipes) were removed and placed into Petri dishes filled with saline solution. The legs were opened proximally at the femur-tibia joint, and the tympanal nerve (N5B1) was cut and placed in a glass capillary filled with 5% cobalt chloride solution in distilled water. Preparations were placed in a moist chamber for two days at 4°C. The staining was visualized with a 1% solution of ammonium sulphide in phosphate buffer. The legs were fixed in 4% paraformaldehyde, dehydrated in a graded ethanol series, and cleared in methyl salicylate. As it was not possible to see the scolopidial cells through the dark cuticle, the tibia was opened dorsally by careful dissection.

For the anatomical studies of the central nervous system, the prothoracic ganglion was removed from the animal and placed in a Petri dish. The tympanal nerve (N5B1) was placed in a glass capillary filled with a 5% neurobiotin solution in distilled water. The preparation was incubated at 4°C in a moist chamber for 48 hours. Thereafter, the ganglion was fixed in 4% paraformaldehyde, dehydrated, cleared in xylene for 5 minutes, and rehydrated. The next step was incubation in a collagenase and hyaluronidase solution (1 mg each, Sigma Chemicals, www.sigmaaldrich.com) in 1 ml phosphate buffer for one hour at 37°C. The ganglion was placed in an Avidin-Biotin Complex (Vectastain ABC Kit PK-6100, Vector Laboratories, www.vectorlabs.com) overnight. After washing with phosphate buffer, the marking was visualized with DAB and H2O2 (Vector Peroxidase Substrate Kit DAB SK-4100, Vector Laboratories) under visual control. The ganglion was dehydrated and cleared in methyl salicylate.
Results

Sound of A. longipes
The calling song of A. longipes males (Figure 2A) was produced in the late evening. The males were persistent singers, often singing for several minutes without any interruption. Most stayed in one place, usually elevated, while singing, but some walked around without stopping to sing. The sound pressure level reached about 87 dB SPL at a distance of 10 cm caudal to the singer (n = 4).
The calling song consisted of a sequence of verses that were separated into two pulses by a pause of about 16 ms (Figures 2A, 2C). These two pulses consisted of 2 to 7 impulses, whose number differed between the tested males (Figure 3), but all males had fewer impulses in the first pulse than in the second pulse. In one of the six males (M2 in Figure 3), the second pulse had more than double the number of impulses of the first pulse. The impulse interval (3.5 ms; n = 2811; SD = 0.68) was similar in the first and the second pulses (Figure 2C), which were separated by an interpulse interval of about 16 ms. The verse interval was about 50 ms. The mean verse duration was 40 ms (Figure 4A), and the mean number of impulses per verse was 8.53 (Figure 4B).
Disturbance stridulation (Figure 2B) could be elicited more easily during the day than during the night, and from resting insects rather than from walking insects. For two males, recordings of both types of sounds were compared (Figure 4). The disturbance sound showed three characteristic differences from the calling song. First, the disturbance stridulation lasted only a few seconds. Second, the disturbance stridulation consisted of verses with only one pulse. Third, the pulses consisted of about 13 or 14 impulses per verse, in contrast to the maximum of 10 impulses per verse in the calling song (Figure 4A). The mean number of impulses per verse differed significantly between the calling song and the disturbance stridulation in both males (Figure 4; unpaired t-test; p < 0.0001; t = 45.45; df = 1495; calling song: n = 1262; disturbance sound: n = 235). However, the duration of the verses of the two sounds was not different (Figure 4B). The sound pattern resulted in two groups of interval durations (Figure 2D). The verse interval was rather variable (mean = 98 ms; n = 220; SD = 62.50), but the impulse interval (2.9 ms; n = 2028; SD = 0.78) was invariant and significantly different from that of the calling song (p < 0.0001; unpaired t-test; t = 26.13; df = 4837). Both types of songs had similar frequency spectra within the investigated range, with a peak around 15 kHz and a steady decrease in the ultrasonic range (Figure 5).
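The reported comparisons are standard two-sample t-tests. A minimal scipy sketch on mock impulse counts (the real per-verse counts are not reproduced here) looks as follows.

```python
import numpy as np
from scipy import stats

# Sketch of the unpaired t-test used above, on mock impulse counts per
# verse (the reported means were ~8.5 for the calling song and ~13-14
# for the disturbance stridulation).
rng = np.random.default_rng(2)
calling = rng.normal(8.5, 1.0, 1262)       # n = 1262 verses (mock values)
disturb = rng.normal(13.5, 1.2, 235)       # n = 235 verses (mock values)

t, p = stats.ttest_ind(calling, disturb)   # classic two-sample t-test
print(f"t = {t:.2f}, df = {len(calling) + len(disturb) - 2}, p = {p:.2e}")
```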
Defense behaviour
Disturbance stridulation can be regarded as one mechanism of defense. While producing the sound, A. longipes always started to run away. As an additional defense mechanism, both sexes used reflex bleeding, extruding hemolymph from the coxa-trochanter joint. The squirt intensity and the bleeding coxa-trochanter joints could vary. The bleeding could not be elicited by a brief touch, but it could by handling the insects, e.g. during preparation for experiments. Otherwise, no complex defense mechanisms were observed.
Courtship
Females performed positive phonotaxis toward singing males. Whereas 3 of 4 females paused during phonotaxis, 1 female approached the male very quickly. When the females reached the males, they touched them with their long antennae, and the males stopped singing. All observed pairs met each other under the top of the cage, and the male climbed underneath the female from a lateral position. Mating only started in the late evening and took at least 2 hours. On the next morning, 3 of 4 females still carried the spermatophore. One spermatophore was removed and weighed: 0.46 g, corresponding to 5.4% of the respective male's weight. Females were heavier (mean 11.6 g, n = 3) than the males (mean 8.5 g, n = 2). During the day, the females fed on the spermatophore. For egg laying, the female made a small hole in the sand with its abdomen and placed a cluster of eggs into it.
Electrophysiology
The hearing threshold showed the highest sensitivity between 4 and 10 kHz, with a threshold between 40 and 45 dB SPL (Figure 6). The threshold rose to about 60 dB SPL in the ultrasonic range (20-40 kHz). No differences between males and females were found.
Neuroanatomy
The anterograde backfills of the tympanal nerve into the prothoracic ganglion showed that the nerve, 5B1, projects through the leg nerve. The axonal fibres of the auditory receptors continued in a posterior curve to the midline of the ganglion and terminated ipsilaterally in a dense neuropile (Figure 7). The crista acustica contained about 27 receptor cells in the foreleg, 18 in the midleg, and 14 in the hindleg (Table 1), with no sexual dimorphism.
Discussion

Calling song and courtship of A. longipes
The calling song of A. longipes is a sequence of two-pulse verses, which can last several minutes. Each verse consists of two pulses, which consist of a few impulses. The impulse numbers in the pulses vary among individuals (see also Conti and Viglianisi 2005). The songs often show some variation within a basic pattern (Schul 1998), which could be important for sexual selection. Larger variation might raise a problem when females need an exact pattern of the calling song to recognize the species-specific song (Klappert and Reinhold 2003), which is the case with females from areas of sympatry (Gwynne 2001). A variable song pattern could lead to heterospecific mating in closely related species, as has been shown for Acrididae (von Helversen and von Helversen 1975).
The results on the frequency spectrum extend those of Conti and Viglianisi (2005) into the ultrasonic range and confirm a broad peak between 10 and 15 kHz. This frequency spectrum lies within the range of other Tettigoniidae (Heller 1988; Römer et al. 1989; Schul and Patterson 2003). The attenuation of sound by vegetation, especially of the ultrasonic components of the calling song (Keuper et al. 1986; Römer and Lewald 1991), might be the reason that A. longipes males seemed to prefer singing from a higher position. This has to be confirmed by field studies.
The auditory threshold shows the greatest sensitivity between 4 and 10 kHz, which reflects a mismatch with the frequency spectrum of the calling song. In other Tettigoniidae, a species-specific tuning to the song spectrum is found, although the temporal pattern might be even more important (Dobler et al. 1994; Römer and Bailey 1998; Schul and Patterson 2003; Lehmann et al. 2007). Phonotaxis experiments with song models could clarify how species recognition in A. longipes is influenced by song frequency or by song pattern. In the laboratory, no chorusing of A. longipes could be observed, in contrast to the Hetrodinae Acanthoplus speiseri (Mbata 1992) and Eugaster spp. (Grzeschik 1969). This shows a considerable variation of acoustic signalling within a genus, similar to other Tettigoniidae (Greenfield et al. 2004; Fertschai et al. 2007). The phonotactic behaviour of the females, which is a common reaction to the conspecific calling song in tettigoniids, is also described for other Acanthoplus species (Power 1958) and for Eugaster species (Weidner 1955; Grzeschik 1969). However, no courtship song could be observed for A. longipes, as in A. speiseri (Mbata 1992) and Eugaster spp. (Grzeschik 1969). The mating and egg-laying behaviour is similar to that of other Hetrodinae species (Weidner 1955; Power 1958; Grzeschik 1969; Mbata 1992), although the mating duration seems to be much longer.
Disturbance sound and defense
The disturbance sounds of orthopterans are less well studied (Field 1993; Desutter-Grandcolas 1998). Some Tettigoniidae use their stridulation mechanism both for intraspecific communication and as a defense mechanism (Kaltenbach 1990), and other species use different organs for disturbance stridulation (Heller 1996). Additionally, other defensive behaviour with and without sound production has evolved (Belwood 1990). The disturbance stridulation of A. longipes could be evoked by disturbing resting animals. It was a brief sound that stopped shortly after the disturbance. Compared to the calling song, the verses consist of more impulses and are not separated into pulses. The verse interval is variable; thus, the rather plain and variable pattern fits two of the four characteristics of a disturbance sound (simple and irregular) proposed by Masters (1980). The two other characteristics (a broad frequency band and maximum energy at 1 kHz) were not found in the disturbance stridulation of A. longipes. The frequency spectra of the disturbance stridulation and the calling song are similar. It has been found for other orthopterans as well that the four characteristics do not always fit disturbance sounds (Desutter-Grandcolas 1998). The different impulse interval, together with the different verse structure, indicates that the disturbance stridulation does not simply reflect the neuronal and functional networks involved in calling song stridulation.
Many species that use disturbance sounds are large, flightless, slow-moving, and night-singing bushcrickets, for example Pterophylla camellifolia, Liparoscelis nigrispina, and Aglaothorax armiger (Alexander 1960), which leads some authors to the assumption that this kind of sound production is a defense mechanism, especially against vertebrates (Alexander 1967; Belwood 1990). The disturbance stridulation might increase the chance of survival of an insect after a predatory attack because it might startle the predator (Robinson and Hall 2002), or it might have a warning function for an additional defense mechanism, e.g. noxious signals (Masters 1979). Furthermore, it is possible that a defense sound mimics an aposematic signal. While camouflage and mimicry are primary defense mechanisms (Gwynne 2001), the disturbance stridulation is a secondary defense mechanism, which is used after the predator has made contact with the potential prey. There are also some arguments against the hypothesis of a defense mechanism: if this type of sound were an important defense mechanism, both sexes should be able to produce it (Heller 1996). Only in tettigoniid species in which the females also produce a sound for intraspecific communication do both sexes produce disturbance sounds (Shaw and Galliart 1987). Furthermore, nymphs should also benefit from such a defense mechanism, as in some tettigoniid species (Dadour and Bailey 1990).
A. longipes showed no complex behavioural defense patterns such as those of other orthopterans (Sandow and Bailey 1978), but, like other Hetrodinae (Weidner 1955; Power 1958; Grzeschik 1969), both sexes use reflex bleeding as an additional, secondary defense mechanism. However, there is no evidence that the hemolymph of A. longipes is noxious. Additionally, A. longipes is well armed with spines, making it difficult prey for small animals. The complement of different defense mechanisms might be necessary for day-active, ground-living, flightless animals that otherwise might become easy prey.
Neuroanatomy of the auditory system
Retrograde backfills of the legs show a complex of scolopidial cells in the proximal tibia, which can be divided into three parts. The most proximal group of cells is the subgenual organ, which detects substrate vibrations. The middle part is the intermediate organ, and the third part is the crista acustica, which perceives airborne sound (Stumpner 1996). This complex tibial organ can be found in all legs, although tympana are only present in the foreleg. In the crista acustica, the cell number is species-specific and ranges between 20 and 50 cells in different species of the Tettigoniidae (Schumacher 1979; Lakes and Schikorski 1990; Kalmring et al. 1993; Robinson and Hall 2002). The number of crista acustica receptor cells of A. longipes (n = 27) fits well into this range. As in other Tettigoniidae, the number of crista acustica cells decreases in the midleg and the hindleg (Houtermans and Schumacher 1974). The central projection of auditory fibres has a typical arrangement in the prothoracic ganglion: the fibres project into the auditory neuropile and terminate at the midline. It can be presumed that the crista acustica cells have a tonotopic projection, as in other Tettigoniidae (Oldfield 1982). Thus, the neuroanatomy of this first-described Hetrodinae is in accordance with that of other Tettigoniidae (Lakes and Schikorski 1990).
|
2014-10-01T00:00:00.000Z
|
2010-06-10T00:00:00.000
|
{
"year": 2010,
"sha1": "80fa9b620e255a842301b96659e3fead1d4b754c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/10/1/59/18174455/jis10-0059.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80fa9b620e255a842301b96659e3fead1d4b754c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
119460253
|
pes2o/s2orc
|
v3-fos-license
|
Coincident SZ and $\gamma$-ray signals from cluster virial shocks
Virial shocks around galaxy clusters are expected to show a cutoff in the thermal Sunyaev-Zel'dovich (SZ) signal, coincident with a leptonic ring. However, until now, leptonic signals were reported only in Coma and in stacked Fermi-LAT clusters, and an SZ shock signal was reported only in A2319. We point out that a few clusters --- presently Coma, A2319, and A2142 --- already show sharp drops in Planck SZ pressure near the virial radius, coincident with a LAT $\gamma$-ray excess. These signatures are shown to trace the virial shocks of the clusters, at joint medium to high confidence levels. The electron acceleration rates inferred from $\gamma$-rays are consistent with previous measurements. Lower limits of order a few are imposed on the shock Mach numbers.
INTRODUCTION
As a galaxy cluster grows, by accreting matter from its surroundings, a strong, collisionless, virial shock is thought to form at the so-called virial shock radius, $r_s$. In spite of considerable efforts, these elusive shocks have only recently been traced, thanks to their distinct leptonic signature. However, these signals have not yet been corroborated by independent, or more direct, indicators.
By analogy with supernova remnant (SNR) shocks, virial shocks too should accelerate charged particles to highly relativistic, ∼10 TeV energies. These particles, known as cosmic ray (CR) electrons (CREs) and ions (CRIs), should thus form a nearly flat spectrum, i.e., nearly constant $E^2 dN/dE$ (equal energy per logarithmic CR energy bin), radiating a distinctive non-thermal signature which stands out at the extreme ends of the electromagnetic spectrum. High-energy CREs cool rapidly, on timescales much shorter than the Hubble time $H^{-1}$, by Compton-scattering cosmic microwave-background (CMB) photons (Loeb & Waxman 2000; Totani & Kitayama 2000; Keshet et al. 2003). These up-scattered photons should then produce γ-ray emission in a thin shell around the galaxy cluster, as anticipated analytically (Loeb & Waxman 2000; Totani & Kitayama 2000) and calibrated using cosmological simulations (Keshet et al. 2003; Miniati 2002). The projected γ-ray signal typically shows an elliptic morphology, elongated towards the large-scale filaments feeding the cluster (Keshet et al. 2003, 2004b). The same γ-ray emitting CREs are also expected to generate an inverse-Compton ring in the optical band (Yamazaki & Loeb 2015) and in hard X-rays (Kushnir & Waxman 2010), and a synchrotron ring in radio frequencies (Waxman & Loeb 2000; Keshet et al. 2004b,a). It was recently shown that the shocks can also be detected in soft X-rays, below the peak energy of the thermal component (Keshet & Reiss 2017, henceforth KR17).
By stacking Fermi Large Area Telescope (henceforth LAT) data around 112 massive clusters, and by utilizing the predicted spatial and spectral behavior of the signal, the cumulative γ-ray emission from many virial shocks was recently detected at high (> 4.5σ) significance (Reiss et al. 2017, henceforth R17). The signal was found to be spectrally flat, with a photon spectral index $\alpha \equiv -d\ln N_\gamma/d\ln\epsilon = 2.11^{+0.16}_{-0.20}$, and peaked upon radial binning around the virial radius, at $\sim 2.4R_{500} \simeq 1.5R_{200}$, in agreement with predictions. Here, $N_\gamma$ and $\epsilon$ are the photon number density and energy, and a subscript $\delta = 200$ or $500$ designates an enclosed mass density that is $\delta$ times larger than the critical mass density of the Universe. This signal indicates that the stacked shocks deposit on average $\xi_e \dot{m} \sim 0.6\%$ (with a systematic uncertainty factor of $\sim 2$) of the thermal energy in CREs over a Hubble time. Here, $\xi_e$ is the fraction of shocked thermal energy deposited in CREs, and $\dot{m} \equiv \dot{M}/(MH)$ is a dimensionless accretion rate of order a few, with $H$ being the Hubble parameter. As these results were obtained by radial binning, they sample only the radial component of the virial shocks, necessarily diluting the signal by picking up only those parts of the shocks favourably seen in such a projection.
It is interesting to study the signal from individual nearby clusters, where the signal may be picked up directly, without stacking. The Coma cluster (Abell 1656), in particular, is one of the richest nearby clusters, and is exceptionally suitable for the search for virial shock signatures. An analysis (Keshet et al. 2017, henceforth K17) of a ∼220 GeV VERITAS mosaic of Coma (Arlen et al. 2012) found evidence for a large-scale, extended γ-ray feature surrounding the cluster. It is challenging to uncover the signal at lower, LAT energies, where the Galactic foreground becomes strong and the resolution in general deteriorates; LAT studies thus imposed upper limits on various emission morphologies (Zandanel & Ando 2014; Prokhorov 2014; Ackermann et al. 2016). By considering a thin γ-ray template corresponding to the VERITAS signal, a 3.4σ LAT excess was detected, along with the soft X-ray signal (> 5σ) anticipated from lower-energy CREs advected to smaller radii; both signals are best fit by the VERITAS ring morphology, and agree (within systematics) with the same CRE distribution.
A more direct tracer of a virial shock, independent of its particle acceleration, is its imprint on the thermal Sunyaev-Zel'dovich (SZ; Sunyaev & Zeldovich 1972) signal. This distortion of the CMB field, produced as the CMB photons traverse the intracluster medium (ICM), provides a direct measure of the Comptonization parameter $y$. The shock should then present as a sharp outward drop in the y-parameter, localised near the virial shock (Kocsis et al. 2005). Preliminary evidence was found for a correlation between the γ-ray signature in Coma and the y-parameter drop inferred from WMAP (K17). Accurately measuring the SZ effect at sufficiently high sensitivity and resolution is challenging, but has recently become feasible thanks to the y-parameter maps prepared by the Planck collaboration (Planck Collaboration et al. 2016). Indeed, the first firm detection (8.6σ) of the virial SZ drop was recently reported by Hurier et al. (2017, henceforth H17).
Here, we present a combined analysis of Planck y-parameter maps and of LAT data around select galaxy clusters, to test if coincident SZ and γ-ray signals can be identified at a high confidence level. While photon statistics and resolution limit the LAT virial signal of an individual cluster to low confidence levels, using the SZ drop to pinpoint the location of the shock raises the γ-ray significance. Combining γ-rays and SZ in a joint analysis further boosts the detection, especially owing to the high significance levels that can presently be achieved with Planck. Such a high-significance joint detection not only supports the viability of the γ-ray and SZ signals, but also corroborates the association of the γ-ray signal with the virial shock, and confirms that the virial shock is a strong, collisionless shock with a measurable CRE acceleration efficiency. We choose to study two clusters that already have published radial, azimuthally-averaged SZ profiles, namely Coma (Khatri & Gaspari 2016) and Abell 2319 (Ghirardini et al. 2017). A third cluster, Abell 2142, is selected as a test case, based on its high mass and data availability.
The paper is arranged as follows. Our analysis methods are presented in §2. The Coma cluster is analysed in §3, A2319 in §4, and A2142 in §5. The results are then summarised and discussed in §6. We adopt a flat ΛCDM cosmological model with a Hubble constant $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and a matter fraction $\Omega_m = 0.3$. Assuming a 76% hydrogen mass fraction gives a mean particle mass $\bar{m} \simeq 0.59 m_p$. An adiabatic index $\Gamma = 5/3$ is assumed. Confidence intervals quoted are 68% for one parameter. The results are primarily quantified in terms of an overdensity $\delta = 500$. Accordingly, we define a normalised angular distance $\tau \equiv \theta/\theta_{500}$ from the centre (defined as the X-ray peak) of the cluster.
METHOD
We extract the parameters of the analysed clusters from the Meta Catalogue of X-ray Clusters (MCXC; Piffaretti et al. 2011). In addition to the location of each cluster on the sky and its radius $R_{500}$, the catalogue specifies the redshift $z$ of each cluster, so the corresponding angular radius $\theta_{500}$ can be computed. The acceleration efficiency can be inferred from the leptonic signals, as described in R17, using $M_{500}$, the mass enclosed inside $R_{500}$. In most of the analysis, we assume that the gas distribution in each cluster is spherical. For Coma, which shows evidence for an elongated signature at large radii, we also examine an underlying prolate distribution. The parameters of the clusters and the results of their analyses are summarised in Table 1.
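The angular radius follows from the adopted cosmology via the angular diameter distance. A minimal Python/astropy sketch (with illustrative $R_{500}$ and $z$ values, not the catalogue entries) is given below.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Sketch: theta500 from R500 and z in the adopted cosmology.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def theta500_arcmin(r500_mpc, z):
    d_a = cosmo.angular_diameter_distance(z)      # [Mpc]
    return ((r500_mpc * u.Mpc / d_a) * u.rad).to(u.arcmin)

# Illustrative values (roughly Coma-like), not the MCXC entries:
print(theta500_arcmin(1.3, 0.0231))
```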
To model the signals and better estimate their significance, we use a maximal likelihood (minimal χ²) analysis. The likelihood L is related to the χ² distribution of squared normalised errors by

−2 ln L = χ² + const. (1)

The test statistics (Mattox et al. 1996) TS, defined as

TS ≡ 2 ln(L₊/L₋) = χ²₋ − χ²₊, (2)

can be computed from Eq. (1). Here, subscript '−' (subscript '+') refers to the likelihood without (with) the modelled signal, maximised over any free parameters. Confidence levels are computed by assuming that TS has a χ²ₙ distribution, where n = n(+) − n(−) is the number of free parameters added by modeling the signal (Wilks 1938).
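As an illustration of Eqs. (1)-(2), here is a minimal sketch of converting a χ² improvement into a detection significance, assuming Wilks' theorem; the χ² values are placeholders, and the one-sided Gaussian conversion is one common convention.

```python
# A minimal sketch of the TS-based significance estimate (Eqs. 1-2),
# assuming TS follows a chi^2_n distribution for n extra free parameters.
from scipy import stats

def ts_significance(chi2_minus, chi2_plus, n_extra):
    ts = chi2_minus - chi2_plus            # Eq. (2): chi^2 without minus with signal
    p_value = stats.chi2.sf(ts, df=n_extra)
    return ts, stats.norm.isf(p_value)     # Gaussian-equivalent (one-sided) sigma

ts, sigma = ts_significance(chi2_minus=120.0, chi2_plus=100.0, n_extra=2)
print(f"TS = {ts:.1f} -> {sigma:.1f} sigma")
```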
SZ
We generate the radially binned profile of the Comptonization parameter in each cluster, as a vector y, following the standard SZ analysis protocol (Planck Collaboration et al. 2013; Hurier et al. 2013; Planck Collaboration et al. 2016; Ghirardini et al. 2017), as described in H17. The Comptonization parameter for each pointing is defined as

y ≡ [σT / (me c²)] ∫ P dl , (3)

giving the dimensionless line-of-sight integral of the electron pressure P. Here, σT is the Thomson cross section, me is the electron mass, and c is the speed of light. We construct an SZ map for each cluster, with a 7′ FWHM angular resolution, using the Modified Internal Linear Combination Algorithm (MILCA; Hurier et al. 2013).
Each map is azimuthally averaged and binned onto concentric annuli of 2 ′ thickness. The resulting profiles are shown for Coma in Figure 1, for A2319 in Figure 2, and for A2142 in Figure 5. The local background offset y b is assumed uniform in each cluster, and treated as a free parameter; the figure ordinates are offset for presentation purposes.
Due to the moderate angular resolution of the Planck survey, the y-map is over-sampled, introducing correlations between pixels that propagate into the radial profiles; additional correlations are induced by the intrinsic Planck noise properties. The covariance matrix Cp of the binned y profile is estimated using 1000 simulations of inhomogeneous, correlated Gaussian noise. The error bars in the figures represent only the (square root of the) diagonal of this covariance matrix.
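The following is a minimal sketch of this Monte Carlo covariance estimate; the simple exponential bin-to-bin correlation below is a stand-in for the real inhomogeneous Planck noise maps, which are drawn and binned exactly like the data.

```python
# A minimal sketch of estimating the covariance matrix C_p of the binned
# y profile from simulated correlated Gaussian noise realisations.
import numpy as np

rng = np.random.default_rng(0)
n_sim, n_bins = 1000, 40

# Toy noise model: exponentially correlated noise along the radial bins
# (the actual analysis draws full inhomogeneous noise maps instead).
lags = np.abs(np.subtract.outer(np.arange(n_bins), np.arange(n_bins)))
corr = np.exp(-lags / 3.0)
L = np.linalg.cholesky(corr)
profiles = rng.standard_normal((n_sim, n_bins)) @ L.T   # correlated realisations

C_p = np.cov(profiles, rowvar=False)   # (n_bins x n_bins) covariance matrix
sigma_diag = np.sqrt(np.diag(C_p))     # diagonal errors, as plotted in the figures
```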
The y-map in each cluster is modelled by the line of sight integration Eq. (3) over the ICM pressure. In general, we assume a spherically symmetric pressure, P = P(r); deviations from sphericity are examined in §3. We first model the gas without a virial shock, using the generalised NFW profile (gNFW; Navarro et al. 1996, 1997),

P0(r) ∝ (r/C)^(−γ) [1 + (r/C)^α]^((γ−β)/α) , (4)

where α, β, γ, and C are free parameters. We also consider a simpler, isothermal β-model,

P0(r) ∝ [1 + (r/rc)²]^(−3β̃/2) , (5)

where β̃ and the core radius rc are free parameters; this allows the integration to be carried out analytically. An overall normalization and y_b are two additional free parameters in each model. The resulting y-map of each model is analyzed similarly to the data: convolved with a 7′ FWHM filter and binned onto the same 2′ radial bins, to give a binned radial vector ym. The free parameters of the model are then determined by maximizing the likelihood. The uncertainties are assumed to follow Gaussian statistics, such that

χ² = (y − ym)ᵀ Cp⁻¹ (y − ym) . (6)

Next, each model is generalised to account for the presence of a shock. A simple model for an internal shock is given by

P(r) = P0(r) × { 1 for r ≤ rs ; q⁻¹ for r > rs } . (7)

The shock radius rs and the fractional pressure jump q > 1 across it constitute two additional free parameters. This model, which assumes the same pressure slope both upstream and downstream of the shock, is more appropriate for a weak ICM shock than it is for a virial shock. To better model the virial shock, we replace the upstream region r > rs with pristine accreted gas. The infalling gas is approximated as in free fall, v ∝ r^(−1/2), so mass conservation implies a ρ ∝ Ṁ/(r²v) ∝ r^(−3/2) mass density profile. Adiabaticity then yields P(r > rs) ∝ ρ^Γ ∝ r^(−3Γ/2) = r^(−5/2), so the pressure profile is given by

P(r) = P0(r) × { 1 for r ≤ rs ; q⁻¹ (r/rs)^(−5/2) for r > rs } . (8)
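A minimal sketch of this forward model follows, assuming an isothermal β profile (Eq. 5) with the virial-shock modification of Eq. (8) in the strong-shock limit; units are arbitrary, and the 7′ beam convolution and 2′ binning applied in the actual analysis are omitted for brevity.

```python
# A minimal sketch of the SZ forward model (Eqs. 3, 5, 8): a beta-model
# pressure profile with a virial shock, projected along the line of sight.
import numpy as np

def pressure(r, rc=0.3, beta=1.0, r_s=2.5, q=np.inf):
    """Eq. (5) with the virial-shock modification of Eq. (8)."""
    p = (1.0 + (r / rc) ** 2) ** (-1.5 * beta)
    upstream = r > r_s
    p[upstream] *= (r[upstream] / r_s) ** (-2.5) / q  # q -> inf: zero upstream
    return p

def y_profile(theta, n_los=2000, l_max=10.0):
    """Dimensionless line-of-sight integral of the pressure (Eq. 3)."""
    l = np.linspace(0.0, l_max, n_los)                  # line-of-sight coordinate
    r = np.sqrt(theta[:, None] ** 2 + l[None, :] ** 2)  # 3D radius per pointing
    return 2.0 * np.trapz(pressure(r), l, axis=1)       # symmetric LOS integral

theta = np.linspace(0.05, 4.0, 80)   # normalised angular radius tau
y = y_profile(theta)                 # shows the sharp drop beyond tau ~ 2.5
```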
As virial shocks are expected to be strong, we also consider a model in which q → ∞ in Eqs. (7) or (8); this leaves rs as the single shock parameter. We simultaneously fit the free parameters of the different models, with and without a shock. In all cases, we find a substantial, at least 3σ, improvement in the fit when including a virial shock. The ICM shock profile Eq. (7) is marginally disfavoured with respect to the virial shock profile Eq. (8) in all cases. A strong shock is favoured; in some cases q⁻¹ is consistent with zero. Given q, the Mach number Υ of the shock may then be computed as

Υ = { [(Γ + 1)q + Γ − 1] / (2Γ) }^(1/2) = [(4q + 1)/5]^(1/2) , (9)

where the second equality uses Γ = 5/3. This is used to estimate or place lower limits on Υ. While the gNFW models provide a better fit to the data, as they have two more free parameters than their β-model counterparts, the two model variants yield very similar shock parameters, well within the statistical confidence intervals. The β models have the advantage of allowing for faster computations, as they have fewer free parameters, and the integration can be carried out analytically even when incorporating the shocks as in Eqs. (7) or (8). Generalizations for multiple shocks and deviations from sphericity are thus examined using the β model.
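A worked example of Eq. (9) follows, assuming the standard Rankine-Hugoniot pressure jump with Γ = 5/3; the q values are illustrative.

```python
# A worked example of Eq. (9): Mach number from the fractional pressure jump.
def mach_from_pressure_jump(q, gamma=5.0 / 3.0):
    """Invert the Rankine-Hugoniot pressure jump for the Mach number."""
    return (((gamma + 1.0) * q + gamma - 1.0) / (2.0 * gamma)) ** 0.5

for q in (2.0, 10.0, 100.0):
    print(f"q = {q:>5.1f} -> Mach = {mach_from_pressure_jump(q):.2f}")
# q = 100 already gives Mach ~ 9, so large best-fit jumps imply strong shocks.
```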
The confidence levels of shock detection obtained in the gNFW and β model variants are either comparable to each other, or higher in the β model. It is not a priori clear which model better captures the presence of the shock: the additional free parameters of the gNFW model can follow the gas distribution better, but they may also mask the presence of the shock to some degree. When incorporating the virial shock, the β models give β̃ ≃ 1, as expected at large radii, and provide a very good fit to the data; the TS difference with respect to the gNFW+shock counterpart is 3.
Various convergence tests are used to test the robustness of our results. We thus examine if the results are sensitive to the radial range used in the analysis, ruling out spurious effects induced by structure at both small and large radii. We obtain comparable results when fitting profiles in linear vs. logarithmic r-y space. We test y parameter profiles prepared with the Needlet Internal Linear Combination algorithm (NILC; Delabrouille et al. 2009) instead of MILCA, obtaining similar results. Other convergence tests and variations in the models are described below, on a cluster-bycluster basis.
Gamma-rays
The γ-ray analysis is similar to that employed in R17 and in KR17. We use the archival, ∼ 8 year, Pass-8 LAT data from the Fermi Science Support Center (FSSC), and the Fermi Science Tools (version v10r0p5). Pre-generated weekly all-sky files are used, spanning weeks 9-422 for a total of 414 weeks (7.9 yr), with ULTRACLEANVETO class photon events. We consider four logarithmically-spaced energy bands in the range 1-100 GeV. The data is discretised using a HEALPix scheme (Górski et al. 2005) of order 10. Point source contamination is minimised by masking the 95% (90% for A2319; see §4) containment area of each point source in the LAT 4-year point source catalogue (3FGL; Acero et al. 2015). The foreground is estimated by fitting a polynomial function to the cluster and its vicinity.
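A minimal sketch of the map discretisation and point-source masking follows, assuming the healpy package; the source coordinates and containment radius are illustrative placeholders, not actual 3FGL values.

```python
# A minimal sketch of the LAT data preparation: an order-10 HEALPix grid
# and disc masks around point sources, assuming healpy.
import numpy as np
import healpy as hp

nside = hp.order2nside(10)        # order 10 -> nside = 1024
npix = hp.nside2npix(nside)
counts = np.zeros(npix)           # binned photon counts would accumulate here

mask = np.ones(npix, dtype=bool)
sources = [(285.0, 44.7, 0.8)]    # (lon, lat, containment radius) in deg; placeholder
for lon, lat, radius in sources:
    vec = hp.ang2vec(lon, lat, lonlat=True)
    bad = hp.query_disc(nside, vec, np.radians(radius))
    mask[bad] = False             # mask the source containment area
```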
We bin the LAT data into concentric rings about the X-ray centre of the cluster. For each photon energy band ε, and each radial bin centered on τ with width ∆τ, we define the excess emission ∆n ≡ n − f as the difference between the number n of detected photons, and the number f of photons estimated from the fitted foreground. The significance of the excess emission in a given energy band ε and radial bin τ can then be estimated, assuming Poisson statistics with f ≫ 1, as

ν(ε, τ) ≃ ∆n/√f . (10)

Next, we compute the χ² contribution of the excess counts ∆n(ε, τ) with respect to the model prediction µ(ε, τ), for given ε band and τ bin,

χ²(ε, τ) = [∆n(ε, τ) − µ(ε, τ)]² / f(ε, τ) . (11)

The likelihood L is then related to the sum over all spatial bins and energy bands, as

−2 ln L = Σ_ε Σ_τ χ²(ε, τ) + const. (12)

The (so-called shell) model µ is based on leptonic emission from a thin shell in a β-model, as described in K17, R17, and KR17. The two free parameters describing the shock emission are its (normalised) radius τs, and the CRE acceleration rate ξeṁ. Another (so-called planar) model we consider assumes that accretion is confined to the plane of the sky, so the emission takes the form of a ring; this uses the same two free parameters (KR17). We also consider a one-parameter model, in which the parameter ξeṁ = 0.6% is fixed on the mean value inferred from the stacking of LAT clusters (R17).
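A minimal sketch of the binned statistics in Eqs. (10)-(12) follows, using illustrative counts; in the real analysis, n and f come from the masked HEALPix maps and the fitted foreground.

```python
# A minimal sketch of the binned gamma-ray statistics (Eqs. 10-12);
# the counts below are illustrative placeholders.
import numpy as np

# rows: radial tau bins; columns: the four energy bands
n  = np.array([[52., 18., 7., 3.], [40., 14., 6., 2.]])    # detected photons
f  = np.array([[45., 15., 5., 2.], [41., 13., 5., 2.]])    # fitted foreground
mu = np.array([[5.,  2.,  1., 0.5], [0., 0., 0., 0.]])     # shell-model prediction

excess = n - f                          # Delta n
nu = excess / np.sqrt(f)                # per-bin significance, Eq. (10)
chi2 = ((excess - mu) ** 2 / f).sum()   # Eq. (11), summed over bins as in Eq. (12)
```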
Such γ-ray analyses were tested and calibrated in R17 using large control catalogues, with mock clusters redistributed on the sky. Convergence tests for all analysis parameters were carried out using a sample of 112 clusters (including A2319 and A2142) and a large mock sample. In particular, parameters pertaining to the discretisation resolution, point source removal, and foreground modeling, were shown to be well behaved.
Using the above method, we analyse the γ-ray data around A2319 and around A2142 (the γ-ray signal from Coma was analysed in KR17). Flux profiles are shown in Figures 3 (for A2319) and 6 (for A2142), and the corresponding significance profiles are presented in Figures 4 (A2319) and 7 (A2142). In both clusters, we find a γ-ray excess in the vicinity of the virial radius, and in close proximity to the shock location inferred from SZ. We then repeat the analysis using the high precision localisation of the virial shock by the SZ data as a prior for the γ-ray analysis. Joint likelihood analyses are also carried out.
The γ-ray signal in Coma was discussed in K17 and in KR17. It is best described as an elongated, elliptical ring, with semiminor axis coincident with the cluster's virial radius, oriented toward the western LSS filament; the best fit was obtained for a ratio ζ ≡ a/b ≃ 2.5 of semimajor axis a to semiminor axis b ≃ 2.1R500. A soft X-ray signature, consistent with a leptonic virial shock signal emitted by lower energy CREs, was identified in the low (R1 and R2) bands of the ROSAT all-sky survey (RASS; Snowden et al. 1997). The morphologies of the LAT and ROSAT signals are best fit by the same parameters of the VERITAS signal, and are in better agreement with the planar, rather than the shell, model, as anticipated from the planar distribution of LSS around Coma. The intensities of the VERITAS, LAT, and ROSAT signals agree with each other, within systematic uncertainties, for an approximately flat CRE spectrum. Interpreting the signal as a virial shock would imply ξeṁ ≃ 0.5%, to within a systematic uncertainty factor of a few.
Coma: SZ
The radial, azimuthally-averaged and binned profile of the y parameter in Coma is shown in Figure 1. The measured (error bars) and modelled (curves) profiles are shown both in the full radial range with gNFW-based models (left panel), and zoomed in on the putative shock region with β models (right panel). A flattening and a possible rise in y(τ ) are observed around τ ∼ 2.1; this was also seen in an analysis of a southwest sector, where it was putatively associated with a weak relic shock (Erler et al. 2015). Beyond τ ∼ 2.3, the profile is seen to fall off, indicative of the presence of the virial shock, as shown below.
Consider first spherically symmetric gas models in the absence of a shock. We compute the corresponding y(τ) profiles by integrating the P0 pressure models (Eqs. 4 or 5), convolving the resulting map with a 7′ FWHM filter, and radially binning it, as described in §2.1. The resulting profiles (red dotted curves), found by maximizing the likelihood, depend somewhat on the underlying model and on the radial region being examined, as seen by comparing the two panels.

Figure 1. Azimuthally-averaged radial profile of the y-parameter in Coma, measured with Planck (1σ error bars) and modeled both without (dotted red curve) and with (other curves) a virial shock, in particular a strong, Mach Υ → ∞ spherical virial shock (purple solid curves; a dotted vertical curve shows the best-fit shock location). Models are based both on spherical gNFW (left panels) and isothermal β (right, zoomed-in panels) profiles; the bottom panels show the fit residuals (slightly shifted in τ for shock models). The left panels include models for an ICM shock (dot-dashed green) and a virial shock (dashed black) of finite Υ; but as the inferred shock is strong, the different shock curves overlap. The right panels include models with sharp transitions from spherical to filamentary (dot-dashed green) and to prolate (dashed black) distributions.
Next, we incorporate a shock, finding a marked improvement in the fit for all three gNFW-based shock models described in §2.1 and shown in the left panel: an ICM shock (3.7σ for Eq. 7), a virial shock (3.7σ for Eq. 8), and an infinitely strong shock (4.1σ for q → ∞). In all three shock models, the (projected, normalised) shock radius is found to be τs = 2.46 ± 0.04. The best-fit pressure jumps are large, giving high Mach numbers Υ = 75 for an ICM shock and Υ = 76 for the virial shock, but the uncertainties here are substantial; a lower limit Υ > 10 (Υ > 2.5) can be placed on the latter at the 1σ (2σ) confidence level.
Similar results are obtained for shock profiles based on the β model and when considering other radial ranges, as illustrated in the right panel. Here, a shock is identified at a higher, 6.3σ confidence level, with τs = 2.48 ± 0.07. The narrower upstream radial range considered here allows for a stronger upstream ICM component, and thus a weaker shock; for the 0 < τ < 3 range used to produce the right panel we obtain a lower limit Υ > 1.6 (Υ > 1.4) at the 1σ (2σ) confidence level. Next, we consider the possible presence of an additional inner ICM shock (subscript i) inward of the virial shock (subscript s), by generalising Eq. (7) according to

P(r) = P0(r) × { 1 for r < ri ; qi⁻¹ for ri < r < rs ; (qi qs)⁻¹ for r > rs } ,
where ri,s and qi,s are the radii and pressure jumps of the two shocks; an analogous generalization is applied to Eq. (8). The best fit for the putative, weak, inner shock gives a radius τi = 1.6 ± 0.2 and a Mach number Υi = 1.1 +0.5 −0.1 . However, this shock is not detected at a significant level in the present, azimuthally averaged analysis. The parameters of the virial shock are not appreciably changed by incorporating the weak shock.
The flattening and possible rise around τ = 2.1 suggest some underlying structure or morphological change.
This may be associated with the elongated leptonic signatures (K17, KR17) and with evidence for non-sphericity in published SZ maps (Planck Collaboration 2012; Khatri & Gaspari 2016) of Coma. We therefore consider models for simple deviations from sphericity, as illustrated in the right panel. A sharp transition from spherical to prolate at some radius τb (assuming ζ = 2.5 and a constant pressure between a sphere of radius τb and a spheroid with semiminor axis τb) gives a slightly (0.5σ) better fit, with τb ∼ 2.3. A sharp transition from spherical to filamentary at some τf gives a noticeably better fit (2.7σ) for τf ∼ 1.9. A more detailed analysis is deferred to future work.
To test if the shock detection is sensitive to the model and to its applicability at small radii, we repeat the analysis but restrict it to large, r > R500 radii only, and recover consistent shock parameters with a comparable confidence level. We also examine SZ maps of the Planck collaboration (Planck Collaboration et al. 2016), extracted with the MILCA algorithm and binned by Khatri & Gaspari (2016, figure 2 therein; see discussion in §6); the results do not appreciably change.
ABELL 2319
Abell 2319 is the cluster with the highest signal-to-noise detection in the Planck SZ catalogues (SNR ∼ 50; Planck Collaboration et al. 2016). Here, M500 = 5.83 × 10¹⁴ M⊙, R500 = 1248 kpc, θ500 = 0°.3205, and z = 0.0557 (Piffaretti et al. 2011). Although a major merger was reported on small scales (O'Hara et al. 2004), the cluster appears quite spherical in SZ (H17), so we analyse it as such. It is interesting to note that a sharp drop in thermal X-ray emission can be seen around r ∼ 3 Mpc (figure 2 in Ghirardini et al. 2017), possibly associated with the virial shock we discuss below.
A2319: SZ
The radial, azimuthally averaged profile of the y-parameter in A2319 was studied by Ghirardini et al. (2017). The profile, extracted as detailed in §2.1, is shown in Figure 2. The slope becomes steeper around τ ∼ 2.5, and subsequently flattens beyond τ ∼ 2.8. Note the slight flattening around τ ∼ 2.3, just before the steepening; this is somewhat reminiscent of the more substantial flattening seen in Coma.
The profile was analyzed by H17 using the gNFW model, and found to harbor a virial shock at the 8.6σ confidence level. This shock was identified at a radius τ = 2.81 ± 0.05 (using the value of θ500 adopted above) and was found to be very strong, with a lower limit Υ > 3.25 (at the 2σ confidence level) on the Mach number. Here, we carry out a complementary analysis based on the β model. We fit the profile by projecting, convolving, and binning the isothermal β model variants: without a shock (dotted red curve; Eq. 5), with an ICM (dot-dashed green; Eq. 7) or a virial (dashed black; Eq. 8) shock of finite Mach number, and with a strong shock (purple solid; the q → ∞ limit). As the shock is inferred to be strong, the different shock curves nearly overlap.
All models give a shock radius consistent with τs = 2.82 ± 0.05, consistent also with the H17 result. The shock is again found to be strong, with a Mach lower limit Υ > 10 (Υ > 1.6) at the 1σ (2σ) confidence level. The detection confidence level is very high, reaching 14σ for the case of an asymptotically strong shock. This is higher than found with the gNFW model in H17, due to the simpler model and the wider radial extent taken into account.
To test if the shock detection is sensitive to the model and to its applicability at small radii, we repeat the analysis but restrict it to large, r > R500 radii only, as in the Coma analysis. The results do not significantly change.
A2319: γ-rays
A2319 lies near the Galactic plane, at latitude b ≃ 13.5°. Due to the strong Galactic foreground at such low latitudes, here we adopt a fixed (best fit constant) foreground, and limit the analysis to the close vicinity of the cluster. A nearby point source (3FGL J1913.9+4441) further limits the available analysis area, so only a 90% masking is used. The γ-ray flux and its excess are shown in Figure 3. The significance of the excess emission is presented in Figure 4.
An excess of ∼ 2.2σ can be seen in Figure 4 in the 2 < τ < 2.5 bin. This is the same bin that showed a strong excess in the stacked LAT signal of R17. Note that the mock clusters in the control samples of R17 show no such feature.
We model the signal using the spherical accretion shock model of R17 and adopting the β-model parameters of Fukazawa et al. (2004), in order to translate the emitted γ-ray flux to an electron acceleration rate ξeṁ. The inferred shock radius is τs = 2.9 +0.3 −0.4, somewhat larger than in the stacked clusters of R17, but consistent with the SZ result for this cluster. The inferred acceleration rate is ξeṁ = (0.4 ± 0.2)%, consistent with previous estimates in other clusters (K17, R17, and KR17). The TS-based significance (omitting the innermost, τ < 0.5 bin, which is adversely affected by point source masking) is low, 1.2σ, but this is mainly driven by the spectral dependence, which may be contaminated at such low latitudes. One may adopt the mean acceleration efficiency inferred from the stacking of LAT clusters, ξeṁ ≃ 0.6%, as a prior for the γ-ray analysis. This raises the significance of shock detection to 1.6σ, giving τs = 3.0 ± 0.3.
The overlap between SZ and γ-ray estimates for the shock radius supports the viability of the γ-ray signal. We may use the SZ result, which tightly constrains the shock radius as τs = 2.82 ± 0.05, as a prior for the γ-ray analysis. This raises the significance of shock detection in LAT γ-rays to 1.7σ, leaving the acceleration rate estimate ξeṁ = (0.4 ± 0.2)% unchanged. A joint SZ-γ-ray analysis yields a combined shock detection at a very high (> 9σ) confidence level, due to the high significance of the SZ signal.
As a consistency check, we examine a planar leptonic model (as invoked for Coma in KR17), in which the shock is assumed to be confined to the plane of the sky. As expected, such a planar model does not provide a better fit for A2319.
ABELL 2142
Abell 2142 is the largest and most massive of the three clusters, with M500 = 8.15 × 10¹⁴ M⊙, R500 = 1380 kpc, and θ500 = 0°.2297. At a redshift z = 0.0894, the cluster shows a very dense core and substantial surrounding substructure (Einasto et al. 2015). The cluster appears quite spherical in SZ, so we analyse it as such.
A2142: SZ
The radial, azimuthally averaged profile of the y-parameter in A2142, extracted as detailed in §2.1, is shown in Figure 5. A steepening can be seen around τ ∼ 1.6, flattening beyond τ ∼ 2.2. Here, the flattening seen just inward of the steepening (compare the data points to the non-shock, dotted-red curve) is very modest.
We first fit the profile with the gNFW-based models. Adding a shock provides a better fit to the data in all three shock variants. A strong virial shock is detected at the 3.1σ confidence level. All three shock models are consistent with a shock radius τs = 1.89 ± 0.06. Beyond this radius, some upstream component may still be included in the fit, so the Mach number in principle does not have to be very high; the inferred lower limit is Υ > 2.2 (Υ > 1.9) at the 1σ (2σ) confidence level.
We test if the shock detection is sensitive to the model and to its applicability at small radii. First, we repeat the analysis, using the β model instead of the gNFW model; the results do not change significantly. Next, we restrict the analysis to large, r > R500 radii only; again, the results remain similar.
A2142: γ-rays
As A2142 lies at an intermediate latitude (b ≃ 48.7°), here we adopt our nominal, fourth-order polynomial foreground as in R17, with a 95% point-source masking. The analysis is otherwise identical to that of A2319. The γ-ray flux and its excess are shown in Figure 6. The significance of the excess emission is shown and modeled in Figure 7. An excess of ∼ 2.2σ can be seen in the 2 < τ < 2.5 bin, the same bin showing excess emission in A2319 and in the stacked clusters in R17. The TS-based significance of the excess is 2.2σ (omitting the innermost, τ < 0.5 bin, due to possible contamination by one high energy photon; see below). The inferred shock radius is τs = 2.2 +0.2 −0.3, and the acceleration rate is ξeṁ = (0.7 ± 0.3)%, consistent with, and marginally higher than, estimates in other clusters. We may use the mean value ξeṁ ≃ 0.6% inferred from the stacking of LAT clusters as a prior for the analysis, yielding a 2.6σ detection of a shock at τs ≃ 2.2.
The τs estimates from SZ and from γ-rays are consistent with each other within 1σ. Using the tightly constrained value τs = 1.89 ± 0.06 from the SZ profile as a prior for the γ-ray analysis marginally raises the γ-ray significance to 2.3σ; the corresponding acceleration rate is ξeṁ = (0.60 +0.36 −0.25)%. A joint SZ and γ-ray analysis yields a virial shock signal at the 3.5σ confidence level. If we fix ξeṁ = 0.6% to the stacking value, the joint confidence increases to 3.9σ.
As expected, replacing the shell model with the planar model does not improve the fit for A2142. The ∼ 3.3σ excess evident in Figure 7 at the innermost, τ < 0.5 bin is almost entirely due to the single photon detected in the highest energy band; its significance is inflated due to the low foreground expected at such a small bin.
SUMMARY AND DISCUSSION
Motivated by the detection of leptonic signals from the virial shocks of galaxy clusters (K17, R17, and KR17) and by preliminary evidence for a morphological coincidence between this leptonic emission and an SZ cutoff in the y-parameter (K17), we present a joint analysis of SZ data from Planck (Figures 1, 2, and 5) and of γ-ray data from the LAT (Figures 3-4 and 6-7). The analysis focuses on Coma (Figure 1) and A2319 (Figures 2-4), for which γ-ray or SZ virial signals were already published, and supplements them with a new joint analysis of γ-ray and SZ data in A2142 (Figures 5-7), selected by its high mass and by data availability. Our results are summarised in Table 1. The imprint of the virial shock on the radial profile of the SZ y-parameter is detected at fairly high confidence levels in all three clusters, reaching 8.6σ in A2319 (H17), 4.1σ in Coma, and 3.1σ in A2142. These confidence levels are obtained with gNFW-based models; comparable or higher confidence levels are found when using β models instead. When incorporating the virial shock in either gNFW or β models, they fit the data well in all cases. Consistent shock parameters are inferred from the different model variants, even when masking small or large radii.
The LAT γ-ray excess near the virial radius is found to be 2.2σ, or slightly higher, in all three clusters. Such confidence levels are to be expected, based on previous studies (R17, KR17), when analysing an individual cluster, due to the limited photon statistics, the instrumental point spread function, and the strong Galactic foreground. The TS-based statistics show comparable significance levels, except in A2319, which suffers from a considerable Galactic foreground due to its low latitude. Adopting the mean CRE acceleration rate inferred from a previous stacking of LAT clusters (R17) reduces the fit to a one-parameter model, raising the significance of shock detection in all cases.
The detection of the γ-ray and SZ signals, and their interpretation as associated with each other and with the virial shock, are further supported by the inferred radii and properties of the virial shocks. First, while the shock radius τs is treated as a free parameter in each analysis of each cluster, it is found to be near the virial radius in all cases. Second, while in each cluster τs is treated independently in γ-ray and SZ analyses, the two values are found to be consistent with each other, within 1σ, in A2319 and A2142 (in Coma, non-sphericity complicates the comparison); this is illustrated by the vertical lines in Figures 4 and 7. Third, the shock Mach numbers are found to be high, as expected in a virial shock. And fourth, the acceleration rates ξeṁ inferred from the γ-ray signals are consistent among the three clusters (see discussion below) and with previous studies.
The SZ signature of a virial shock can be used to boost the sensitivity for the detection of the leptonic emission from the shock, or vice versa. One option is to use the shock radius inferred from SZ as a prior for the γ-ray analysis. Another option is to carry out a joint SZ-γ-ray analysis. Both methods are shown to raise the significance of the γ-ray detection. For example, a joint analysis of A2142 yields a 3.5σ shock detection. Further fixing the acceleration rate on the value inferred from stacking other clusters yields a higher, 3.9σ signal.
The SZ signals are strongly dominated by the high-pressure plasma downstream of the virial shock. Consequently, the inferred shock parameters are insensitive to assumptions on the upstream plasma. Lower limits can still be imposed on the strength of the virial shock. However, due to the uncertain foreground level and the low signal-to-noise near and beyond the virial radius, we are only able to impose lower limits with substantial uncertainty; the 2σ lower limits are of order Υ ∼ 2.
The acceleration rates ξeṁ inferred in the three clusters support previous estimates of order a few × 0.1%. Modeling the SZ signal and the distributions of galaxies near the virial shock, one can measure ṁ and break its degeneracy with ξe. For example, in A2319, the accretion rate was estimated (H17) as ṁ200 ≃ 2.0. Adopting the β-model scaling (R17) ṁ ∝ δ^(1/2) ∝ r_δ⁻¹, the accretion rate at the A2319 shock is found to be ṁ63 ≃ 2.0 × (63/200)^(1/2) ≃ 1.1, which gives ξe ≃ 0.5% in this cluster.
We find that the parameters of both gNFW and β models change substantially when taking into account the presence of a shock. This suggests that when modeling data beyond ∼ 1.5R500, the projected effect of the virial shock must be taken into account. Some flattening of the y-parameter profile is seen just inward of the virial shock. This is most pronounced in Coma, but is also seen in A2319, and possibly also in A2142. The flattening may be associated with recently accreted substructure or with a change in morphology, as suggested by the improved fit obtained for Coma when adding a filamentary component.
We find that the presence and parameters of the virial shocks can be inferred from the SZ data using the β model, without invoking the more complicated gNFW model; the results of the two approaches are consistent with each other. We similarly examine if the error bars σ_d(y_j), representing the diagonal of the covariance matrix, can be used to give a reasonable estimate of the uncertainties even without accounting for the off-diagonal terms. We find that for the parameters used in this study, the correlations among neighbouring bins in the radial y-parameter profile can be approximately accounted for by co-adding to each diagonal term the mean squared difference in y between the neighbouring bins,

σ(y_j)² = σ_d(y_j)² + (y_j − y_{j+1})²/2 + (y_j − y_{j−1})²/2 .

This is analogous to accounting for an unknown position within the radial bin. With this method, we can analyse previously published y-parameter profiles (Khatri & Gaspari 2016; Ghirardini et al. 2017), recovering the same results obtained here from the full analysis.
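A minimal sketch of this inflated-diagonal error estimate follows; it reproduces the formula above at interior bins, with only one neighbour term at the profile edges.

```python
# A minimal sketch of the inflated diagonal errors described above, which
# approximate neighbour correlations without the full covariance matrix.
import numpy as np

def inflate_errors(y, sigma_d):
    """sigma(y_j)^2 = sigma_d(y_j)^2 + (y_j - y_{j+1})^2/2 + (y_j - y_{j-1})^2/2."""
    var = sigma_d ** 2
    var[:-1] += 0.5 * (y[:-1] - y[1:]) ** 2   # forward-neighbour term
    var[1:] += 0.5 * (y[1:] - y[:-1]) ** 2    # backward-neighbour term
    return np.sqrt(var)

y = np.array([5.0, 3.0, 2.0, 1.4, 1.0])       # illustrative binned profile
print(inflate_errors(y, sigma_d=0.1 * np.ones_like(y)))
```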
We note that the acceleration rates ξeṁ inferred in the three clusters seemingly show a mild correlation with the cluster mass, M500. However, this tendency is not significant, and was not found among the 112 clusters analysed by R17. Furthermore, the rate inferred in Coma pertains to a different, prolate model, so there is a considerable systematic uncertainty when comparing it to the other clusters.
Cultural determinants affecting pedagogical decisions in content design: a case study
Purpose – Commercially produced educational materials often reflect the pedagogical beliefs and culture(s) of the content developers. While many teachers involved in teaching English as a foreign language have relied on commercially published content in the past, the advent of ubiquitous technology has afforded them the ability to create content that is contextualised and to share it with other educators across the globe. The purpose of this study is to investigate cultural determinants which affect the pedagogical decisions of teachers when designing content.
Design/methodology/approach – This case study, conducted at a higher educational institution in the Gulf, addresses the issues that arise when the cultures or ideologies of educators as material developers are different to those of the target audience. Three semi-structured interviews with teachers were conducted in an effort to understand cultural determinants that influence decision-making about pedagogy when creating in-house content to motivate undergraduate students on an English language program in the United Arab Emirates.
Findings – The results of this study indicated that the participants maintained mainly essentialist perspectives of local cultures and sub-cultures, and their thinking in content creation was not all that different to that of commercial publishers.
Practical implications – This study holds implications for awareness-raising and pedagogical training for educators involved in in-house content development.
Originality/value – This case study addresses an area that has been under-researched in the Gulf region.
Introduction
The United Arab Emirates (UAE) is home to over 200 nationalities with Arabic as the official language of the country and English as the main language of communication amongst the large expatriate community (UAE Fact Sheet, 2020). While state schools educate Emirati students in Arabic, tertiary education is delivered in English and for students to enter undergraduate programmes in universities, they have to pass an English examination (Al Hussein and Gitsaki, 2018). In addition to the state exam, other internationally recognised English language proficiency tests can also be used for university entry, of which the International English Language Testing System (IELTS) is the most commonly used.
As with most tests of its kind, IELTS benchmarks are based on "native-speaker" models of communication and the teaching materials used to prepare students for this test are developed by authors based in what Kachru (1992) terms as inner circle countries (i.e. North America, UK, Australia and New Zealand). This means that this teaching material along with the implied inner circle inherited pedagogy (evident within the language skills activities in the materials) may be detached from or even irrelevant to the cultural contexts of the target students (Bax, 2003). A more sinister view is that inner circle culture(s) and ideologies are deliberately promoted through English language education materials and policies as superior, perpetrating an aura of imperialism over other cultures (Canagarajah, 1999; Phillipson, 1992, 2009). This is especially pertinent, if one considers that the majority of interactions today are between speakers who require a globalised form of English (Graddol, 2006) or English as a Lingua Franca (ELF) (Hopkyns, 2020; McKay, 2000; Seidlhofer, 2011) rather than a variety culturally attached to an inner-circle country.
While the discussions around linguistic imperialism and English hegemony are not new in the field of English as a Foreign Language (EFL), the provision of ubiquitous technology has added another dimension to these issues. Using technologies, teachers in localised EFL contexts have the opportunity not only to create their own content but to share it very easily with other teachers on a global scale. Publishing houses are no longer alone in providing content; millions of teachers can now do the same. Viewed positively, teachers now have the opportunity to create materials and use pedagogies that are highly contextualised, avoiding the "one size fits all" approach to language learning previously propagated in commercially produced teaching materials. However, as this is a relatively recent phenomenon, more research is needed into how teachers' beliefs affect the choices they make in designing materials for their own teaching context. To this end, the present study aimed to explore the cultural determinants influencing the design of bespoke teaching materials in an intensive English language programme. The low English language proficiency level of the undergraduate students in the context of this study necessitated the creation of custom-built resources designed in-house by teacher content creators (TCCs). The main aim of the programme was to use an inquiry-based approach to empower students to undertake an authentic research project while using edited English text and media as sources; that is, material that is factually accurate but has been created at a level of English more suitable for lower proficiency learners.
The study hypothesised that many of the decisions made by the western TCCs, would be culturally linked to their own backgrounds; embedded in a belief system that has roots in linguistic imperialism and dominant inner-circle pedagogical principles. This view stems from a neo-Vygotskian socio-cultural paradigm and more specifically from the work of Mercer and Fisher (1997) who understand learning and the development of cognition as something that is heavily influenced by culture and that it is socially constructed. With that in mind, ideas about learning and teaching differ from one culture to another and when two or more cultural entities overlap a "culture of dealing" (Holliday, Hyde and Kullman, 2004) is established. This means that one's perception of the other culture(s) is based on established beliefs from their own culture but is also inclusive of the context in which they encounter the other culture, in this case, the language classroom.
Thus, this exploratory research study aimed to investigate the cultural determinants the TCCs were driven by when making pedagogical decisions on content. To do so, a case study was drafted from three semi-structured interviews with TCCs to answer the following research question: What cultural determinants affect TCCs' decisions on what is valuable or worth learning when designing materials for undergraduate students on an intensive English language programme?
Literature review
Towards an understanding of culture
One of the main obstacles in the study of cultural determinants is defining culture itself. It would seem that the very notion of culture is fluid. Indeed, in their book Redefining Culture, Baldwin et al. (2008) have drafted just under 90 pages of definitions of culture, all of which are equally valid across disciplines in academia. For the purposes of defining culture within the context of this study, it is more worthwhile adopting a notion of culture within the realms of applied linguistics and language learning. To that end, Holliday et al. (2004) provide the distinction between two paradigms which can be used to differentiate essentialist from non-essentialist notions of culture. Their small culture lens allows researchers the liberty of not engaging in "hard" superficial divides such as ethnic or national identities, but rather prefers to see "softer" culture(s) within social groups. Small culture is composed of factors like events, interactions and rituals that a specific group of people "habitually engage" in. Small cultures are, therefore, context specific and not generalist in nature (Holliday, 1999). Understanding small cultures allows teachers (and researchers) a better understanding of their classroom without making overarching sweeping assumptions based on ethnic stereotypes. Therefore, this view of culture is more useful in the context of the present study.
Taking on a non-essentialist perspective on culture means authors like Hudson (2012) deconstruct social groups within EFL in the UAE. His rhetoric not only includes condemnation of discriminatory hiring practices favouring inner circle teachers over teachers more attuned to local cultures, but he also begins to paint a picture of the highly complex nature of students in the UAE; how rules like females wearing the niqaab (a veil covering the face worn by some Muslim women, especially in the Gulf region) in front of male teachers from inner circle countries are on occasion discarded, if that teacher is trusted. For Hudson, the dichotomy lies in the need for (self-)censorship, on the one hand, and the risk of losing one's job for unacceptable "cultural" attitudes on the other. Indeed, the complexity of linguistic and cultural dualism in higher education in the Gulf region is something that Findlow (2006) discusses. In recognising the compulsory and potentially hegemonic and imperialistic oriented studying of English in the UAE (Phillipson, 1992; Salem, 2012; Weber, 2011; Zughoul, 2003), Findlow (2006) allows us to understand the conflict (or balance, depending on one's perspective) between traditional "Arab-Islamic correctness" and English, the language of modernity and high social status (at least as viewed by some social groups in the UAE). The convoluted nature of cultural interactions within the UAE is difficult to describe homogeneously, which in turn means that the application of hegemonic language teaching practices from the West is not equal across all contexts within the UAE, even as such practices enormously affect curriculum, material development and pedagogy (Mazawi, 2003), making this case study all the more significant.
Transferring pedagogy as a product of culture
The dangers of bequeathing a cultural product, such as pedagogy, into another culture are evident in the field of EFL in the teaching materials and implied pedagogies of the Communicative Language Teaching (CLT) approach (Richards and Rodgers, 1986). CLT is synonymous with notions of student-centred, reflective and minimally guided learning. It has been at the forefront of EFL for over thirty years, despite being criticised, most prominently by Bax (2003) and his call for a context-based approach to language teaching, but also by Canagarajah (1999) and Phillipson (1992 and 2001), who both highlight a more ominous side to CLT and its role in propagating linguistic and cultural imperialism.
Similarly, in the field of medical education, Frambach et al. (2012) question the applicability of inquiry-based approaches to teaching like problem-based learning (PBL), steeped in Western constructivist tradition, for the fostering of self-directed learning (SDL) in Middle Eastern students. The study found that the "cultural factors of uncertainty and tradition [. . .] [pose a challenge] to Middle Eastern students" (Frambach et al., 2012, p. 738). This is not to say that successful implementation of such constructivist approaches to teaching and learning are not possible but rather that cultural alternatives may emerge (Frambach et al., 2012) or that teachers may exercise "informed eclecticism" (Sowden, 2007) when deciding what methodologies work best in their contexts.
Informed decision-making is indeed critical when deciding on content and pedagogy in intensive English foundational programmes, especially understanding what students go through when they transition from state secondary schools to tertiary education in the UAE. For twelve years, Emirati students attend state primary and secondary education where "Arabic supplies all or most communication needs [. . .] [while in tertiary education, the transition to] learning in English requires a substantially changed cultural mindset" (Findlow, 2006, p. 27). Students are suddenly expected not only to understand content in English but to learn through the medium of English and adopt learning strategies, note-taking skills, role-playing, "foreign" notions of critical thinking and Western concepts of collaboration leading to learning shock (Griffiths, Winstanley and Gabriel, 2005). This is not to say that they are unable to engage in these activities. Rather the question remains as to whether these should be imposed on them.
Compounding this problem are some teachers in EFL, predominantly from inner-circle countries (Kachru, 1992), who live in "expat" bubbles or sub-cultures. In other words, there are social groupings of teachers who recreate their own inner circle reality within the UAE host culture, which means that regardless of how long they "experience" the Gulf, they are predominantly culturally isolated from their students. Woods's (1996) notion of foreign language teachers' Beliefs, Assumptions and Knowledge (BAK) explains that teachers realise these three constructs as a continuum and in many cases they are undeniably linked and often indistinguishable from each other. Zheng (2013, p. 398) defines beliefs as "study-bound, culture-based, context-emergent and even person-bound". Thus, in combination with "expat" bubble culture, it is not difficult to see a lack of awareness or cultural sensitivity on behalf of EFL teachers, especially when there are established belief systems in place. In addition, Riley's (2009) synthesis of Pajares's (1992) fundamental assumptions about beliefs (which are equally applicable to students as well as teachers) highlights that beliefs are a result of culture, that they are formed early on, and that they are difficult to change in adulthood. The issue of whether new background awareness of cultural interactions can ultimately change behaviour is, therefore, questionable. This can only be considerably worse when content creators employed by commercial publishers are working within inner-circle countries without any contact with the spectrum of contexts they are expecting to apply their version of EFL to.
Given these concerns, the current study sought to identify influences on content-creation decision making and the extent to which these influences are driven by culture. More specifically, the study was guided by the following questions:
RQ1. What guides TCCs when they select content for their materials?
RQ2. What assumptions are TCCs making about their students when considering content and pedagogy?
RQ3. To what extent are the factors affecting the TCCs' decisions culturally driven?
Participants and context
This qualitative case study focussed on the English language teachers who were involved in creating content for a pilot English language course for undergraduate EFL students in a university in the UAE. The course used a scaffolded inquiry-based approach to language learning, shifting the focus from English language learning to English as a means of addressing a challenge. To that end, a specialised library of bespoke designed resources was developed by the TCCs. The pool of teachers who had worked on the pilot course was limited in number and so convenience sampling was used. In addition, teachers from inner circle countries were selected to participate in this study because they not only made up the majority of the TCCs but also the main teaching body in the English language programme of the target university. The TCCs' experience working on the pilot course and their intimate knowledge of it meant that they were able to provide rich data (Cohen, Manion and Morrison, 2007).
Out of 11 TCCs, three were selected to participate in the study. They were interviewed using individual semi-structured interviews. To maintain anonymity, interviewee names were replaced with pseudonyms (Alex, Charles and Ben). Table 1 provides short profiles for each of the teacher participants.
Data collection and procedures
Once the three teachers were selected from among the TCCs, they were approached and asked if they would be interested in participating in the study. They were given an information letter detailing the research and were asked to sign consent forms. They were assured of their privacy and the confidentiality of the data and reporting procedures and they were informed of their ability to withdraw their consent at any time without prejudice.
Table 1. Participant profiles

Alex: Alex was an older teacher with over twenty years' experience teaching in EFL settings. Like many of the teachers in the English programme, he had taught English in the Far East as well as English as a Second Language (ESL) in his inner circle home country. He had spent over half his years of experience in the Gulf area and taught in Oman, Saudi Arabia and Bahrain. Alex taught male students exclusively while serving on the programme. Despite the extensive time he spent living and working in the Gulf, he did not speak Arabic.

Charles: Charles was a teacher that started working in the UAE directly from grad school. He had over ten years' experience and had moved from emirate to emirate building his knowledge of the region. Despite being from an inner circle country, he was fluent in Arabic and also had many Arab friends, mainly from the Levantine area. He said that the English project was the first time he had really felt that things were changing for the better.

Ben: Ben had worked in Qatar and Saudi Arabia for twelve years before coming to the UAE. He said he was glad to be teaching in the UAE where things were more liberal. Ben had experience teaching students in his own inner circle country and said that the students there acted differently. He was not approached to write content for the English program but rather selected the project himself because he felt that it was something he was interested in developing. Ben did not speak Arabic.
The semi-structured interview protocol was prepared using an interview evaluation grid (Gillham, 2000; Kvale, 1996) and was piloted on a TCC on the English program whose data were not included in the study. After the pilot, a second version of the interview protocol was drafted (Appendix) and the researcher proceeded to interview the TCCs. Each of the individual semi-structured interviews lasted for approximately 30 min and they all took place within a week. Interviews took place in each of the TCCs' classrooms and they were audio recorded with the participants' permission.
Data analysis
The data from the interviews were manually transcribed and the transcripts were shared with each of the participants to ensure they had no objections with any of the interview contents. Following that, thematic analysis was used and the data went through two coding cycles. Initially this included values coding and then an iteration of pattern coding to enable the researcher to identify specific themes within the data (Saldaña, 2009). The data was then processed using the six-phase structure developed by Braun and Clarke (2006). The process involved the following steps: familiarisation with data; generation of codes; searching of themes; reviewing of themes; defining and naming themes; and finally, the case study report. This method allowed for a more holistic and systematic approach in analysing the interview data.
Results and discussion
The impetus for the bespoke English course was to address the low student motivation and engagement when learning with commercially produced textbooks for IELTS preparation. The existing teaching materials tended to be extremely inner circle in their approach and style disregarding the local culture. For example, the textbooks would cite famous historical figures, concepts or ideologies from the West, such as Fibonacci, Archimedes and Chomsky rather than historical figures that Emirati students would recognise and relate to; they would make references to British, American or Australian English rather than Indian, or Filipino English which are more common in the UAE; they would ask students to debate about student debt which is not a major problem in the UAE or whether people should have fewer children because of overpopulation, suggesting that having large families is wrong although it is considered a blessing by many in the UAE. These topics exemplify some of the shortcomings of commercially supplied teaching materials especially considering that many students may want to learn English but may not be willing to "receive the cultural load of the target language" (Alptekin and Alptekin, 1984, p. 17). During the interviews, the TCCs reported that the existing materials were responsible for the low student motivation and engagement in the classroom and that rethinking the topics in the new course materials was imperative.
In choosing the topics for the new materials, the TCCs held very firm views about what was necessary for students to learn, outside their remit as English language instructors. There was strong evidence to suggest a genuine sense of duty or what was considered "right" by the TCCs, and this was reflected in their rationale in choosing topics. However, all three teachers were concerned about "censorship" on the project and not being able to give students a "complete" education. There was no doubt among teachers as to whether they should teach certain subject areas; rather, they felt compelled to "enlighten" their students on topics they believed they should know about. For example, Alex held a very strong view on promoting the theme of immigration and human trafficking within the new language course, stating, "[. . .] they [students] need to know about the world outside [. . .]" He felt frustrated, thinking that he was not able to talk about themes such as human trafficking, saying, "I was quite upset about not being able to put these into the library [. . .] Come on, the students all know it happens, why are we hiding it from them?" (Alex, lines 66-69). While there are topics that are considered culturally sensitive and, therefore, best avoided, Alex's decision on what to include in course materials was driven by his personal belief about what is considered taboo within this cultural context. In reality, human trafficking is openly discussed in the UAE and there are government portals, policies and public websites that inform the public of this crime. The fact that his personal misconceptions overrode the reality of what is and is not acceptable led him to limit the range of topics in the materials he designed, depriving students of the opportunity to engage in rich discussions on global issues like human trafficking and practise their language skills.
Remarkably, even though some topics were considered taboo by the TCCs, they still felt it necessary to protest their dissatisfaction with the status quo as they perceived it. Ben, for one, voiced this dissatisfaction openly in his interview. This suggests that, because of their inner circle country culture, TCCs felt duty-bound to expose students to topics implicitly even when they believed those topics were considered taboo and should not be explored.
When thinking about the topics and learning activities of the new course, the TCCs' beliefs about student preferences were based on what TCCs thought students were capable of. These beliefs were not based on actual student language ability but, instead, were motivated by their own cultural preconceptions of what students could and could not do based on gender. A good case in point is Ben, who thought that male students would do best at topics related to enterprise, such as running a business for a day. To be more specific, he said: "I don't know about the girls, but I think the Shabab [laddish culture found among young male Emirati adults] would prefer running a business. It's more down their alley. They all want to become businessmen, don't they? [. . .] or managers [. . .]" (Ben). Similarly, the other two teachers were also influenced by such essentialist beliefs when making decisions about content creation. For example, during the interview, Charles said of his female students: "They're not all like that, you know there are the girls that sit up the front, you know, the ones that always wave their hands in the air when you ask a question [. . .] they really are the exception to the rule" (Charles). These excerpts demonstrate TCCs' beliefs that student interests and motivation are tied down to gender-based and cultural perspectives, as though being part of a certain gender makes one's willingness or unwillingness to learn inherent.
The analysis of the interview data also showed that the TCCs held strong views about how students should learn. They considered the inquiry-based approach of the new materials the right way to learn. Charles went so far as to say: "They need to learn how to learn [. . .] they're not used to deep learning are they? [. . .] We need to teach them how to collaborate and be critical thinkers, that's what being in today's world means" (Charles). This attitude expressed by Charles is indicative of the cultural imperialism that Bax (2003), Canagarajah (1999) and Phillipson (1992, 2001 and 2009) refer to, and demonstrates widely held views by Western teachers that their educational system is superior to that of their students and, therefore, that they know best how a group of students should learn.
Not only did the TCCs have preconceived ideas of how students learn but also of what students expected of them in the classroom. Even though the content that was created was supposed to follow an inquiry-based approach to teaching and learning, all three teachers believed that there should be an element of structured learning following a traditional teacher-centred approach in combination with the experiential learning approach that the course aimed to use. For example, Alex thought that the mixture of structured learning combined with inquiry-based learning created the best environment for the students. This excerpt from the interview with Charles best illustrates the point: "Look [. . .] you can't expect students to make the switch for their teacher to be the 'guide on the side' when at high school they have been used to them being the 'sage on the stage'" (Charles). This shows cultural understanding from one teacher in particular and is reminiscent of Holliday's (2006, p. 385) "othering" of students, especially students that have difficulty "with the specific types of active, collaborative, and self-directed "learner-centred" teaching-learning techniques [. . .] [considered] superior within the English speaking West". The TCCs' beliefs about what constitutes best practice were tainted by their preconceptions of what they thought students expect from teachers in the classroom. This in turn affected their teaching approach when designing the content and the related learning activities.
Summary
The pilot English course aimed at establishing a library of materials at the appropriate language proficiency level for the target students, to involve them in an inquiry-based learning experience that would address issues of student motivation and engagement. From the limited data that this study obtained, it is evident that several cultural determinants restricted the range of topics, teaching approaches and types of activities included in the materials. While the materials aimed for active student engagement in the classroom, the TCCs' predetermined ideas of what students expected them to do resulted in a preference for a hybrid of inquiry-based learning and structured learning based on a traditional teacher-centred approach. In many ways, this echoes what Frambach et al. (2012) referred to when they described cultural alternatives: in this case, more prescriptive teaching approaches to scaffold the inquiry-based activities. The difference here is that the TCCs' decisions were based on misguided notions of student abilities and preferences. In the end, TCCs exercised what Sowden (2007) referred to as "informed eclecticism" when it came to the approaches they used, albeit influenced by their own cultural biases. When it came to learning, the TCCs adopted the view that they knew best how students learn, driven by a sense of cultural imperialism. This confirms an earlier study by Mazawi (2003), which showed how Western teachers' hegemonic teaching practices greatly affect curriculum, material development and pedagogy. This point was further compounded by the TCCs' predetermined beliefs and assumptions about what students were able to do based on their gender. Finally, this project wanted to address the insufficiency of commercially produced materials with regard to topics, which tended to be influenced by inner-circle biases with little or no relevance to the local student experience. However, even the TCCs' decisions on what topics to include in the materials were culturally biased. They felt that it was their duty to determine what was right for students to learn, but their beliefs about what was taboo and their personal misconceptions limited which topics were included in the materials and how they were approached. Western teachers exercising self-censorship is something Hudson (2012) also observed in his study, where he mentioned that this was done by Western teachers working in the Middle East to preserve their jobs. In the present study, TCCs exercising self-censorship led them to overcompensate and restrict the range of topics in the new course. Additionally, TCCs felt so strongly about what their students should learn that they resorted to subversive methods to expose their students to certain topics they felt to be important but considered inappropriate for the local culture. This is in line with Woods's (1996) BAK model, which stipulates that foreign teachers hold beliefs, assumptions and knowledge that are inextricably linked to each other and not easily changeable (see Pajares, 1992).
Conclusion
This was an exploratory case study that sought to investigate the cultural determinants governing decisions on pedagogy and content among teachers creating EFL materials in-house, and the extent to which these mirrored those of commercially produced materials. Convenience sampling was used to select TCCs from inner-circle countries working in the Gulf. Semi-structured interviews were conducted to gather data on the cultural determinants that influenced their decision-making process. The data analysis showed that the TCCs picked topics based on their own cultural biases, made assumptions about student motivation based on gender and presumed that certain topics were taboo. Even when topics were considered taboo, there was an implied suggestion that students should be exposed to them indirectly, allowing them to reach the disputed topics naturally on their own, essentially circumventing the need for the inclusion of those topics in the materials. By and large, this small-scale qualitative study indicated that participants held an essentialist view of the local culture and, in doing so, exhibited attitudes consistent with the view that Western educational ideologies are superior.
The present study highlights the need for raising awareness among teachers engaging in localising and designing EFL materials in what Holliday (1999) refers to as small cultures. Such professional development would help in closing cultural gaps (affecting subsequent decision-making) between inner circle teachers and local cultures. This would aid teachers' understanding of their students and their needs, untethered from their own cultural bias. A more eclectic approach when considering pedagogy and alternative language learning methodologies would also be useful but only when this is guided by a non-essentialist cultural understanding. This would also help materials developers to understand that Western pedagogical approaches are not always ideal in all contexts.
This project was based on the premise that teachers who lived within the local culture would create materials more suitable for their students than "one size fits all" commercially produced materials. In contrast, the study showed that even teachers creating materials for their own students are (mis)guided by essentialist cultural understandings. The significance of the study findings is amplified when considering how the use of ubiquitous technology facilitates not only the design of localised materials but also the sharing of those materials with the wider professional community through a plethora of online platforms. The global spread of digital material has the potential to carry with it messages, assumptions, ideologies and paradigms which can impact teaching and learning on a much grander scale than before.
This study would have benefitted from more in-depth textual analysis of the materials produced by the TCCs, as this would have provided further evidence to corroborate the findings from the interviews. Furthermore, a larger sample of TCCs would have increased the generalisability of the results. However, time constraints prohibited the inclusion of more subjects in the study. Future studies should also consider including Arab-background TCCs in the sample, as this would provide the opportunity to compare results outside inner-circle teacher groups.
Despite its limitations, this exploratory case study does raise important points that need to be considered when teachers create their own materials, especially now more than ever. The recent COVID-19 pandemic has created an unprecedented need for teacher-curated digital content for remote, online teaching and learning. With modern digital devices, teachers have an opportunity as educators to develop highly contextualised, personalised approaches to learning. In doing so, teachers need to consider their students and their learning needs without grouping them into essentialist categories.

Appendix: interview schedule (excerpt)

What qualities do teachers need to best teach this course?

Closing the interview:
Thank the participant for taking part in the interview.
Inform them that the data will be transcribed and the researcher will be in contact with them to provide them with the opportunity to destroy the audio recording.
Inform them that they will have the opportunity to check through the transcription and omit any or all of the data.
Inform them that you will be in contact with them to schedule an observation.

Additional prompts which may be used to help participants expand on their responses:
What did you mean. . .?
Can you give me more detail?
|
2021-05-07T00:03:10.349Z
|
2021-03-05T00:00:00.000
|
{
"year": 2021,
"sha1": "98153fd74439e0b8ab3ccae420f474a43c6db39d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1108/lthe-10-2020-0053",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b9924dfdf6135289f5860f9c274925d581e79957",
"s2fieldsofstudy": [
"Education",
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
254275222
|
pes2o/s2orc
|
v3-fos-license
|
A Large Muon EDM from Dark Matter
We explore a model of dark matter (DM) that can explain the reported discrepancy in the muon anomalous magnetic moment and predict a large electric dipole moment (EDM) of the muon. The model contains a DM fermion and new scalars whose exclusive interactions with the muon radiatively generate the observed muon mass. Constraints from DM direct and indirect detection experiments as well as collider searches are safely evaded. The model parameter space that gives the observed DM abundance and explains the muon $g-2$ anomaly leads to the muon EDM of $d_{\mu} \simeq (4$-$5) \times 10^{-22} \, e \, {\rm cm}$ that can be probed by the projected PSI muEDM experiment. Another viable parameter space even achieves $d_{\mu} = \mathcal{O}(10^{-21}) \, e \, {\rm cm}$ reachable by the ongoing Fermilab Muon $g-2$ experiment and the future J-PARC Muon $g-2$/EDM experiment.
Introduction
The near-future discovery of the muon electric dipole moment (EDM) is strongly motivated by the reported discrepancy in the muon anomalous magnetic moment $(g-2)_\mu$, which may indicate the existence of physics beyond the Standard Model (SM) at or below the TeV scale [1-4] (for a review, see ref. [5]), because the same new physics contribution naturally has an imaginary part relevant to the EDM. The current upper limit on the muon EDM is $|d_\mu| < 1.8 \times 10^{-19}\,e\,{\rm cm}$ (95% C.L.) [6]. There is also a study of indirect bounds on the muon EDM from measurements of EDMs of heavy atoms and molecules, which indicates $|d_\mu| < 2 \times 10^{-20}\,e\,{\rm cm}$ [7]. Moreover, the sensitivity to the muon EDM will be improved in the near future: the ongoing Fermilab Muon g−2 experiment [8] and the projected J-PARC Muon g−2/EDM experiment [9] will explore the muon EDM at the level of $10^{-21}\,e\,{\rm cm}$, while the Paul Scherrer Institute (PSI) muEDM experiment [10-12] will reach a sensitivity of $6 \times 10^{-23}\,e\,{\rm cm}$.
A fermion EDM $d_f$ is described by the dimension-five operator $\mathcal{L} \supset -\frac{i}{2}\, d_f\, \bar f \sigma^{\mu\nu}\gamma_5 f\, F_{\mu\nu}$, where $f$ is a Dirac fermion, $\sigma^{\mu\nu} \equiv \frac{i}{2}[\gamma^\mu, \gamma^\nu]$, and $F_{\mu\nu}$ is the photon field strength. Since this operator requires a chirality flip, and left- and right-handed fermions carry different charges in the SM, we actually need a Higgs field insertion, which makes the EDM operator effectively dimension-six. Therefore, a new physics contribution to a fermion EDM scales as $v_H/M^2$, where $v_H$ and $M$ denote the Higgs vacuum expectation value (VEV) and a new physics mass scale, respectively.
To estimate the expected size of the muon EDM, we can consider four classes of new physics that generate the muon EDM as well as the anomalous magnetic moment (footnote 1):

• Spurion approach. The chirality flip required to generate the EDM operator is provided by the muon Yukawa coupling $y_\mu$ or some coupling proportional to $y_\mu$. When the muon EDM $d_\mu$ is generated at the $k$-loop level, we expect the schematic scaling $d_\mu \sim e\,\delta_{\rm CPV}\left(\lambda^2/16\pi^2\right)^k m_\mu/M^2$. Here, $\delta_{\rm CPV}$ and $\lambda$ represent the size of the CP-violating phases and couplings involved in the loop, and $m_\mu$ is the muon mass. Models in this class have been discussed in refs. [15-21].
• Flavor changing approach. If the muon is converted to the tau lepton by a lepton flavor violating (LFV) interaction, the chirality flip can be provided by the tau Yukawa coupling $y_\tau$. In this case, the estimate is obtained from the spurion expression by replacing one coupling with the LFV coupling $y_{\mu\tau}$ and the muon mass with the tau lepton mass $m_\tau$. Refs. [22-25] have explored models in this class. Note that if the model has a scalar leptoquark with an appropriate charge assignment, the chirality flip can be picked up from a quark Yukawa coupling, e.g., the top Yukawa coupling. Moreover, there is an enhancement due to the color factor $N_C = 3$. Refs. [26-32] have explored such a possibility in the context of the electron EDM, and ref. [33] presented a general discussion for the case of the muon g−2 and EDM. In addition, a model with extra vector-like leptons also has the potential to predict a large muon EDM [34,35] due to the chirality flip on a heavy lepton line.
• Radiative stability approach. New physics that produces the muon EDM also generates the muon mass by removing the attached photon. If we simply require that such a contribution to the muon mass does not exceed the correct value, the size of the muon EDM is expected to be of order $d_\mu \sim e\,\delta_{\rm CPV}\, m_\mu/M^2$, because the same loop factor and coupling $\lambda$ are shared by the generated muon mass and the EDM.
• Tuning approach. If the muon mass generated by new physics that produces the muon EDM exceeds the correct value, a fine-tuning is required. This (unlikely but logical) possibility allows us to obtain a very large muon EDM, bounded only by the perturbativity of the couplings, $\lambda \lesssim 4\pi$.

Table 1 shows the mass scales of new physics that produce the muon EDM at the one/two-loop level probed by the projected PSI muEDM experiment [10-12] (the ongoing Fermilab Muon g−2 experiment [8] and the future J-PARC Muon g−2/EDM experiment [9]):

Approach               PSI muEDM (Fermilab/J-PARC)
Radiative stability    5900 GeV (1400 GeV)
Tuning                 1.0 × 10^6 GeV (2.5 × 10^5 GeV)

Table 1: Mass scales of new physics that produce the muon EDM at the one/two-loop level probed by the projected PSI muEDM experiment [10-12] (the Fermilab Muon g−2 and J-PARC Muon g−2/EDM experiments [8,9]) for each approach presented in the main text. Here, we assume $\lambda \approx 0.65$, of the order of the weak coupling constant, and $y_{\mu\tau} \approx 0.3$, which is roughly the maximum value allowed by the measurement of the branching ratio of $h \to \mu\tau$ (see, e.g., ref. [23]). For the leptoquark model, the mass scale is enhanced by $(y_{\mu t}/y_{\mu\tau})\, N_C\, m_t/m_\tau \approx 57\, y_{\mu t}$, where $y_{\mu t}$ is the leptoquark coupling to the muon and the top quark.

Aside from the tuning approach, the table indicates that the radiative stability approach generates the largest muon EDM, and its near-future measurements can probe mass scales above the TeV scale. The present paper explores this fascinating possibility for
the first time through the study of a concrete model realizing the radiative stability approach. We consider a model of dark matter (DM) that can address the muon g−2 anomaly. A DM fermion and new scalars exclusively couple to the muon, which leads to the radiative generation of the muon mass. The model contains a new CP-violating phase and produces a muon EDM. We will find that the model parameter space that gives the observed DM abundance and explains the muon g−2 anomaly leads to a muon EDM of $d_\mu \simeq (4$-$5) \times 10^{-22}\,e\,{\rm cm}$, probed by the PSI muEDM experiment. Furthermore, it will be shown that another viable parameter space even achieves $d_\mu = \mathcal{O}(10^{-21})\,e\,{\rm cm}$, reached by the Fermilab Muon g−2 and J-PARC Muon g−2/EDM experiments, which is consistent with the estimate of Table 1.

Table 2: Charge assignments for the relevant particles. $L^\mu_L$ and $\mu_R$ represent the second generation of the left- and right-handed leptons, and H is the SM Higgs field. The hypercharge of $\psi_{L,R}$ is taken as $Y_\psi = 0$ in the present paper. $L_\mu$ and X are $Z_2$ symmetries associated with the muon number and the exotic particle number, while $S_a$ is a softly broken $Z_2$ symmetry forbidding the tree-level muon Yukawa coupling.

The rest of the paper is organized as follows. Section 2 starts with the description of our DM model and explores the mass spectrum. We then calculate the radiatively generated muon mass and coupling to the Higgs boson, as well as the muon EDM and the anomalous magnetic moment; they are all induced at the one-loop level. We also discuss deviations of the muon couplings to the Higgs and Z bosons from those of the SM. In section 3, the phenomenology of DM in our model is explored. Section 4 summarizes the independent parameters of the model and then presents our results, identifying the parameter space that gives the observed DM abundance and explains the muon g−2 anomaly and indicating the size of the muon EDM. In section 5, we give conclusions and discussions. Loop integrals and full one-loop expressions are summarized in the appendices.
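As a quick numerical cross-check of the radiative stability row of Table 1, the short sketch below (added here for illustration, not taken from the paper) inverts the scaling $d_\mu \sim e\,\delta_{\rm CPV}\, m_\mu/M^2$ with $\delta_{\rm CPV} = 1$:

```python
# Rough cross-check of the "radiative stability" row of Table 1,
# assuming d_mu ~ e * delta_CPV * m_mu / M^2 with delta_CPV = 1.
hbar_c_cm = 1.973e-14   # GeV^-1 expressed in cm
m_mu = 0.105658         # muon mass in GeV

for label, d_sens in (("PSI muEDM", 6e-23), ("Fermilab/J-PARC", 1e-21)):
    M = (m_mu * hbar_c_cm / d_sens) ** 0.5   # mass reach in GeV
    print(f"{label}: d_mu = {d_sens:g} e cm  ->  M ~ {M:.0f} GeV")
# -> roughly 5.9 TeV and 1.4 TeV, matching Table 1.
```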
Model description
Our DM model is based on models proposed in ref. [37], which radiatively generate the muon mass and explain the muon g−2 anomaly. The model contains a single Dirac fermion ψ and two scalar fields φ, η. Charge assignments for the relevant particles are shown in Table 2. The present paper focuses on the case with $Y_\psi = 0$ (footnote 3). We introduce two $Z_2$ symmetries, $L_\mu$ and X, associated with the muon number and the exotic particle number, respectively. The former symmetry (footnote 4) makes it possible to avoid severe constraints from lepton flavor violating processes, while the latter stabilizes the lightest exotic particle, which is identified as DM. In addition, we assume a softly broken $Z_2$ symmetry $S_a$ to forbid the tree-level muon Yukawa coupling.

Footnote 3: The model with $Y_\psi = -1$ has a singlet real scalar η, and a CP phase appears in the scalar sector. In this case, however, CP-violating effects necessarily involve the SM Higgs VEV, and therefore the muon EDM is suppressed when the exotic particle masses are set to be around a TeV.

Footnote 4: The $L_\mu$ symmetry can be enhanced to a global $U(1)_{L_\mu}$ symmetry when $\lambda'_{H\phi}$ in Eq. (2.2) is set to zero. This value is irrelevant to our current analysis. Note that even if we do not have the $U(1)_{L_\mu}$ symmetry, the $B - 3L_e$ and $B - 3L_\tau$ numbers are conserved (for a review, see, e.g., Ref. [38]) and there is no washout of the baryon asymmetry in the early universe.

The charge assignments
lead to the interaction and mass terms of the Lagrangian collected in Eq. (2.1). Note that all couplings in $V_{\rm scl}$ and $y_{\phi,\eta}$ can be made real and positive by field redefinitions, while one phase among $m_D$, $m_{LL}$, and $m_{RR}$ cannot be removed. In fact, the combination $m_{LL} m_{RR}/m_D^2$ is invariant under phase rotations, and we define the physical phase of the model as $\theta_{\rm phys} \equiv \arg\!\left(m_{LL}\, m_{RR}/m_D^2\right) = \theta_L + \theta_R - 2\theta_D$, where $\theta_{D,L,R}$ denote the phases of $m_D$, $m_{LL}$, and $m_{RR}$, respectively. Since ψ is a singlet under the SM gauge symmetry, we can define left- and right-handed Majorana fermions as $\psi^M_{L,R} \equiv \psi_{L,R} + (\psi_{L,R})^c$, with $\psi_{L,R} = P_{L,R}\psi$ and $(\psi_{L,R})^c \equiv i\gamma^2(\psi_{L,R})^*$. We assume that the exotic scalars φ, η do not acquire nonzero VEVs. As a result, no mixing between H and φ/η is induced, and hence we can parameterize the SM Higgs field as $H = \left(G^+,\ \frac{1}{\sqrt{2}}(v_H + h^0 + iG^0)\right)^T$, where $v_H = 246.22$ GeV is the SM Higgs VEV, $G^+$ and $G^0$ are Nambu-Goldstone modes, and $h^0$ is the SM Higgs boson. Note that the minimization condition of the potential fixes the Higgs mass-squared parameter (Eq. (2.5)). Below, we will present the mass spectrum of the exotic particles and calculate the radiatively generated muon mass and coupling to the Higgs boson, as well as the muon EDM and the anomalous magnetic moment. Deviations of the muon couplings to the Higgs and Z bosons from those of the SM will also be discussed. Note that for the neutrino sector, due to the muon number symmetry, a further extension is needed to reproduce the correct neutrino mixing angles. We discuss some possibilities for this extension in appendix A.
We emphasize that such an extension does not affect our numerical results.
Mass spectrum of exotic particles
From the Lagrangian (2.1), the mass matrix for $\psi_L$ and $\psi_R$ is

$M_\psi = \begin{pmatrix} m_{LL} & m_D \\ m_D & m_{RR} \end{pmatrix},$

where $M_\psi$ is a complex symmetric matrix and is diagonalized by a unitary matrix $U_\psi$ (Eq. (2.7)), parameterized by a mixing angle α (with $c_\alpha \equiv \cos\alpha$) and a real phase τ. In our analysis, we take $m_{LL}$ and $m_{RR}$ to be real and positive, while $m_D$ carries the physical phase, $m_D = |m_D| e^{-i\theta_{\rm phys}}$. We then obtain the mass-squared eigenvalues of $M_\psi^\dagger M_\psi$ as

$m^2_{\psi_{1,2}} = \frac{1}{2}\left(m_{LL}^2 + m_{RR}^2 + 2|m_D|^2 \mp \Delta m^2_\psi\right),$

where $\Delta m^2_\psi \equiv m^2_{\psi_2} - m^2_{\psi_1}$ is given by

$\Delta m^2_\psi = \sqrt{\left(m_{LL}^2 - m_{RR}^2\right)^2 + 4|m_D|^2\left(m_{LL}^2 + m_{RR}^2 + 2 m_{LL} m_{RR}\cos 2\theta_{\rm phys}\right)}. \quad (2.12)$

Due to the mass hierarchy $m^2_{\psi_2} > m^2_{\psi_1}$, we can focus on $0 \le \alpha \le \pi/2$. Note that physical predictions are unchanged under $\theta_{\rm phys} \to \theta_{\rm phys} + \pi$, and we focus on the range $-\pi/2 < \theta_{\rm phys} \le \pi/2$ in our analysis. $\psi_{L,R}$ can be described in terms of the mass eigenstates $\psi_{1,2}$ (Eq. (2.13)), and the mass terms in Eq. (2.6) then become diagonal (Eq. (2.14)), where $\psi^M_{1,2} \equiv \psi_{1,2} + \psi^c_{1,2}$ are Majorana fermions. In order to analyze the mass spectrum of the exotic scalar fields, we parameterize them in terms of their charged and neutral components and define $\lambda^\pm_{H\phi} \equiv \lambda_{H\phi} \pm \lambda'_{H\phi}$. Note that since all quartic couplings are positive, $m^2_{a_\phi}$ is always smaller than $m^2_{\sigma_\phi}$. Diagonalization of $M^2_\pm$ can be done by an orthogonal matrix; the mass-squared eigenvalues and the mixing angle θ are given in Eq. (2.21). Then, $\phi^\pm$ and $\eta^\pm$ can be described in terms of the mass eigenstates $\varphi^\pm_{1,2}$ (Eq. (2.22)). Since the mass parameter a can be set to be real and positive and $m^2 > 0$, we can focus on $0 \le \theta \le \pi/2$.
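To make the diagonalization above concrete, here is a minimal numerical sketch (our own illustration, with arbitrarily chosen parameter values) that obtains $m_{\psi_1}$ and $m_{\psi_2}$ from the eigenvalues of $M_\psi^\dagger M_\psi$:

```python
# Sketch of the psi mass diagonalization: M_psi is complex symmetric,
# so the physical masses follow from the eigenvalues of M^dag M.
# Parameter values below are illustrative only.
import numpy as np

m_LL, m_RR = 600.0, 1000.0                  # GeV, taken real and positive
theta_phys = np.pi / 4                      # assumed CP phase
m_D = 500.0 * np.exp(-1j * theta_phys)      # GeV, carries the physical phase

M = np.array([[m_LL, m_D],
              [m_D, m_RR]])                 # complex symmetric mass matrix
m1, m2 = np.sqrt(np.linalg.eigvalsh(M.conj().T @ M))  # ascending order
print(f"m_psi1 = {m1:.1f} GeV, m_psi2 = {m2:.1f} GeV")
```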
Radiative mass and coupling of the muon
The mass and Yukawa coupling of the muon are induced by one-loop corrections. When we move to the mass basis for the exotic particles according to Eqs. (2.13) and (2.22), the relevant terms are written as in Eq. (2.23), where the explicit forms of $y^{ia}_{L,R}$ and $A_{ij}$ are summarized in Table 3. These couplings lead to the radiative mass and effective Yukawa coupling of the muon at the one-loop level, through the diagrams in Fig. 1 (Eqs. (2.25) and (2.26)). Here, $p_{h^0}$ is the four-momentum of the SM Higgs boson, $m_{\psi_a} \equiv \sqrt{m^2_{\psi_a}}$, the $B_0$ and $C_0$-type functions denote the loop integrals for the self-energy and triangle-type diagrams, respectively, whose explicit forms are summarized in Appendix B, and $\mathcal{F}(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2})$ is the loop function defined in Eq. (2.27). Eq. (2.26) neglects sub-dominant contributions with a chirality flip on the muon line. We show the full form of $y^{\rm eff}_\mu$ at one-loop order in Appendix C. To numerically evaluate the loop functions with non-zero $p^2_{h^0}$, we use LoopTools [39]. Note that due to the radiatively generated muon mass, there is no standard relation between $m^{\rm rad}_\mu$ and $y^{\rm eff}_\mu$; in general $m^{\rm rad}_\mu \neq y^{\rm eff}_\mu v_H/\sqrt{2}$. Hence, we need to check that the model satisfies the constraint from the measurement of $h \to \mu^+\mu^-$. We discuss this constraint in Sec. 2.3.
Since $m^{\rm rad}_\mu$ in Eq. (2.25) generally has a phase due to the complex couplings $y^{ia}_{L,R}$, we need to remove it by a chiral rotation of the muon field (Eq. (2.28)), where $\theta_\mu$ is defined through $m^{\rm rad}_\mu = m_\mu e^{i\theta_\mu}$. Here, $m_\mu$ is understood as the observed muon mass and is a real value, and $\theta_\mu$ can be obtained as $\theta_\mu = \arg\left[\mathcal{F}(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2})\right]$. This rotation affects the dipole operators (Eqs. (2.29)-(2.30)), where q is the four-momentum of the photon. If there were no chiral rotation, $C_T(0)$ and $C'_T(0)$ would directly give the muon g−2, $a_\mu$, and the muon EDM, $d_\mu$, respectively. After performing the chiral rotation of Eq. (2.28), we obtain the correct forms of $a_\mu$ and $d_\mu$ in our model (Eqs. (2.31)-(2.32)). The leading contributions to $C_T(0)$ and $C'_T(0)$ can be estimated from diagram (b) in Fig. 1 in terms of the loop integrals $C_0$ and $C_1$ for the triangle-type diagram; the approximations on the right-hand sides are valid when $m^2_\mu$ is much smaller than the exotic particle masses squared. In this case, we can obtain analytical forms of $C_0$ and $C_1$ (Eq. (2.38)). Here, $x_{i,a}$ is defined below Eq. (2.27). Since the leading contributions to $C_T(0)$ and $C'_T(0)$ are the same except for the overall couplings, ${\rm Re}[y^{ia}_L y^{ia}_R]$ and ${\rm Im}[y^{ia}_L y^{ia}_R]$, we can expect a sufficiently large $d_\mu$ to be probed in near-future experiments when the muon g−2 is predicted to be $\mathcal{O}(10^{-9})$; that is, $d_\mu \sim \left({\rm Im}[y^{ia}_L y^{ia}_R]/{\rm Re}[y^{ia}_L y^{ia}_R]\right) \times e\, a_\mu/(2 m_\mu)$, with $a_\mu \simeq 2.51 \times 10^{-9}$. By using the couplings in Table 3 and Eqs. (2.25), (2.37) and (2.38), $C_T(0)$ and $C'_T(0)$ can be rewritten in compact forms (Eqs. (2.40)-(2.41)). As $\mathcal{F}$ is proportional to $m_{\psi_{1,2}}$, their scalings are consistent with the rough estimate given in Eq. (1.3). It is notable that when we change $\theta_{\rm phys} \to -\theta_{\rm phys}$, the signs of $\sin\tau$ and $\sin\theta_\mu$ are flipped, the former of which leads to $C'_T(0) \to -C'_T(0)$ through Eq. (2.41); hence this change results in $d_\mu \to -d_\mu$ with $a_\mu$ unchanged. This fact tells us that it is enough to focus on the range $0 < \theta_{\rm phys} < \pi/2$, because we are only interested in the prediction of $|d_\mu|$ here. Furthermore, $\theta_{\rm phys} = 0$ corresponds to a CP-conserving limit, which gives $|d_\mu| = 0$, while $\theta_{\rm phys} = \pi/2$ leads to $\tau \approx \pi/2$ unless $m_{LL} = m_{RR}$ (see Eq. (2.12)), predicting $\cos\theta_\mu \approx 0$ and hence $|d_\mu| \approx 0$. Hereafter, we denote $d_\mu$ as its absolute value $|d_\mu|$ in our analysis.
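The numerical scale implied by the g−2 anomaly follows from a one-line conversion; the sketch below (an illustration we add, not code from the paper) evaluates $e\, \Delta a_\mu/(2 m_\mu)$ in units of $e\,{\rm cm}$:

```python
# The EDM scale set by the g-2 anomaly, up to the O(1) ratio
# Im[yL yR] / Re[yL yR] of the couplings.
hbar_c_cm = 1.973e-14   # GeV^-1 expressed in cm
m_mu = 0.105658         # muon mass in GeV
delta_a_mu = 2.51e-9    # reported (g-2)_mu discrepancy

d_scale = delta_a_mu / (2.0 * m_mu) * hbar_c_cm
print(f"d_mu scale ~ {d_scale:.1e} e cm")   # -> ~2.3e-22 e cm
```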
Muon coupling constraints
In our model, the muon Yukawa coupling to the Higgs boson is generated at the one-loop level and does not follow the standard relation; in general $m^{\rm rad}_\mu \neq y^{\rm eff}_\mu v_H/\sqrt{2}$. The ATLAS [40] and CMS [41] experiments have searched for the Higgs boson decay $h \to \mu^+\mu^-$, which leads to constraints on the h-µ-µ coupling strength $\kappa_\mu$. Here we use ${\rm BR}(h \to \mu^+\mu^-)_{\rm SM} \simeq 2.16 \times 10^{-4}$ for $m_h = 125.25$ GeV [42], and $\kappa_\mu$ is defined by comparing the decay width of $h \to \mu^+\mu^-$ to that of the SM, $\kappa_\mu^2 \equiv \Gamma(h \to \mu^+\mu^-)/\Gamma(h \to \mu^+\mu^-)_{\rm SM}$. In our model, the width of $h \to \mu^+\mu^-$ is computed with the effective coupling $y^{\rm eff}_\mu$; using $4 m^2_\mu \ll m^2_h$, we find $\kappa_\mu \simeq |y^{\rm eff}_\mu|\, v_H/(\sqrt{2}\, m_\mu)$. Since the exotic particles exclusively couple to the muon, the ratio between the $Z \to e^+e^-$ and $Z \to \mu^+\mu^-$ decay widths may constrain our parameter space; the current experimental status for this ratio is given in ref. [43]. The muon couplings to the Z boson can be parameterized with new physics shifts $\delta g_{L,R}$ on top of the SM values, where g denotes the $SU(2)_L$ gauge coupling, $\theta_W$ is the weak mixing angle, and $g^\mu_L = -\frac{1}{2} + \sin^2\theta_W$ and $g^\mu_R = \sin^2\theta_W$ are the muon couplings to the Z boson in the SM. In our model, the new physics contributions $\delta g_{L,R}$ are induced by diagram (b) in Fig. 1 with the photon replaced by the Z boson, and their expressions can be found in ref. [44]. The ratio in Eq. (2.47) is then expressed in terms of a shift $\delta_{\mu\mu}$, where $g^e_{L,R} = g^\mu_{L,R}$ are the electron couplings to the Z boson in the SM, and we assume that the new physics contributions are smaller than those of the SM, $\delta g^\mu_{L,R} \ll g^\mu_{L,R}$. Then, Eq. (2.47) indicates that $|\delta_{\mu\mu}|$ must be less than $\mathcal{O}(10^{-3})$.
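Returning to the Higgs coupling constraint above, a minimal sketch of the $\kappa_\mu$ evaluation may be useful; the value of $y^{\rm eff}_\mu$ below is an assumed illustrative input, not a model prediction:

```python
# With a loop-generated Yukawa, the SM relation m_mu = y v / sqrt(2)
# no longer holds, so kappa_mu is computed from y_eff directly,
# using kappa_mu = |y_eff| v_H / (sqrt(2) m_mu) for 4 m_mu^2 << m_h^2.
import math

v_H = 246.22        # GeV, SM Higgs VEV
m_mu = 0.105658     # GeV, muon mass

def kappa_mu(y_eff):
    return abs(y_eff) * v_H / (math.sqrt(2.0) * m_mu)

# In the SM limit y_eff = sqrt(2) m_mu / v_H, this returns 1:
y_SM = math.sqrt(2.0) * m_mu / v_H
print(kappa_mu(y_SM))        # -> 1.0
print(kappa_mu(1.2 * y_SM))  # -> 1.2, i.e. the h->mumu width scales as kappa^2
```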
Dark matter
The DM candidate in our model is the lightest Majorana fermion $\psi^M_1$ or the lightest neutral scalar $a_\phi$, depending on their masses. In the present paper, we focus on the case in which $\psi^M_1$ is the lightest exotic particle and hence provides the DM candidate. Hereafter, we denote $\psi^M_1$ as $\psi_1$ for simplicity. For the case in which $a_\phi$ is the DM candidate, there is no direct correlation with the muon EDM, because the mass $m_{a_\phi}$ does not contribute to the muon EDM at the one-loop level.
The main annihilation mode of the DM fermion is $\psi_1\psi_1 \to \mu\bar\mu$ through the t-channel exchange of $\varphi^\pm_i$, as shown in the left of Fig. 2. In the expansion of the thermally averaged cross section in the DM velocity v, $\langle\sigma v\rangle_{\mu\bar\mu} = a_{\mu\bar\mu} + b_{\mu\bar\mu} v^2 + \mathcal{O}(v^4)$, the s-wave and p-wave contributions are given in Eqs. (3.1)-(3.2), where the second term in Eq. (3.1) is suppressed by $m_\mu/m_{\psi_1}$. Thus, the s-wave contribution dominates the total DM annihilation cross section in our focused parameter space. Note that for the annihilation mode $\psi_1\psi_1 \to \nu_\mu\bar\nu_\mu$, the s-wave contribution is suppressed by the tiny neutrino mass, because there is no right-handed coupling $y^{i1}_R$ for the neutrino. The other annihilation cross sections, such as $\psi_1\psi_1 \to \gamma\gamma$ and $\psi_1\psi_1 \to \mu\bar\mu\gamma$, are several orders of magnitude smaller than that of $\psi_1\psi_1 \to \mu\bar\mu$.
In the thermal freeze-out scenario, the number density of DM is calculated from the Boltzmann equation,

$\frac{dn_{\psi_1}}{dt} + 3 H(t)\, n_{\psi_1} = -\langle\sigma v\rangle_{\rm eff}\left[n_{\psi_1}^2 - \left(n^{\rm eq}_{\psi_1}\right)^2\right]. \quad (3.3)$

Here, H(t) denotes the Hubble rate, and $n_{\psi_1}$ is the number density of $\psi_1$, while $n^{\rm eq}_{\psi_1}$ is that in equilibrium. The effective annihilation cross section $\langle\sigma v\rangle_{\rm eff}$ is estimated by summing all possible annihilation modes, i.e., $\psi_1\psi_1 \to \ell\bar\ell,\ VV',\ \ell\bar\ell V$ ($\ell = \mu, \nu$; $V, V' = \gamma, Z, W$). However, when the DM and charged scalar masses are almost degenerate, coannihilation processes should be taken into account in solving the Boltzmann equation. In this case, we have [45]

$(\sigma v)_{\rm eff} = \frac{1}{\left(g_{\psi_1} + \bar g_{\varphi^+_1}\right)^2}\left[g_{\psi_1}^2\, (\sigma v)_{\psi_1\psi_1} + 2\, g_{\psi_1} \bar g_{\varphi^+_1}\, (\sigma v)_{\psi_1\varphi^+_1} + \bar g_{\varphi^+_1}^2\, (\sigma v)_{\varphi^+_1\varphi^-_1}\right], \quad (3.4)$

with the Boltzmann-suppressed effective degrees of freedom

$\bar g_{\varphi^+_1} = g_{\varphi^+_1}\left(\frac{m_{\varphi^+_1}}{m_{\psi_1}}\right)^{3/2} e^{-(m_{\varphi^+_1} - m_{\psi_1})/T}, \quad (3.5)$

where $g_{\psi_1} = 2$ and $g_{\varphi^+_1} = 2$ are the internal degrees of freedom of $\psi_1$ and $\varphi^+_1$, respectively, T is the temperature, and $(\sigma v)_{XY}$ denotes the (co)annihilation cross section whose initial state is XY. The corresponding diagrams are shown in Fig. 2. The second term in Eq. (3.4) is suppressed by the exponential factor in Eq. (3.5), and the third term is more suppressed due to the squared exponential factor when $m_{\varphi^+_1} \gg m_{\psi_1}$. As $m_{\varphi^+_1}$ decreases toward $m_{\psi_1}$, the second term gives a non-negligible contribution to $(\sigma v)_{\rm eff}$ [46,47].
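A minimal sketch of the coannihilation weighting in Eqs. (3.4)-(3.5) is given below; the cross-section inputs are placeholders rather than model values, and the form of $\bar g_{\varphi^+_1}$ is the standard Boltzmann-suppressed expression assumed in the reconstruction above:

```python
# Griest-Seckel-style coannihilation weighting: the charged scalar
# contributes effective degrees of freedom suppressed by
# exp(-(m_phi - m_psi)/T), so only near-degenerate spectra matter.
import math

g_psi, g_phi = 2, 2   # internal degrees of freedom

def sigmav_eff(sv_pp, sv_pphi, sv_phiphi, m_psi, m_phi, T):
    gbar = g_phi * (m_phi / m_psi) ** 1.5 * math.exp(-(m_phi - m_psi) / T)
    norm = (g_psi + gbar) ** 2
    return (g_psi**2 * sv_pp + 2 * g_psi * gbar * sv_pphi
            + gbar**2 * sv_phiphi) / norm

# Near freeze-out (T ~ m_psi / 25), a 5% mass splitting keeps the
# coannihilation terms relevant; a 50% splitting switches them off.
for m_phi in (630.0, 900.0):
    print(m_phi, sigmav_eff(1.0, 1.0, 1.0, 600.0, m_phi, 600.0 / 25))
```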
The resultant DM relic density is given by $\Omega h^2 = m_{\psi_1} n_{\psi_1}(t_0)\, h^2/\rho_c$, where $\rho_c$ is the critical density of the Universe and $n_{\psi_1}(t_0)$ is today's number density of $\psi_1$, obtained by solving the Boltzmann equation (3.3). To calculate the DM relic density $\Omega h^2$ including the appropriate coannihilation processes, we use micrOMEGAs 5.2.13 [48,49]. Although our DM particle $\psi_1$ does not couple to the SM quarks and gluons, DM-nucleon scattering is induced by contact and non-contact type interactions. In our model, the relevant interactions for the scattering are

$\mathcal{L}_{\rm eff} \supset a_{\psi_1}\, \bar\psi_1 \gamma^\mu \gamma_5 \psi_1\, \partial^\nu F_{\mu\nu} + C_{S,p}\, \bar\psi_1 \psi_1\, \bar p p + C_{S,n}\, \bar\psi_1 \psi_1\, \bar n n. \quad (3.7)$

Here, p and n represent the proton and the neutron. The effective coefficients are estimated in Eqs. (3.8)-(3.10), where $x_{i,1}$ is defined below Eq. (2.27), $\mu \equiv m^2_\mu/m^2_{\psi_1}$, $\hat a_{\psi_1}(x, y)$ is the loop function for the anapole operator, which carries an overall factor of 1/12 and depends on $\Delta(x, y) = x^2 + (y-1)^2 - 2x(y+1)$, $C_{S,q}$ denotes the effective coupling of an operator $\bar\psi_1\psi_1\, \bar q q$ with the SM quark q, and $f^{(N)}_{Tq}$ is related to the quark mass contribution to the nucleon mass, whose value can be found in refs. [50-59]. $y^{\rm eff}_{\psi_1}$ is the effective Yukawa coupling of $\psi_1$, and it can be obtained by the replacements $m_{\psi_1} \leftrightarrow m_\mu$ and $y_L \leftrightarrow y_R$ in the expression for $y^{\rm eff}_\mu$ given in Eq. (C.1). This effective Yukawa coupling increases when $m_{\psi_1}$ becomes large, because it is proportional to $m_{\psi_1}$, like $y^{\rm eff}_\mu$ (see Eq. (2.26)). The function $\hat a_{\psi_1}(x, y)$ is enhanced when $x \to 1$ with $y = 0$. Therefore, the limit $m_{\psi_1} \simeq m_{\varphi^+_1}$ leads to a large contribution to the cross section from $a_{\psi_1}$. Note that for the Majorana DM model, there are other contributions through the Z-penguin, which lead to effective interactions such as $(\bar\psi_1\gamma^\mu\gamma_5\psi_1)(\bar q\gamma_\mu q)$ and $(\bar\psi_1\gamma^\mu\gamma_5\psi_1)(\bar q\gamma_\mu\gamma_5 q)$. However, these contributions are suppressed by the lepton mass (and the DM velocity for the former interaction), and we neglect their effects in our analysis. Using the effective couplings in Eq. (3.7), the differential cross section with respect to the recoil energy $E_R$ is given in Eq. (3.11), where v is the DM velocity, α is the fine structure constant, $f_A = Z C_{S,p} + (A-Z) C_{S,n}$ with atomic number Z and mass number A, and $m_N$, $\mu_A$ and $J_A$ are the mass, magnetic moment and spin of the nucleus, respectively. $F_{\rm Helm}(E_R)$ and $F_{\rm spin}(E_R)$ denote form factors found in refs. [60,61]. It can be seen from Eq. (3.11) that the anapole contribution is suppressed by the DM velocity v or the recoil energy $E_R$. On the other hand, there is no suppression for contributions from the contact-type interactions. It is notable that in our model, $C_{S,N}$ in $f_A$ is enhanced by $y^{\rm eff}_{\psi_1}$ due to the absence of the tree-level muon Yukawa coupling. Recently, the LUX-ZEPLIN (LZ) experiment has reported its first results for spin-independent (SI) and spin-dependent (SD) DM-nucleon scattering cross sections [62]. The upper limit on the SI cross section has been improved compared with previous results from the XENON1T [63,64] and PandaX-4T [65,66] experiments. The corresponding cross sections in our model follow the standard expressions in terms of the effective couplings and the reduced mass [50,67,68]. Here, $\mu_N$ is the reduced mass of $m_{\psi_1}$ and the nucleon mass, 0.939 GeV.
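For orientation, the sketch below converts a contact scalar coupling into an SI cross section using the commonly quoted relation $\sigma_{\rm SI} = 4\mu_N^2 C_S^2/\pi$; normalization conventions for Majorana DM differ between references, and the value of $C_S$ is assumed, so this is an order-of-magnitude illustration only:

```python
# Order-of-magnitude conversion from a contact scalar coupling C_S
# (in GeV^-2) to a spin-independent cross section in cm^2.
import math

hbar_c_cm = 1.973e-14       # GeV^-1 expressed in cm
m_N = 0.939                 # GeV, nucleon mass

def sigma_si_cm2(m_dm, C_S):
    mu_N = m_dm * m_N / (m_dm + m_N)               # reduced mass (GeV)
    sigma_gev = 4.0 * mu_N**2 * C_S**2 / math.pi   # cross section (GeV^-2)
    return sigma_gev * hbar_c_cm**2                # converted to cm^2

print(sigma_si_cm2(700.0, 1.5e-10))   # ~1e-47 cm^2, the ballpark quoted above
```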
Numerical analysis
In this section, we first summarize the independent parameters in our model. Then, the parameter space that gives the correct DM relic density and explains the muon g − 2 anomaly is identified and the size of the muon EDM is indicated in that region as well as more general parameter regions. We take account of muon coupling constraints presented in section 2.3 and also discuss constraints from DM direct and indirect detection experiments as well as collider searches.
Independent parameters
The Lagrangian of our model contains 18 parameters:

$y_\phi,\ y_\eta,\ |m_D|,\ |m_{LL}|,\ |m_{RR}|,\ \theta_{\rm phys},\ m^2_H,\ m^2_\phi,\ m^2_\eta,\ a,\ \lambda_H,\ \lambda_\phi,\ \lambda_\eta,\ \lambda_{H\phi},\ \lambda_{H\eta},\ \lambda_{\phi\eta},\ \lambda'_{H\phi},\ \lambda''_{H\phi}. \quad (4.1)$

Note that some of them are irrelevant to our analysis of the muon g−2, the muon EDM, the radiative mass, and the effective Yukawa coupling of the muon. The Higgs mass-squared parameter $m^2_H$ is fixed by the minimization condition in Eq. (2.5), and $\lambda_H$ should be chosen so that the SM Higgs mass, $m_h = 125.25$ GeV, is correctly reproduced. The quartic couplings $\lambda_\phi$, $\lambda_\eta$ and $\lambda_{\phi\eta}$ are irrelevant to the mass spectrum of the exotic particles, although their values should be consistent with perturbative unitarity bounds (commented on below) and also chosen to avoid an unstable minimum of the scalar potential. Moreover, $y_\phi y_\eta$ can be fixed by using Eq. (2.25), but we need to check that the values of the couplings do not exceed $\sqrt{4\pi}$. As a result, the relevant (and independent) input parameters for the analysis can be read off as in Eq. (4.2). Note that $\lambda'_{H\phi}$ and $\lambda''_{H\phi}$ are relevant only to the masses of the heavy neutral scalars, $m^2_{\sigma_\phi}$ and $m^2_{a_\phi}$ (see Eq. (2.17)), and irrelevant to our following analysis as long as the DM candidate of the model is $\psi_1$. Furthermore, we discuss our results using $M^2_\phi$ and $M^2_\eta$ instead of $m^2_\phi$, $m^2_\eta$, $\lambda_{H\phi}$ and $\lambda_{H\eta}$ (see below Eq. (2.17)). We here comment on perturbative unitarity bounds [69,70], which are related to 2 → 2 scattering processes of scalar particles. At the tree level, it is clear that quartic couplings enter these amplitudes. In addition, trilinear couplings also contribute to them through s-, t- and u-channel processes if the scalar particles are not too heavy. There are studies on these bounds, e.g., for models extended by singlet scalars [71-73] and doublet scalars [74-80]. Since our model is a hybrid extension with one singlet and one doublet scalar, there are many scattering processes, such as $hh \to hh$ and processes involving the exotic scalars. To obtain perturbative unitarity bounds in our model, we use the SARAH/SPheno framework [81-87]. The details of the calculation for general scalar couplings can be found in ref. [88]. In the numerical analysis, we fix the remaining parameters to the benchmark values of Eq. (4.3) and vary $m_{LL}$ and $M^2_\eta$ over 320 GeV ≤ $m_{LL}$ ≤ 1200 GeV and (540 GeV)² ≤ $M^2_\eta$ ≤ (1000 GeV)², respectively. For this parameter choice, the lightest particle among the X-odd particles is either $\psi_1$ or $\varphi^+_1$. The other parameters, which are not shown in Eq. (4.2), i.e. the scalar quartic couplings, do not affect the analysis here and are taken to be moderate values satisfying perturbative unitarity bounds. Note that when the scalar trilinear coupling a becomes large, some of the quartic couplings should be O(1) to avoid an instability of the vacuum that gives the correct electroweak symmetry breaking. We numerically check that the SM vacuum is stable when all of the quartic couplings are within the range 0.2-0.5 for the parameter choice shown above. These values of the quartic couplings also satisfy perturbative unitarity bounds, as checked with the SARAH/SPheno framework. In addition, since we have an additional $SU(2)_L$ doublet scalar, there is a new contribution to the T-parameter [89,90]. We have calculated the contribution following refs. [91,92] and found that our parameter choice leads to ∆T ∼ 0.002, which satisfies the current constraint [93].
Results
The current discrepancy of $(g-2)_\mu$ is [1-4] $\Delta a_\mu = (2.51 \pm 0.59) \times 10^{-9}$ (4.4), whose 1σ and 2σ bands are shown as green and yellow shaded regions in Fig. 3. Note that a lighter $m_{\psi_1}$ predicts a larger $\Delta a_\mu$ due to the dependence $\Delta a_\mu \sim 1/m^2_{\psi_1}$. In the figure, black lines correspond to contours of $d_\mu$ in units of $10^{-23}\,e\,{\rm cm}$. The future prospect for the muon EDM, which is reported as $\mathcal{O}(10^{-21})\,e\,{\rm cm}$ at the Fermilab Muon g−2 experiment [8] and the J-PARC Muon g−2/EDM experiment [9], is shown as the orange shaded region. The red band shows the parameter space where the correct DM relic density, $\Omega h^2 = 0.120 \pm 0.001$ [94], is obtained. Outside of this band, the relic density changes rapidly, as one can see from the blue and turquoise contours, which correspond to $\Omega h^2 = 0.5$ and 1.0, respectively. Note that in the whole parameter space of Fig. 3, the new physics contribution to the ratio between the decay widths of $Z \to e^+e^-$ and $Z \to \mu^+\mu^-$ in Eq. (2.49) is sufficiently small, and we obtain $|\delta_{\mu\mu}| \lesssim 4 \times 10^{-4}$, which is consistent with the current data (Eq. (2.47)).
For the case of $m_{\psi_1} > m_{\varphi^+_1}$ (the region below the dashed gray line), the $\psi_1 \to \varphi^\pm_1 + \mu^\mp$ decay occurs at the tree level, and therefore $\psi_1$ cannot be a DM candidate (footnote 6). Without any interaction breaking the exotic number symmetry X, $\varphi^+_1$ is a stable exotic particle, which may be cosmologically dangerous. However, we can consider, for example, an interaction with the right-handed electron, $\bar L^\mu_L \phi^\dagger e_R$, to make $\varphi^+_1$ decay into $\nu_\mu + e^+$ (footnote 7).

Footnote 6: Since $m_{\psi_1} < m_{\sigma_\phi, a_\phi}$ for the current input parameters (see Eqs. (2.17) and (4.3)), the DM cannot decay into $\nu_\mu + \sigma_\phi$ or $\nu_\mu + a_\phi$ in the plotted region of Fig. 3.

Footnote 7: This lepton flavor violating (LFV) interaction does not induce LFV processes such as µ → eγ because of the muon number symmetry $L_\mu$. However, an interaction with a sizable coupling may be constrained by muonium-antimuonium oscillation [95-99], although we do not need a large coupling for our purpose.
In the whole parameter region shown in the figure, the muon EDM is predicted to be larger than the future sensitivity of the PSI muEDM experiment, 6 × 10 −23 e cm [10][11][12]. The 2σ discrepancy of (g − 2) µ can be explained for 560 GeV < m ψ 1 < 780 GeV, while only the region of m ψ 1 m ϕ + 1 is favored for the correct DM relic density. With the current parameter choice, the coannihilation process plays an important role in obtaining the correct relic density.
For m ψ 1 850-860 GeV, the DM sector contribution to the muon EDM accidentally disappears. This behavior can be understood as follows. In this region, m LL 1000 GeV which means m LL m RR for our current setup. Eq. (2.12) tells us that θ phys = 0 or |m LL | = |m RR | can lead to tan τ = 0, which makes y ia L and y ia R real. Therefore, there is no contribution to the muon EDM for |m LL | = |m RR |, even when the physical phase has a non-zero value, θ phys = 0.
We now comment on constraints from DM searches at colliders and DM direct and indirect detection experiments.
Collider searches
At the Large Hadron Collider (LHC), we expect pair production of exotic charged scalars decaying into muons and DM fermions, $pp \to \varphi^+_1\varphi^-_1 \to \mu^+\mu^-\psi_1\psi_1$, whose signal is two muons plus a large missing energy. The signal is similar to that of pair production of sleptons decaying into leptons and a missing energy. The ATLAS [101,102] and CMS [103] experiments then put a lower bound on the DM mass. However, it is less than 500 GeV [101], which is outside the plot range of Fig. 3. Ref. [104] has performed a numerical analysis to obtain a bound on the DM mass for a similar model, and it was found to be 200-300 GeV, depending on the size of the mixing angle $s_\theta$ in Eq. (2.18) and the mass of $\varphi^+_1$ (see also ref. [102]).
Indirect detection
As mentioned in Sec. 3, the annihilation cross section of our DM $\psi_1$ is dominated by $\psi_1\psi_1 \to \mu\bar\mu$. For the parameter region in Fig. 3, we obtain the prediction $\langle\sigma v\rangle_{\mu\bar\mu} \sim \mathcal{O}(10^{-27})$-$\mathcal{O}(10^{-28})\,{\rm cm}^3/{\rm s}$. Ref. [105] has studied a constraint on the annihilation cross section of a Majorana DM whose annihilation modes are $\psi_1\psi_1 \to \ell\bar\ell\gamma$ and $\psi_1\psi_1 \to \gamma\gamma$. The combination of the thermally averaged cross sections, $\langle\sigma v\rangle_{\mu\bar\mu\gamma} + 2\langle\sigma v\rangle_{\gamma\gamma}$, is constrained to be less than $10^{-26}$-$10^{-27}\,{\rm cm}^3/{\rm s}$, depending on the DM mass. The cross sections of these annihilation processes, however, are several orders of magnitude smaller than $\langle\sigma v\rangle_{\mu\bar\mu}$ in our model, and therefore ref. [105] does not put a constraint on the parameter region shown in Fig. 3. It is notable that, due to gauge invariance, we should consider the $\psi_1\psi_1 \to \ell\bar\ell\gamma$ process together with the $\psi_1\psi_1 \to \ell\bar\ell Z$ process. Such processes may be explored by the PAMELA anti-proton search [106], and there are studies for Majorana DM models [107,108], although they indicate that it is difficult to observe a Majorana DM at current and future telescopes.
Direct detection
Our Majorana DM scattering with the nucleon is induced by the interactions presented in Eq. (3.7). For the current parameter set, we obtain an SI DM-nucleon scattering cross section of $\mathcal{O}(10^{-47}$-$10^{-50})\,{\rm cm}^2$. This is smaller than the current limit from the LZ experiment [62], which is $(1.5$-$2.4) \times 10^{-46}\,{\rm cm}^2$ for the DM mass range in Fig. 3. The LZ experiment [62] also puts constraints on the SD DM-proton and DM-neutron scattering cross sections, but both are weaker than the SI constraint, and therefore no region of the parameter space is excluded by direct detection experiments. With the future sensitivity of the LZ experiment, the upper limit on the SI cross section will improve by one order of magnitude [109,110], which is still not sufficient to explore our parameter space. With the future sensitivity of PandaX-4T with 5.6 tonne·year exposure [111], we may be able to explore the parameter space in Fig. 3. Their current limit on the SI DM-nucleon scattering cross section [65] can be read as $(2.6$-$4.2) \times 10^{-46}\,{\rm cm}^2$ for the DM mass range 550 GeV ≤ $m_{\psi_1}$ ≤ 860 GeV, and hence, if the future limit improves by a few orders of magnitude, the heavy DM mass region with $m_{\psi_1} \simeq m_{\varphi^+_1}$ will be explored at the PandaX-4T experiment.
Finally, let us discuss how our new physics contributions to $\Delta a_\mu$ and $d_\mu$ depend on the input parameter choices. First of all, a different choice of $\theta_{\rm phys}$ can change our predictions for $\Delta a_\mu$ and $d_\mu$ shown in Fig. 3. It is expected that $d_\mu$ is maximized by choosing $\theta_{\rm phys} \sim \pi/4$, because $\theta_{\rm phys} \to 0$ or $\theta_{\rm phys} \to \pi/2$ leads to $d_\mu \approx 0$. On the other hand, the contribution to $\Delta a_\mu$ does not have such a clear dependence on $\theta_{\rm phys}$. Actually, both observables strongly depend on the input parameter set $(m_D, m_{LL}, m_{RR}, \theta_{\rm phys})$. For example, if $(m_D, m_{LL}, m_{RR}) = (700\,{\rm GeV}, 200\,{\rm GeV}, 1000\,{\rm GeV})$ is chosen, $\Delta a_\mu < 0$ for $0 < \theta_{\rm phys} \lesssim 0.42$, the 2σ deviation can be explained for $\pi/6 < \theta_{\rm phys} < \pi/4$, and $d_\mu$ is maximized around $\theta_{\rm phys} \simeq \pi/6$. Instead, if we choose $(m_D, m_{LL}, m_{RR}) = (500\,{\rm GeV}, 990\,{\rm GeV}, 1000\,{\rm GeV})$, $\Delta a_\mu < 0$ is predicted in almost the entire range of $\theta_{\rm phys}$, the 2σ deviation can be explained only around $\theta_{\rm phys} \simeq 1.45$, and the peak of $d_\mu$ appears at $\theta_{\rm phys} \simeq 1.35$. In any case, our prediction for the muon EDM is $d_\mu > \mathcal{O}(10^{-22})\,e\,{\rm cm}$. These observables also depend on the values of $M^2_\phi$, $M^2_\eta$. As one can see in Fig. 3, the predictions for $\Delta a_\mu$ and $d_\mu$ become small as $m_{\varphi^+_1}$ increases. In contrast, a large a enhances the contributions by a few %. This is because a is related to the mass splitting between the charged scalar mass eigenstates.
Conclusion
In the present paper, we have investigated the prediction for the muon EDM obtained in a model of DM. As shown in Table 1, the radiative stability approach has a clear advantage in enhancing the muon EDM, and we focused on a model in which the muon mass is generated radiatively. With appropriate discrete symmetries, the exotic particles ψ, φ and η have couplings to the muon (and also to the SM Higgs doublet). In this model, one of the complex phases in the couplings cannot be removed by any field redefinition and provides a physical CP phase, which leads to a new contribution to the muon EDM. $\psi_{L,R}$ are singlets under the SM gauge groups, and the lightest mode provides the Majorana fermion DM candidate.
We found that even when the DM mass is heavier than the current collider bound, $m_{\psi_1} > 500$ GeV, the model predicts a muon EDM larger than $10^{-22}\,e\,{\rm cm}$, which can be tested at the PSI muEDM experiment. In the parameter space where the discrepancy of the muon g−2 and the correct DM relic density are explained at the same time, the model predicts $d_\mu \simeq (4$-$5) \times 10^{-22}\,e\,{\rm cm}$. For the case of $m_{\psi_1} > m_{\varphi^+_1}$, the muon EDM can be even larger, $d_\mu \simeq (7$-$8) \times 10^{-22}\,e\,{\rm cm}$, due to a small value of $m_{\varphi^+_1}$, although $\psi_1$ then does not provide a DM candidate. Furthermore, if we do not insist on a new physics explanation of the muon g−2 discrepancy or on the DM relic density, the muon EDM can be larger than the future sensitivities of the ongoing Fermilab Muon g−2 and projected J-PARC Muon g−2/EDM experiments.
One of the most promising approaches to probe our DM model is a future muon collider (see e.g. ref. [112] and references therein) because a muon collider is expected to have a new particle mass reach higher than that of the LHC and also our DM fermion directly couples to the muon. It would be interesting to explore the phenomenology of our DM model to generate the radiative muon mass, the muon g − 2 and the muon EDM at a muon collider, which is left for future study.
A Neutrino sector

A.1 A scalar triplet extension
In our model, due to the muon number symmetry, we need a further extension to obtain the correct neutrino mixing angles. One of the simplest ways to reproduce them is to introduce a scalar triplet, as discussed in appendix A of ref. [113]. At first, the dimension-five operators that can be written down for the lepton doublets are restricted by the muon number symmetry; therefore, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix obtained from our model cannot be consistent with the experimental result at this stage. However, once we introduce an $SU(2)_L$ triplet scalar ∆, which has a $U(1)_Y$ charge of −1 and odd muon number, we can write additional terms. When ∆ acquires a nonzero VEV, $v_\Delta \neq 0$, all elements of the neutrino mass matrix can be reproduced, with each element $(m_\nu)_{ab}$ estimated from the corresponding couplings and $v_\Delta$.
A.2 Right-handed neutrinos
Another possibility to reproduce the correct PMNS matrix is to introduce three generations of right-handed neutrinos (RHNs), denoted as $N^{e,\mu,\tau}_R$. Similar to the charged lepton sector, only $N^\mu_R$ is odd under the muon number $L_\mu$, and we have additional Dirac Yukawa couplings and Majorana mass terms for the neutrinos ($\ell = e, \tau$), where the last term breaks the $L_\mu$ symmetry softly; this soft breaking is required for the correct neutrino mixing angles. This can be understood diagrammatically, as shown in Fig. 4. In addition to this diagram, the mixing between $\nu^e_L$ and $\nu^\tau_L$ is also induced by the same type of diagram.
A.3 Comment on LFV
For the second example, however, we have LFV processes due to the soft $L_\mu$ breaking terms. To see this, we focus on the first two generations of leptons, namely the electron-muon system. From Eq. (A.6) and the SM Yukawa interactions for the electron, we have one-loop contributions to the off-diagonal elements of the charged-lepton Yukawa couplings, as shown in Fig. 5. Due to the soft $L_\mu$ breaking, the mass eigenstates $\nu^\alpha_R$ can be written in terms of one mixing angle $\theta_N$. After integrating out the RHNs, we obtain an off-diagonal element $\delta y_{\mu e}$, which is obtained from Fig. 5 and roughly estimated as $\delta y_{\mu e} \sim \frac{y_e}{16\pi^2}\, y^{ee}_\nu y^{\mu\mu}_\nu \sin\theta_N \cos\theta_N$ times a logarithm of the RHN mass scale, with $y_e$ being the SM Yukawa coupling of the term $\bar L^e_L H e_R$. This off-diagonal element can be removed by a field redefinition of the left-handed lepton doublets, $e_L = L^e_L \cos\theta_{e\mu} + L^\mu_L \sin\theta_{e\mu}$, $\mu_L = -L^e_L \sin\theta_{e\mu} + L^\mu_L \cos\theta_{e\mu}$ (A.11), which leads to an electron coupling with the exotic particles, where we define $y^e_\phi \equiv y_\phi \sin\theta_{e\mu}$ and $y^\mu_\phi \equiv y_\phi \cos\theta_{e\mu}$. Then, we have a one-loop contribution to µ → eγ obtained by replacing $\mu_L \to e_L$ in diagram (b) of Fig. 1. Its branching ratio can be calculated as in ref. [114], where $G_F$ is the Fermi constant, ${\rm BR}(\mu \to e\nu_\mu\bar\nu_e) \approx 1$, and $C^{e\mu}_T$ and $C'^{e\mu}_T$ are the coefficients of the corresponding dipole operators. From Eqs. (2.28), (2.33) and (2.34), the leading contributions to $C^{e\mu}_T$ and $C'^{e\mu}_T$ can be easily estimated by replacing $y^{ia}_L \to y^{ia}_L \sin\theta_{e\mu}$; our predictions for $a_\mu$ and $d_\mu$ scale in the same way. From these facts, we obtain a relation between $C^{e\mu}_T$, $C'^{e\mu}_T$ and $a_\mu$, $d_\mu$, and hence the branching ratio in Eq. (A.13) can be expressed in terms of $a_\mu$ and $d_\mu$. By assuming $a_\mu = 2.51 \times 10^{-9}$ and $d_\mu = 4.5 \times 10^{-22}\,e\,{\rm cm}$, as we found in our model, the upper limit on $\tan\theta_{e\mu}$ can be obtained as $\tan\theta_{e\mu} \lesssim 6.7 \times 10^{-6}$ (A.19), where we have used the current upper bound on the branching ratio, ${\rm BR}(\mu \to e\gamma) < 4.2 \times 10^{-13}$ [115]. Then the mixing angle in Eq. (A.11) must be tiny, which means a small off-diagonal element compared to $y_e$, $\delta y_{\mu e} \ll y_e$. To realize the constraint in Eq. (A.19), we roughly need $\delta y_{\mu e}/y_e \sim 6 \times 10^{-6}$. Assuming the logarithmic factor to be O(1) and $\sin\theta_N \sim \cos\theta_N \sim 1/\sqrt{2}$ (a similar order for all $m_N$, so as to obtain large neutrino mixing angles), we obtain $y^{ee}_\nu y^{\mu\mu}_\nu \sim 2 \times 10^{-3}$, and the scale of the RHNs will be $\mathcal{O}(10^{10})$ GeV for light neutrino masses of O(eV).
B Loop integrals
In this appendix, we summarize the loop integrals relevant to our analysis. Note that we use dimensional regularization, with $\Delta \equiv \frac{2}{\epsilon} - \gamma_E + \ln 4\pi$, where $\gamma_E$ is the Euler constant and $\epsilon \equiv 4 - D$; Δ diverges when $D \to 4$.
-Self-energy integral-

$B_0(p^2, m_0^2, m_1^2) \equiv \frac{(2\pi\mu)^{4-D}}{i\pi^2}\int d^D k\, \frac{1}{(k^2 - m_0^2)\left[(k+p)^2 - m_1^2\right]}.$

When $p^2 = 0$ and $m_0^2 = m_1^2 \equiv m^2$, we can obtain the following simple form:

$B_0(0, m^2, m^2) = \Delta - \ln\frac{m^2}{\mu^2}.$

Note that $B_0$ has a divergent part, which should cancel in physical predictions.
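As a numerical sanity check, the sketch below implements the standard finite part of $B_0$ at $p^2 = 0$ (the non-degenerate expression is the textbook Passarino-Veltman result, assumed here rather than quoted from the paper) and verifies that it smoothly approaches the degenerate form above:

```python
# Finite (Delta-subtracted) part of B0 at p^2 = 0, assuming
#   B0(0, m0^2, m1^2) = Delta + 1
#       - (m0^2 ln(m0^2/mu^2) - m1^2 ln(m1^2/mu^2)) / (m0^2 - m1^2).
import math

def b0_finite(m0sq, m1sq, musq=1.0):
    if abs(m0sq - m1sq) < 1e-12 * m0sq:      # degenerate masses
        return -math.log(m0sq / musq)        # B0 -> Delta - ln(m^2/mu^2)
    return 1.0 - (m0sq * math.log(m0sq / musq)
                  - m1sq * math.log(m1sq / musq)) / (m0sq - m1sq)

# The non-degenerate expression smoothly approaches the degenerate one:
print(b0_finite(1.0, 1.0 + 1e-6))   # ~ 0.0
print(b0_finite(1.0, 1.0))          # exactly -ln(1) = 0.0
```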
|
2022-12-07T06:42:54.016Z
|
2022-12-06T00:00:00.000
|
{
"year": 2022,
"sha1": "c946c285b704b66dd21a4b48fb8ec72e0df916ce",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c946c285b704b66dd21a4b48fb8ec72e0df916ce",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
79568897
|
pes2o/s2orc
|
v3-fos-license
|
Acute Kidney Injury (AKI)
Kidneys perform a multitude of essential functions within the human body. Of these the most important are (1) maintaining pH through regulation of acid/base levels and (2) excreting end products of metabolism. As for most organ systems, these functions are especially important for healing following trauma and/or surgery, and they decline with age. Acute Kidney Injury (AKI) is one of the common forms of organ failure seen in the ICU, and elderly patients are more prone to it. The causes may be classified as prerenal (inadequate perfusion), renal (inherent kidney disease) and post-renal (urinary obstruction). Preventing AKI should be an important concern in all critically ill patients, and it is especially important in elderly patients, since the development of AKI can significantly increase in-hospital mortality. Once AKI has set in, a systematic, step-wise approach to diagnosis and management is key to avoiding adverse outcomes.
Introduction
Kidneys perform a multitude of essential functions within the human body. Of these the most important are (1) maintaining pH through regulation of acid/base levels and (2) excreting end products of metabolism. These functions are especially important for healing following trauma and/or surgery. The essential functions of the kidney take place in two distinct yet connected microscopic entities within the renal parenchyma: the glomerulus and the tubules. The process of removing the end products of metabolism starts with the glomerular capillaries filtering the blood and passing the filtrate on to the renal tubules. One measure of renal function is the glomerular filtration rate (GFR): the volume of fluid passing from the glomerulus into the renal tubules per minute. Within the renal tubules, two processes control what is excreted in the urine: (1) selective reabsorption, by which almost 99% of the filtrate volume is reabsorbed back into the circulation, and (2) active secretion, from the blood into the tubules, of substances that are to be excreted, creatinine being one of them. Through glomerular filtration and tubular active secretion, nearly all the creatinine in the renal artery blood is removed, with hardly any present in the renal veins. Thus the creatinine clearance rate (CCR), defined as the volume of blood cleared of creatinine per minute, closely approximates the GFR and is commonly used as a measure of GFR, which is more difficult to measure directly. Normal values of CCR are given in Table 39.1. There are gender differences in CCR, values being lower in females, likely due to their lower average muscle mass, which is the principal source of creatinine.
There are major morphologic changes that occur in the kidney with increasing age (Table 39.2). These morphologic changes directly affect renal function. Renal blood flow declines to half by age 80 from its peak at age 20, with a progressive decline in GFR. This decline in GFR is manifested by a decrease in CCR, which is maximal at age 20 and, on average, declines by about 6.5 mL/min per decade thereafter [1].
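To illustrate this age dependence quantitatively, the sketch below uses the widely known Cockcroft-Gault estimate of CCR; this formula is standard background rather than part of this chapter, and the patient values are hypothetical:

```python
# Cockcroft-Gault estimate of creatinine clearance (mL/min):
#   CCR = (140 - age) * weight / (72 * serum_Cr), x0.85 for females.
def cockcroft_gault(age_yr, weight_kg, serum_cr_mg_dl, female=False):
    ccr = (140 - age_yr) * weight_kg / (72.0 * serum_cr_mg_dl)
    return ccr * 0.85 if female else ccr

# Same weight and serum creatinine, increasing age:
for age in (20, 40, 60, 80):
    print(age, round(cockcroft_gault(age, 70, 1.0), 1), "mL/min")
# CCR falls steadily with age even at a "normal" serum creatinine.
```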
The overall impact of these changes is a loss of renal concentrating and diluting ability, a decreased ability to conserve sodium, and lower levels of renin and aldosterone with decreased prostaglandin production and an enhanced vasoconstrictive response, leading to increased susceptibility to ischemia and nephrotoxic medications [2]. One of the important reasons for the poorer tolerance of injury and surgery among the elderly is this decline in renal function and reserve.
Acute renal failure is the term that was used in the past to describe injury to the kidney that leaves it unable to perform its essential functions. Renal failure is usually associated with oliguria (urine output <20 mL/h: oliguric renal failure), though it can be observed with more normal or even excessive urine output (non-oliguric renal failure). Multiple studies demonstrated that the development of acute renal failure was associated with a 50% increase in the relative risk of in-hospital mortality. More recently it has been realized that even smaller insults to the kidneys that do not result in overt acute renal failure can adversely affect outcomes [3,4]. Hence the concept and term acute renal failure have been replaced by the RIFLE criteria, which encompass a spectrum of renal dysfunction from "risk" of damage to overt "end-stage" renal failure, with AKI in the middle of that spectrum. RIFLE includes both urinary output criteria and metabolic criteria (serum creatinine or GFR) (Table 39.3) [5]. At the "risk" category, the sensitivity for injury is high though the specificity is relatively poor. Hence patients classified as at risk may or may not have suffered renal injury, but in those not meeting the risk criteria the probability of renal injury is very low. These criteria have been shown to correlate with outcomes [6,7].
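A minimal sketch of the creatinine arm of the RIFLE classification is shown below; the urine-output criteria and the Loss and End-stage categories are omitted, and the thresholds follow the standard published criteria rather than Table 39.3 directly:

```python
# RIFLE creatinine criterion: Risk/Injury/Failure are defined by the
# rise in serum creatinine as a multiple of the patient's baseline.
def rifle_creatinine(baseline_cr, current_cr):
    ratio = current_cr / baseline_cr
    if ratio >= 3.0:
        return "Failure (Cr x3 or more)"
    if ratio >= 2.0:
        return "Injury (Cr x2)"
    if ratio >= 1.5:
        return "Risk (Cr x1.5)"
    return "No AKI by creatinine criterion"

print(rifle_creatinine(1.0, 1.6))   # -> Risk
print(rifle_creatinine(1.0, 3.2))   # -> Failure
```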
While the outcome of any patient who develops AKI is worse, multiple studies and meta-analyses have demonstrated that the incidence of AKI is higher, and the degree to which AKI adversely impacts outcomes more pronounced, in the elderly [8,9].
Causes of AKI
The causes of AKI are numerous and are classified as prerenal, renal, and post-renal (Table 39.4). Prerenal denotes a reduction in renal perfusion, either in total perfusion volume and/or in perfusion pressure. This leaves the kidney unable to perform its function even though there is no inherent renal pathology. Renal causes are those where the kidney does not perform its function due to inherent renal disease, either acute or chronic. Post-renal includes any disease or condition causing an obstruction to the free flow of urine from the renal collecting system down to the external urethral meatus. The large majority (>75%) of patients with AKI encountered in the surgical intensive care unit (ICU) have either hypovolemia causing prerenal AKI or acute tubular necrosis (ATN) causing renal AKI.
Prerenal AKI
In surgical patients with prerenal AKI, the cause is most often a reduction in effective circulating volume, due either to blood loss or to redistribution. The latter occurs in ill patients from systemic inflammation and loss of intravascular volume into the interstitial space. It can also occur in patients with heart failure where, despite overall fluid retention, the intravascular volume is depleted. Auto-regulatory mechanisms within the kidneys allow them to function despite reduced perfusion, but these mechanisms too are less effective with age, can be overwhelmed by extreme reductions in perfusion, and can be interfered with. These auto-regulatory pathways depend upon chemical signaling involving prostaglandins and the renin-angiotensin II pathway. Thus nonsteroidal anti-inflammatory medications, which affect the production of prostaglandins, and ACE inhibitors, which interfere with the renin pathway, adversely affect auto-regulation and can cause severe AKI in an at-risk patient [10,11].
Reduced renal perfusion directly leads to a reduction in GFR. Less filtrate reaching the tubules results in increased reabsorption of urea, causing an increase in blood urea nitrogen (BUN). Since creatinine is principally secreted into the tubular lumen and is less dependent upon glomerular filtration, the rise in creatinine is limited in prerenal AKI. This leads to a higher BUN/creatinine ratio (>20) in prerenal AKI. Although this elevated ratio is a strong pointer to prerenal AKI, by itself it is not diagnostic, since it can also be elevated in hyper-catabolic states. The sine qua non of prerenal AKI is intense conservation of Na and water by the kidneys. This is demonstrated by oliguria and highly concentrated urine (urine osmolality >500 mOsm/kg) with a very low Na concentration (usually <10 mEq/L) and a low fractional excretion of Na (FE Na < 1%; see below).
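The fractional excretion of sodium mentioned above is computed from paired urine and plasma values; the sketch below implements the standard formula with hypothetical example values:

```python
# Fractional excretion of sodium:
#   FENa (%) = (urine_Na * plasma_Cr) / (plasma_Na * urine_Cr) * 100,
# used to separate prerenal AKI (FENa < 1%) from ATN (FENa > 3%).
def fena_percent(urine_na, plasma_na, urine_cr, plasma_cr):
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100.0

# Typical prerenal picture: avid Na retention, concentrated urine.
print(fena_percent(urine_na=8, plasma_na=140, urine_cr=100, plasma_cr=1.5))  # ~0.09%
# Typical ATN picture: Na wasting, isosthenuric urine.
print(fena_percent(urine_na=60, plasma_na=140, urine_cr=20, plasma_cr=2.0))  # ~4.3%
```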
Renal AKI
Renal AKI results from reduced renal function as a result of renal parenchymal disease. These states may be classified as vascular, glomerular, and tubulointerstitial. Vascular causes include bilateral occlusions of the major renal vessels (renal artery and/or vein) or widespread microscopic thrombosis of the intrarenal vasculature occurring in a variety of syndromes (thrombotic thrombocytopenic purpura, hemolytic uremic syndrome, etc.). Glomerular dysfunction is seen in the multiple types of acute glomerulonephritis that lead to renal dysfunction. The large majority of renal AKI observed in this setting is caused by tubulointerstitial disease: acute tubular necrosis (ATN) and acute interstitial nephritis (AIN). ATN is by far the most common form of renal AKI. Pathologically, ATN is associated with (1) necrosis and sloughing of the epithelial cells lining the lumen of the tubules, causing obstruction; (2) back-leak of the filtrate into the circulation through the disrupted tubular epithelium; and (3) reduced glomerular blood flow, likely due to afferent arteriolar vasoconstriction. Together these result in severe renal dysfunction with a significant rise in serum BUN and creatinine and a BUN/Cr ratio of <20. Additionally, the sloughed-off epithelial cells from the tubular lumen make their way down the urinary passage and can appear on urinalysis as cellular casts. Since renal function is directly affected, the urine is usually not concentrated and has an osmolality similar to that of plasma. For the same reason, the kidneys fail to conserve Na, and hence FE_Na is elevated to >3%. As noted above, reduction in renal perfusion leads to prerenal AKI that is often rapidly reversible by improving renal perfusion. When the reduced perfusion is severe, it leads to ATN, where recovery is slower since the epithelial lining of the tubules has to be regenerated [12]. A rare but very severe form of AKI is caused by bilateral acute cortical necrosis (ACN). Unlike ATN, where the tubules are primarily affected and the glomeruli spared, in ACN both the tubules and glomeruli are affected by the necrotic process. The inciting event is usually very severe shock from any cause, though the majority of cases are seen in association with obstetric emergencies: placental abruption, amniotic fluid embolism, toxemia of pregnancy, etc. Pathologically there are fibrin thrombi within the capillary beds of the kidney, with necrosis. Severe oliguria is the norm with ACN and, unlike in ATN, recovery is uncommon. AIN can be caused by multiple disorders and is characterized by acute inflammation of the renal interstitium and tubules. The nature of the interstitial infiltrate depends upon the primary condition. Since AIN is often an allergic reaction to medications, it is associated with an eosinophilic infiltrate within the renal parenchyma and cutaneous manifestations. It can also be caused by some rare infections such as brucellosis and Epstein-Barr viral infection.
Post-renal AKI
Post-renal AKI is also known as "obstructive uropathy." It is caused by complete obstruction to urinary outflow. For a patient to develop post-renal AKI, the obstruction needs to be complete or near complete and affect both kidneys. Long-standing partial obstruction can lead to an inability of the kidneys to concentrate urine and an acquired form of nephrogenic diabetes insipidus [13]. The common causes of complete bilateral obstruction leading to the post-renal form of AKI include genitourinary malignancy, enlarged prostate, and urinary stone disease. A sometimes-missed cause is papillary necrosis caused by NSAID use, in which the renal papillae undergo necrosis, slough off, and cause obstruction. Such patients can present with painless hematuria. A complete or near-complete cessation of urinary flow should prompt consideration of post-renal AKI, since with prerenal and renal causes the decrease in urinary output is rarely complete. While renal and prerenal AKI are diagnosed in the appropriate settings by laboratory tests of blood and urine, post-renal AKI requires imaging to arrive at the correct diagnosis.
Approach to AKI in the Surgical ICU (Fig. 39.1)
Prevention
Even relatively mild degrees of renal dysfunction are associated with worse outcomes; hence, preventing AKI from occurring in the first place must be an important goal of all critical care in the ICU. Since in most instances AKI is, or at least starts as, prerenal from loss of effective circulating volume, keeping a patient euvolemic is extremely important. In the past, in an attempt to prevent the development of AKI, patients in surgical ICUs were often given too much volume, especially in the form of crystalloids. This was directly related to the development of multiple complications of volume overload, namely ARDS, abdominal compartment syndrome, etc. The current understanding is that while hypoperfusion due to hypovolemia should be avoided to prevent organ dysfunction such as AKI, hypervolemia is to be avoided as well, and the goal should be euvolemia. Instead of relying completely on provider judgment in assessing the intravascular volume status of a patient, more objective criteria (stroke volume/systolic pressure variation; intensivist-performed point-of-care echocardiogram; passive leg raise tests, etc.) should be utilized. Second, when using nephrotoxic medications, a careful risk/benefit analysis should be performed to ensure that the benefit of the medication clearly outweighs the risk of nephrotoxicity. If such nephrotoxic medications are utilized, the dose should be carefully adjusted for the individual patient to account for age or other factors related to poorer clearance.
While the general principles of prevention outlined above apply to almost all patients admitted to the ICU, they are especially important in patients with conditions placing them at greater risk of developing AKI. Surgical patients at particular risk of developing AKI are (1) patients suffering hemorrhagic shock due to trauma or any other surgical condition, e.g., ruptured aneurysm and severe necrotizing pancreatitis; (2) postoperative patients following major abdominal, vascular, or open heart surgery; (3) traumatized patients with massive crush injury releasing myoglobin and causing pigment-induced nephropathy (see below); (4) major burn patients; (5) patients with preexisting renal disease; (6) patients suffering from systemic sepsis; and (7) geriatric patients. There have been attempts to quantify risk in specific populations, but these remain imprecise and of moderate sensitivity at best [14]. There have also been attempts at developing interventions that could prevent the development of AKI in these at-risk patients. These include the use of low-dose dopamine, diuretics, renal protective agents such as mannitol, and alkalinizing agents such as sodium bicarbonate.
To date no such strategy has proven effective in preventing AKI [15,16]. The best strategy for prevention, as mentioned above, is maintenance of euvolemia and strict attention to the use of nephrotoxic agents.
Diagnostic Approach to AKI in the ICU
Despite all preventative measures, some critically ill patients will develop AKI. All patients at risk should be carefully monitored with hourly urine output and at least once-daily checking of BUN and serum creatinine. If there is an abrupt decrease in urinary output, the catheter should be checked for blockage. If no blockage is found, patients meeting the "risk" RIFLE criteria should have another careful evaluation of (1) intravascular volume; (2) mean arterial pressure; and (3) use of diuretics and nephrotoxic agents. In addition, patients should be evaluated by checking FE_Na, to differentiate prerenal from renal AKI, and by microscopic urine analysis to detect the presence and type of casts, which can help determine the cause of the AKI. FE_Na is determined from simultaneously obtained urine and plasma samples by the following formula: FE_Na = [(Urinary Na × Plasma Creatinine) / (Plasma Na × Urinary Creatinine)] × 100.
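As a worked example, the formula translates directly into code; the interpretation thresholds are the ones quoted earlier in the chapter (<1% prerenal, >3% ATN), with the intermediate range left indeterminate here, and the laboratory values are invented for illustration.

```python
def fe_na(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium, in percent."""
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100.0

def interpret_fe_na(value_pct):
    if value_pct < 1.0:
        return "consistent with prerenal AKI"
    if value_pct > 3.0:
        return "consistent with ATN"
    return "indeterminate"

# Example: urine Na 12 mEq/L, plasma Na 140 mEq/L, urine Cr 90 mg/dL, plasma Cr 2.1 mg/dL
value = fe_na(urine_na=12, plasma_na=140, urine_cr=90, plasma_cr=2.1)
print(f"FE_Na = {value:.2f}% -> {interpret_fe_na(value)}")  # 0.20% -> prerenal pattern
```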
In a patient with prerenal AKI, all diuretics should be suspended and intravascular volume replenished to euvolemic levels. After adequate volume resuscitation, pressors should be utilized to increase the renal perfusion pressure. In most cases of prerenal AKI, these measures should suffice to reverse the process. In patients where post-renal AKI is suspected, a sonogram should be performed to evaluate for any dilatation of the urinary passage suggesting obstruction. If such an obstruction is found, it will need to be relieved to reverse the AKI. In patients where prerenal AKI does not reverse, obstruction has been ruled out, or renal AKI is diagnosed, the treatment of AKI should be as outlined below.
AKI Therapy
Treatment for AKI should proceed in the stepwise fashion outlined in Fig. 39.1 (a stepwise algorithm for any ICU patient at risk of AKI). In the past, patients with AKI were strictly restricted to 1–1.5 L of fluid per day. This created problems with essential therapeutic drugs and also led to severe thirst. Now, with the relative ease of renal replacement therapy, fluid restriction is not necessary, and patients should get adequate fluids to maintain euvolemia, receive appropriate medications, and be supplied with adequate calories and protein.
Pigment-Induced Nephropathy
Increased plasma levels of the oxygen-transporting pigments myoglobin and hemoglobin can lead to AKI. Myoglobinemia, seen with injuries involving muscle crush and sometimes after heavy use of street drugs [17,18], is the more common form, since myoglobin's low molecular weight of ~17,000 Daltons allows it to be filtered by the glomerulus and form proteinaceous casts within the tubular lumen. The pigment is also directly toxic to the tubular cells via free oxygen radicals. Serum levels of creatine kinase are elevated to >5000 U/L and often run much higher. Free hemoglobin is a less common cause of AKI, since the molecule is much larger and is usually not filtered through the glomerulus. Additionally, free hemoglobin binds to haptoglobin, forming a large complex that cannot be filtered. Only when massive hemolysis exhausts the supply of haptoglobin does free hemoglobin appear in the circulation and cause hemoglobinuric AKI. The pathophysiology is the same for both pigments. Therapy for pigment nephropathy follows the same general principles outlined above. The major difference is the addition of forced diuresis with volume expansion and the use of furosemide. Mannitol can also be used, though judiciously, since it can acutely increase the circulating volume and, if diuresis does not occur, lead to volume overload [19]. Both pigments tend to be more soluble in alkaline urine; hence using sodium bicarbonate to alkalinize the urine to pH > 6.5 is also recommended, especially to prevent the development of AKI in at-risk patients.
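A minimal sketch of the screening and therapy targets mentioned above; the CK and urine-pH cut-offs are the ones quoted in the text, and the suggested actions simply restate the therapy described in this paragraph.

```python
def pigment_nephropathy_flags(ck_u_l, urine_ph):
    """Flag the rhabdomyolysis-related thresholds discussed in the text."""
    flags = []
    if ck_u_l > 5000:
        flags.append("CK > 5000 U/L: risk of myoglobinuric AKI; "
                     "consider volume expansion and forced diuresis")
    if urine_ph <= 6.5:
        flags.append("urine pH <= 6.5: below alkalinization target; "
                     "consider sodium bicarbonate")
    return flags

print(pigment_nephropathy_flags(ck_u_l=18000, urine_ph=5.9))
```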
Contrast-Induced Nephropathy (CIN)
Intravenous radiocontrast agents can lead to AKI, termed CIN. With the rapid expansion of diagnostic and therapeutic radiologic interventions, coupled with an aging population with significant comorbidities, the incidence of CIN is rising. Risk factors for CIN are presented in Table 39.5. The actual incidence, even in prospective studies, varies from a low of <5% to a high of 50%, likely due to different study populations and differing definitions of CIN. The most commonly accepted definition is an absolute increase in serum creatinine of 0.5 mg/dL or an increase of 25% above baseline; the increase commonly occurs 48–72 h post contrast exposure [20]. The proposed mechanism of CIN likely involves vasoconstriction within the renal parenchyma, leading to AKI with a prerenal type of presentation, though there is some evidence of direct toxicity mediated by free radicals [21]. A number of interventions have been studied to reduce the incidence of CIN. Interventions that have shown benefit in at least some, though not all, studies are volume expansion, alkalinization, use of N-acetylcysteine, limiting the volume of contrast agent, using lower-osmolarity agents, and discontinuing other nephrotoxic medications [22][23][24][25][26][27]. Hemodialysis, either before or after radiocontrast administration, to dialyze out the agent has not been shown to be beneficial [28].
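The accepted definition above reduces to a simple predicate, sketched here for illustration (creatinine values in mg/dL, compared at the usual 48–72 h window).

```python
def meets_cin_definition(baseline_cr, cr_at_48_72h):
    """CIN: absolute rise >= 0.5 mg/dL, or >= 25% rise above baseline."""
    absolute_rise = (cr_at_48_72h - baseline_cr) >= 0.5
    relative_rise = cr_at_48_72h >= 1.25 * baseline_cr
    return absolute_rise or relative_rise

print(meets_cin_definition(1.0, 1.3))  # True: +30% although only +0.3 mg/dL
print(meets_cin_definition(2.0, 2.3))  # False: +15% and only +0.3 mg/dL
```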
Hepatorenal Syndrome (HRS)
HRS is a unique form of AKI seen in patients with advanced hepatic disease. The onset can vary from fairly acute to quite insidious. The typical clinical presentation is very similar to prerenal forms of AKI, likely reflecting severe intrarenal vasoconstriction rather than inherent renal parenchymal disease. This is supported by the observation that HRS rapidly resolves if the hepatic disease is reversed or a functioning liver is transplanted. The pathophysiology likely involves vasodilatation in the splanchnic circulation caused by portal hypertension. This vasodilatation results in pooling of the blood volume within the splanchnic circulation, especially in the large, dilated mesenteric veins. This in turn leads to poor venous return to the heart and reduced perfusion to the rest of the body, including the kidneys. The kidneys respond to this relative hypoperfusion with afferent arteriolar vasoconstriction, which reduces GFR. Patients with HRS are prone to developing hepatic encephalopathy. Electrolyte disturbances (hypokalemia and hyponatremia) and acid-base disorders are seen more often in HRS than in other causes of AKI. While the principles of care are similar in this form of AKI, fluid balance becomes much more challenging: for the AKI, a relatively full intravascular compartment and avoidance of diuretics are preferred, while to prevent ascites and peripheral edema from hepatic insufficiency, a mild degree of hypovolemia with diuretic use is preferred. Drainage of ascites, especially if causing abdominal compartment syndrome (see below), either externally or internally via a peritoneovenous shunt, may offer partial relief.
Portal-systemic shunts too may ameliorate the AKI, but AKI in and of itself is not an indication to perform such shunts [29][30][31][32].
Abdominal Compartment Syndrome (ACS) Associated AKI
Over the past two decades, ACS has been accepted as a distinct nosologic entity in which an increase in intra-abdominal pressure leads to organ system dysfunction [33]. The kidneys, along with the lungs, are the organs most sensitive to elevated intra-abdominal pressures. Initially, renal function is affected by the elevated intra-abdominal pressure compressing the renal veins. In later stages, as the ACS progresses and cardiac output falls due to diminished venous return, this further contributes to AKI. Initially the presentation is oliguria with a prerenal picture, but if the ACS progresses, ATN sets in. The only effective therapy is rapid relief of the intra-abdominal hypertension, usually by surgical decompression [34].
Prognosis and Outcome
The majority (~80%) of patients who develop AKI and survive will recover renal function and be dialysis-free [35]. That said, the prognosis for renal function depends upon the severity of the initial insult that caused the AKI. Patients with brief vascular insults will likely recover near-baseline function within 72 h, while those with more severe and prolonged insults requiring dialysis will likely have some long-term effect, with the serum creatinine remaining 1–2 mg/dL above the pre-insult baseline value [36,37]. During the recovery phase it is extremely important to avoid a second insult in the form of hypovolemia, nephrotoxic medications, etc. Recovery from AKI is heralded by increasing urine output and a plateau or fall in serum creatinine despite the withholding of dialysis. Different functions of the kidney may recover at different time intervals: urine output may increase first, followed by a reduction in serum creatinine, and finally the ability to regenerate bicarbonate and maintain body pH. During the recovery phase, electrolyte imbalances are common; hence careful monitoring is essential to keep life-threatening abnormalities from developing.
It has long been known that the development of AKI is associated with higher mortality. The reasons for this remain somewhat unclear and may be related to the adverse impact of AKI on immune function [38]. Reported mortality rates range from 25 to 64% [39,40]; the variation likely represents differing study populations. Overall mortality rates are lower for non-oliguric forms of AKI and for AKI due to CIN, while very high rates are reported for HRS. The exact contribution of AKI as an independent risk factor for mortality among critically ill patients is debated. In a prospective study by Hoa et al. of post-cardiac surgery patients, 145 of 843 (17%) developed AKI, and AKI was found to be an independent risk factor for mortality with a hazard ratio of 7.8. The overall outcomes for patients developing AKI in the ICU have not changed significantly over the past five decades [41].
As mentioned earlier in the chapter, multiple studies and meta-analyses have demonstrated that overall outcomes in general, and the return of renal function after AKI in particular, are adversely affected by age [8,9].
Renal Replacement Therapy (RRT)
In critically ill patients with rapidly deteriorating renal function coupled with a hyper-catabolic state from the primary illness, the patient will likely die unless renal replacement therapy is provided. In broad terms, RRT can be provided in two forms: hemodialysis (HD) and peritoneal dialysis (PD). In both types of dialysis the principle of solute and fluid removal is the same: blood and dialysate fluid are separated by a semipermeable membrane, and solutes and fluid move across this membrane following concentration and osmotic gradients. The two main processes are convection, where hydrostatic pressure serves as the driving force, and diffusion, where concentration gradients and osmotic pressure serve as the driving forces. By manipulating the flow rates of blood and dialysate, and the composition of the dialysate, it is possible to control what gets removed (fluid or solute) and, in the case of solute, what types of solute (high molecular weight or low/medium molecular weight). In HD the semipermeable membrane is in a cartridge called the hemofilter or hemodialyzer, while in PD the peritoneal surface serves as the semipermeable membrane. For the most part, PD is not used in the ICU setting, and the focus of RRT in this chapter is HD.
There are two forms of HD: intermittent (IHD) and continuous (CRRT). IHD is the main form of RRT utilized for the large majority of patients with end-stage renal disease and is also utilized in the ICU for more stable patients. CRRT is less taxing to the hemodynamics of the patient and hence is often utilized for critically ill, more unstable patients who need fluid and/or solute removal due to AKI. RRT is associated with a host of complications; a complete discussion of them and their management is beyond the scope of this chapter and is covered elsewhere in the text. The common complications are thrombosis of the access or dialysis circuit, infection, hypotension, and electrolyte imbalances. The decision to initiate RRT should be taken with a careful evaluation of risks and benefits and, if initiated, meticulous attention to detail is paramount to minimize complications.
Indications
Indications for dialysis may be divided into emergent and non-emergent. Emergent indications are those where, without dialysis, the patient may die within a very short period of time. These can be summarized by the mnemonic AEIOU: A: acidosis; E: electrolytes, principally hyperkalemia; I: ingestions or overdose of medications/drugs; O: overload of fluid causing heart failure; U: uremia leading to encephalopathy/pericarditis. The principal non-emergent indication for dialysis is ESRD, where renal function has deteriorated to the point that without dialysis the patient cannot survive long term. In between these two extremes is the use of dialysis for AKI, where there is an expectation of return of renal function sufficient for the patient to live without dialysis. Despite AKI being a very common disorder encountered in the ICU, there is surprisingly little consensus on the indications for dialysis. Some units start dialysis quite early with the expectation that doing so improves the ultimate outcome. Other units tend to delay dialysis until the uremia leads to encephalopathy or an emergent indication emerges. The results of studies to determine the ideal indication and timing of initiating dialysis for AKI are mixed. Combining the results of multiple studies by meta-analysis is hampered by differing definitions of early and late dialysis across studies, and by the fact that the older studies were performed with IHD while the more recent ones have been done with CRRT [42,43]. In an attempt to provide some objective guidance, the AKI Network has published guidelines [44]. The guidelines emphasize: (1) the indications may be taken as absolute and relative. Absolute indications are such that each, by itself, would merit dialysis. Relative indications, on the other hand, are such that while the individual indication by itself may not merit dialysis, when taken with the entire clinical scenario the patient merits dialysis; the latter occurs most often in the face of MSOF. The indications are summarized in Table 39.6. (2) Fluid overload in critically ill patients is associated with worse outcomes; hence, in critically ill patients with fluid overload, early CRRT may help with fluid management and possibly improve outcomes [45]. (3) In line with (2), there is a tendency towards initiating dialysis early in patients with oliguric AKI as opposed to non-oliguric AKI. The above discussion notwithstanding, all agree that dialysis should be administered to treat severe uremia even in the absence of any of the emergent indications. Severe uremia is usually defined as a BUN of >100 mg/dL. Lastly, the panel did not critically evaluate some emerging evidence for the early use of CRRT in patients with sepsis, where the investigators claim that by dialyzing early and vigorously, inflammatory cytokines are removed and outcomes improved. The hypothesis, while intriguing, remains unproven [46,47].
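The AEIOU screen can be sketched as below. Only the BUN > 100 mg/dL threshold for severe uremia comes from the text; the pH and potassium cut-offs are assumptions for illustration only and must be replaced by unit policy.

```python
def emergent_dialysis_indications(ph, potassium, dialyzable_ingestion,
                                  overload_with_heart_failure, bun):
    reasons = []
    if ph < 7.1:                       # assumed cut-off for refractory acidosis
        reasons.append("A: severe acidosis")
    if potassium > 6.5:                # assumed cut-off for hyperkalemia (mEq/L)
        reasons.append("E: electrolytes (hyperkalemia)")
    if dialyzable_ingestion:
        reasons.append("I: ingestion/overdose of a dialyzable agent")
    if overload_with_heart_failure:
        reasons.append("O: fluid overload causing heart failure")
    if bun > 100:                      # severe uremia as defined in the text
        reasons.append("U: severe uremia")
    return reasons

print(emergent_dialysis_indications(7.05, 6.9, False, False, 120))
```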
Access
In patients with ESRD, dialysis is often anticipated and planned for by creation of an arteriovenous fistula or graft even before the patient actually needs dialysis. In patients with AKI encountered in the ICU, dialysis is usually performed via large (12–15 Fr) dual-lumen catheters inserted percutaneously into a large vein or directly into the right atrium. These catheters are of two types: cuffed/tunneled and uncuffed/non-tunneled. Uncuffed, non-tunneled catheters are placed in the unit just as central venous lines are. The commonly used ones are made of polyurethane and are usually placed acutely for urgent dialysis in the internal jugular or femoral veins. Insertion into the subclavian vein is avoided to prevent stricture of the vein, which may hamper future placement of a fistula or graft in the ipsilateral upper extremity. When placed with appropriate antiseptic precautions, they are safe to use for 2–3 weeks. If it is anticipated that dialysis may be required for a longer duration, a cuffed, tunneled catheter is preferred. These are made of silicone, are usually inserted with fluoroscopic guidance, and have a cuff that sits in a subcutaneous tunnel. The tunnel and cuff tend to prevent catheter infection, and hence these can be used for longer periods of time.
Dose
As is the case with exact indications and timing, there is also no consensus regarding the ideal dose or intensity of dialysis. While a number of smaller, usually retrospective studies suggest that higher-intensity dialysis is associated with improved outcomes [48][49][50], larger prospective studies have failed to demonstrate this [51,52]. Guidelines on how much dialysis should be administered to patients with AKI have been published by the Acute Renal Failure Trial Network (ATN Trial) [52]. Most centers tend to keep the BUN at about 70 mg/dL. Besides solute reduction, the other component of dialysis is intravascular volume management. In patients who are septic and in a state of systemic inflammation, the capillaries remain hyper-permeable, and any removal of fluid from the intravascular compartment leads to hemodynamic instability even though total body water is increased. On the other hand, in patients who are recovering and whose state of inflammation is subsiding, the capillaries regain their selective permeability, and removing fluid from the intravascular compartment does not lead to hemodynamic compromise; rather, there is resorption of the "third-space" fluid from the interstitial compartment. Objective measures of intravascular volume should be utilized in determining how much volume should be removed.
Modality
As with other issues related to AKI, there is ongoing debate as to which modality, IHD or CRRT, is superior. It is generally accepted that CRRT is better tolerated, especially by critically ill patients who may have some degree of hemodynamic compromise. The results of studies are mixed in terms of overall mortality and return of renal function [53][54][55][56][57]. The modalities can be difficult to compare directly, since it is difficult to dialyze the same amount of solute and volume with IHD, performed for a few hours usually on alternate days, as with CRRT, which can be performed round the clock. In the largest randomized prospective study to date, the outcomes were similar for the two forms of dialysis [58].
One surprising result of that study, unlike many others, was that even critically ill patients could tolerate IHD if the dialysate had a very high concentration of Na. In the US, most ICUs opt for CRRT in critically ill patients, especially those with unstable or tenuous hemodynamic status, and utilize IHD for stable patients. In the UK and Australia, CRRT is the modality of choice for AKI in the ICU.
Summary
The large majority of patients admitted to the surgical ICU are at risk of AKI as defined by the RIFLE criteria. Due to a host of anatomic and physiologic changes within the kidney that occur with age, the risk of AKI is significantly higher in the elderly. The occurrence of even mild AKI adversely affects overall outcomes, with the elderly often having the worst outcomes of all. Meticulous attention to fluid management and minimizing the use of nephrotoxic medications can help reduce the incidence. All at-risk patients who do develop AKI should have a reexamination of intravascular volume and discontinuation or dose adjustment of nephrotoxic medications. RRT is often required for managing patients who do develop AKI.
Overexpression of microRNA-196b Accelerates Invasiveness of Cancer Cells in Recurrent Epithelial Ovarian Cancer Through Regulation of Homeobox A9
Background/Aim: Although microRNAs (miRNAs) are known to influence messenger RNA post-transcriptional control and contribute to human tumorigenesis, little is known about the differences in miRNA expression between primary and recurrent epithelial ovarian cancer (EOC). The purpose of this study was to assess the differential miRNA expression between primary and recurrent EOC and to investigate whether miR-196b could regulate the expression of the Homeobox A9 (HOXA9) gene, and thus affect the invasiveness of cancer cells in recurrent EOC. Materials and Methods: Microarrays were used to generate the expression profiles of 6658 miRNAs from samples of 10 patients with EOC. miRNA expression patterns were compared between primary and recurrent EOC. Aberrantly expressed miRNAs, associated genes, and invasion activities were validated by a luciferase assay and an in vitro invasion assay. Results: miRNA microarray analysis identified 33 overexpressed miRNAs (including miR-196b) and 18 underexpressed miRNAs in recurrent EOC from 6658 human miRNAs. HOXA9 expression was inversely correlated with miR-196b levels in recurrent EOC. We noted that miR-196b induced ovarian cancer cell invasiveness in recurrent EOC by an in vitro invasion assay. Conclusion: Overexpression of miR-196b may contribute to invasion activities in recurrent EOC by regulating the HOXA9 gene. Moreover, miR-196b can be a potential biomarker in recurrent EOC.
Epithelial ovarian cancer (EOC) is the most lethal of all gynecological cancers, the fourth leading cause of cancer-related deaths in women in the United States, and the fifth most common malignancy in women in developed countries (1). In general, less than half (45%) of patients with EOC survive for more than five years after the initial diagnosis (2). The poor survival from EOC is due to the high percentage of patients diagnosed at an advanced stage, who often develop resistance to combined chemotherapy and show substantially poor prognosis.
Most patients with EOC are treated with platinum- and taxane-based chemotherapy. Although initial treatment is successful for 80–90% of patients, most responders eventually acquire resistance to a wide range of chemotherapeutic agents. Prediction of which patients will respond to a distinct therapy would help to optimize tailored treatment.
MicroRNAs (miRNAs) are small (~22 nucleotides) non-coding RNAs that regulate gene expression at the transcriptional and/or post-transcriptional levels (3). These molecules typically reduce the translation and stability of messenger RNA (mRNA), including that of genes mediating processes in tumorigenesis such as inflammation, cell cycle regulation, stress response, differentiation, apoptosis, and invasion. miRNA targeting is initiated through specific base-pairing interactions between the 5' end ("seed" region) of the miRNA and sites within the coding and untranslated regions (UTRs) of mRNAs; target sites in the 3'-UTR lead to more effective mRNA destabilization (4). miRNAs play a role in the tumorigenic process and are potential therapeutic targets and novel biomarkers in most human cancers. Moreover, numerous miRNA profiling studies in EOC have identified miRNAs associated with chemotherapy resistance and disease progression (5)(6)(7)(8). However, little is known about the differences in miRNA expression between primary and recurrent EOC. Previously, we reported aberrant miRNA expression in recurrent EOC compared to that in primary EOC. To understand the biology of recurrent ovarian cancer, we examined the expression of 6658 miRNAs in both primary and recurrent EOC samples. Among the 6658 human miRNAs, 33 were overexpressed and 18 were underexpressed in recurrent EOC (9).
The purpose of this study was to investigate miRNA expression levels in recurrent EOC and to examine whether miR-196b regulates HOXA9 and thereby the invasiveness of cancer cells in recurrent EOC.
Materials and Methods
Patients and tissue samples. Between September 2013 and May 2014, tumor tissue specimens were obtained from 10 Korean patients with EOC (primary EOC, n=5; recurrent EOC, n=5) who underwent surgery at the Kyungpook National University Medical Center, Daegu, Korea. Histopathologic diagnoses were established using the World Health Organization criteria, and the tumor histotype was serous cystadenocarcinoma in all patients. The primary and recurrent EOC cases were different patient populations. The 5 patients with recurrent EOC had received at least 6 cycles of platinum-based combination chemotherapy (paclitaxel plus carboplatin). Tissue specimens were obtained during surgery, rapidly frozen in liquid nitrogen, and stored at −80˚C until analysis. Tissue samples were histologically confirmed by hematoxylin-eosin staining. The Institutional Review Board approved the study protocol, and written informed consent was obtained from all patients.

miRNA microarray. Total RNA (2 μg) was extracted from the transduced cells using Trizol and the RNeasy Miniprep kit (Qiagen, Hilden, Germany) according to the manufacturers' protocols. RNA quality was verified with the Agilent RNA Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). Biotinylated cRNA was amplified using double in vitro transcription in accordance with the Affymetrix small-sample labeling protocol VII (Affymetrix, Santa Clara, CA, USA). Total RNA was then hybridized onto an Affymetrix GeneChip miRNA 4.0 array as per the provider's standard protocols. Fluorescence intensities were quantified and analyzed using the GeneChip operating software (Affymetrix). Raw data were normalized by the Robust Multi-array Average (RMA) method to remove systematic variations. Briefly, RMA corrects raw data for background using a formula based on the normal distribution and uses a linear model to estimate values on a log scale. Transcripts whose log-transformed expression ratios differed by at least two-fold between recurrent and primary EOC tissues were identified.
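The two-fold filter on RMA-normalized (log2-scale) data amounts to keeping transcripts whose mean log2 difference is at least one unit. A sketch on toy data (array shapes and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# rows = miRNAs, columns = samples; toy log2, RMA-normalized intensities
primary   = rng.normal(6.0, 1.0, size=(6658, 5))
recurrent = rng.normal(6.0, 1.0, size=(6658, 5))

log2_ratio = recurrent.mean(axis=1) - primary.mean(axis=1)
selected = np.abs(log2_ratio) >= 1.0   # |fold change| >= 2
print(f"{selected.sum()} transcripts pass the two-fold filter")
```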
Construction of plasmid and luciferase assay.
To investigate whether miR-196b modulates HOXA9 expression by binding its 3'-UTR, a luciferase assay was performed using the psiCHECK2 plasmid containing the HOXA9 3'-UTR. A 1,035-bp fragment of the HOXA9 3'-UTR was synthesized by PCR and cloned into the dual-luciferase vector psiCHECK2 (Promega, Madison, WI, USA). A forward primer with a XhoI restriction site (5'-CCA CTC GAG AAA GAA CTG TCC GTC CCC CT-3') and a reverse primer with a NotI restriction site (5'-CCA GCG GCC GCG GTC AGT AGG CCT TGA GGT AAC-3') were used to amplify the HOXA9 3'-UTR region. The correct sequence of the clone was verified by DNA sequencing. 293T cells were seeded in 12-well plates in DMEM supplemented with 10% heat-inactivated FBS and then transfected with psiCHECK2-HOXA9 constructs containing the 3'-UTR of HOXA9, in the presence of a miR-196b mimic or scrambled miRNA (Ambion), using the Effectene transfection reagent (Qiagen, Hilden, Germany). The cells were harvested 48 hours after transfection, and cell lysates were prepared according to the Promega instruction manual. Renilla luciferase activity was measured using a Lumat LB953 luminometer (EG&G Berthold, Bad Wildbad, Germany), and the results were normalized to firefly luciferase activity. All experiments were performed in triplicate.
In vitro invasion assays. The invasion assay was performed in triplicate using 48-well microchemotaxis chambers (Neuro Probe, Inc., Gaithersburg, MD, USA) with 8-μm pore membranes (Neuro Probe, Inc.) pre-coated with 10 μg/ml Matrigel (BD Bioscience). SK-OV-3 cells (1×10^4) in 50 μl of serum-free medium were seeded in the upper chamber, and the lower chamber was filled with 26–27 μl of medium containing 10% FBS. After incubation for 24 h at 37˚C, cells that had migrated to the lower surface of the membranes were stained with a Diff-Quick kit (Sysmex, Kobe, Japan) and counted under a microscope. The chambers were stained with 0.2% crystal violet and analyzed by photography. The migration assay was performed by the same procedure using membranes coated with 5 μg/ml collagen IV (Trevigen, Gaithersburg, MD, USA).
Statistical analysis. For GeneChip microarray analysis, statistical comparisons were made using Student's t-test. Differentially expressed miRNAs between primary and recurrent EOC were detected with the t-test, and differences were considered statistically significant when p<0.05.
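For the significance test, a per-miRNA two-sample t-test between the two groups of five samples, keeping p < 0.05, can be sketched as follows (toy data again; the published analysis ran on the actual RMA-normalized intensities):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
primary   = rng.normal(6.0, 1.0, size=(6658, 5))   # toy log2 intensities
recurrent = rng.normal(6.0, 1.0, size=(6658, 5))

t_stat, p_val = stats.ttest_ind(recurrent, primary, axis=1)
print(f"{(p_val < 0.05).sum()} miRNAs differ at p < 0.05 (no multiplicity correction)")
```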
Selection of biomarkers of recurrent EOC.
In the Affymetrix miRNA 4.0 analysis, 4 miRNAs (miR-551b, miR-19b, miR-196b, and miR-3198) were significantly overexpressed in recurrent EOC among the 6658 human miRNAs. We chose miR-196b for further study because it has been suggested as a potential marker in patients with lung adenocarcinoma (10) and in gastric and oral cancer (11,12), whereas its role in EOC has not been fully investigated. The microarray analysis showed that the miR-196b transcription level was significantly higher in recurrent than in primary EOC: the expression level of miR-196b was 138.5 (average score) in recurrent EOC and 6.5 (average score) in primary EOC (p=0.031).
Repression of HOXA9 transcription by miR-196b.
To identify a direct target gene of miR-196b, we searched public databases such as TargetScan and selected HOXA9 from the top 10 predicted target genes: HOXA7, HOXC8, SMC3, SLC9A6, HOXA9, HMGA2, CTBS, ZMYND11, C1orf88, and COL16A1. In the female reproductive system, HOXA9, HOXA10, HOXA11, and HOXA13 are expressed along the Müllerian duct axis (13). In particular, HOXA9 is expressed in the fallopian tubes and has a functional role in normal development and in adult tissues, as well as in cancer (14). Therefore, HOXA9 was a plausible candidate target of miR-196b. To verify whether miR-196b could regulate the HOXA9 gene directly, we generated a Renilla luciferase reporter plasmid with the HOXA9 3'-UTR cloned downstream, containing the putative miR-196b binding sequences predicted at positions 940–959 and 1973–1995 of the HOXA9 3'-UTR. The constructs were co-transfected into 293T cells along with miR-196b or scrambled miRNA, and Renilla luciferase activity was measured after 48 h. As shown in Figure 1, Renilla activity with the miR-196b mimic was significantly lower than that with scrambled miRNA (p=0.003). This suggests that miR-196b represses HOXA9 transcription by directly binding its 3'-UTR.
Induction of cancer cell invasion by miR-196b.
To determine whether miR-196b affects ovarian cancer progression, an invasion assay was performed with SK-OV-3 ovarian cancer cells in vitro. miR-196b was significantly overexpressed in recurrent EOC. The miR-196b mimic and scrambled miRNA were transfected into SK-OV-3 ovarian cancer cells. The invasion assays indicated that overexpression of miR-196b could significantly induce invasiveness in SK-OV-3 cells (Figure 2), suggesting that HOXA9 might be involved in restraining the aggressive behavior of recurrent EOC.
Discussion
Management of EOC has improved over the last 20 years due to effective surgery and optimized combination chemotherapy, i.e. platinum-based drugs combined with taxanes (15). However, the overall cure rate is only 30%. The issues of tumor recurrence, drug resistance, enhanced invasion, and metastasis remain a challenge in the treatment and clinical management of EOC. While existing therapies are considered relatively effective in the treatment of ovarian tumors, mesenchymal/stem cell-like metastasizing EOC cells are generally resistant to these therapies and are considered largely responsible for EOC recurrence (16)(17)(18). Several miRNAs are involved in the response to chemotherapy. Low levels of miR-200c have been associated with a mesenchymal phenotype and may thus be associated with chemoresistance (19). Furthermore, low levels of miR-199a may be a reliable predictor of chemoresistance in recurrent tumors (7). These miRNAs can affect the response to chemotherapeutic drugs and might have both prognostic and predictive value. However, the mechanism of tumor recurrence and chemoresistance in EOC remains poorly understood. Moreover, only a few studies have reported differential miRNA profiles between primary and recurrent EOC. Laios et al. demonstrated that miR-9 and miR-223 could be of potential importance as biomarkers in recurrent EOC (20).
To understand the biology of recurrent EOC and to evaluate recurrent-EOC-specific biomarkers, we investigated the expression of miRNAs in primary and recurrent EOC. Among the overexpressed miRNAs, miR-19b has a role in oncogenic progression via targeting of TP53 (11,21,22). miR-551b and miR-3198 have been poorly characterized with respect to tumorigenesis. miR-196b has been demonstrated to have oncogenic functions in many cancers, though not previously in ovarian cancer. miR-196b promotes invasiveness in oral cancer through the NME4-JNK-TIMP1-MMP signaling pathway (11), and miR-196b overexpression in gastric cancer correlates with poor prognosis (23).
The homeotic or HOX genes share a common 120-base-pair DNA sequence, the homeobox, which encodes a 61-amino-acid peptide called the homeodomain. HOX genes are involved in the development of various cancers. They act not only as transcriptional activators in cancers but also as transcriptional repressors. Thus, both up-regulation and down-regulation of members of the HOX family of transcription factors appear critical for the promotion of tumorigenesis (24). Aberrant expression of HOX genes has been reported in various cancers including EOC. Ko et al. reported that HOXA9 stimulated the ability of EOC cells to attach to peritoneal cells and to migrate; thus, HOXA9 contributes to poor outcomes by promoting intraperitoneal dissemination via induction of P-cadherin (25). However, HOXA9 was expressed at lower levels in cancerous tissues compared with normal tissues in the context of breast cancer (26). Recent review articles have shown that HOX genes are essential regulators of tissue identity and drive tumorigenesis and progression through their involvement in regulating processes such as differentiation, proliferation, adhesion, migration, and apoptosis in EOC (27). Our data revealed that HOXA9 down-regulation by miR-196b led to invasiveness in EOC. Thus, it is not yet known whether HOXA9 functions as an oncogene or as a tumor suppressor, and its role in ovarian tumorigenesis and progression remains to be identified. Genomic analyses of tumors from large cohorts of EOC patients with documented treatment and outcome data are needed to further elucidate the mechanisms underlying HOX gene dysregulation, establish their functional prognostic and predictive roles in EOC development, and replicate previous clinical findings.
In the present study, 21 miRNAs were overexpressed and 16 miRNAs were underexpressed in recurrent EOC. HOXA9 was directly regulated by miR-196b through binding of its 3'-UTR. miR-196b was overexpressed in recurrent EOC and induced ovarian cancer cell invasiveness. Based on these data, miR-196b can be used as a biomarker predicting EOC recurrence, and HOXA9 plays a central role in controlling the aggressive behavior of recurrent EOC.
Conflicts of Interest
The Authors declare that there are no conflicts of interest.
RF Interference in Lens-Based Massive MIMO Systems -- An Application Note
We analyze the uplink radio frequency (RF) interference from a multiplicity of single-antenna user equipments transmitting to a cellular base station (BS) within the same time-frequency resource. The BS is assumed to operate with a lens antenna array, which induces additional focusing gain for the incoming signals. Considering line-of-sight propagation conditions, we characterize the multiuser RF interference properties via approximation of the mainlobe interference as well as the effective interferer probability. The results derived in this application note are foundational to more general multiuser interference analysis across different propagation conditions, which we present in a follow-up paper.
Notation. Boldface upper and lower case symbols denote matrices and vectors, while lightface upper and lower case symbols denote scalar quantities. The Hermitian transpose is denoted by (·)^H. The scalar norm is denoted by |·|, and the floor, ceiling, and indicator functions are expressed as ⌊·⌋, ⌈·⌉, and 1(·), respectively. The "sinc" function is given by sinc(x) = sin(x)/x, while the "maximum" and "minimum" functions are given by max(·) and min(·). Finally, O(·) denotes the "order" of a mathematical term.
I. SYSTEM MODEL
We consider the uplink of a single-cell system, where the base station (BS) is equipped with a large lens antenna array containing M elements. Multiuser operation is assumed, where the lens array receives uplink data streams from L single-antenna user terminals within the same time-frequency interval. The user terminals are located with a uniform random distribution in an area covering a net sector of 2π/3 radians. The array at the BS consists of a flat electromagnetic (EM) lens with elements that are located on the focal arc of the lens. Without loss of generality, we consider azimuth directions-of-arrival (DOAs) and assume that the flat lens has negligible thickness. The EM lens array has a total aperture of D_y × D_z, as the array is located on the y-z plane and is centered at the origin. The focal arc of the lens is defined as a semi-circle around the lens's center in the azimuth plane (x-y plane) with radius F, where F physically represents the focal length of the lens. Accordingly, each element lies on the focal arc at distance F from the lens's phase center, at an angle θ_m ∈ [−π/2, π/2] with respect to the x-axis, where m ∈ M and M = {0, ±1, ..., ±(M−1)/2} denotes the set of antenna indices in the lens array. The antenna elements are deployed on the focal arc such that the spatial frequencies θ̄_m = sin(θ_m) are equally spaced in the interval [−1, 1], as indicated in [1]. Doing this yields

θ̄_m = sin(θ_m) = m/D̄,  m ∈ M,   (1)

where D̄ = D_y/λ is the lens dimension along the azimuth plane normalized by the carrier wavelength, λ. According to this formulation, more elements are deployed in the center of the array than on either side. The relationship between M and D̄ can be observed from (1) as M = 1 + ⌊2D̄⌋. As in [1], when the lens array receives an uplink signal in the form of a uniform plane wave from terminal ℓ, with an azimuth DOA φ_ℓ, the resultant signal received by the m-th element of the array can be written as

a_m(φ̄_ℓ) = √A sinc(m − D̄φ̄_ℓ) e^{−jΦ_0},   (2)

where A = D_yD_z/λ² is the normalized aperture, Φ_0 is a common phase shift from the lens's aperture to the array, and φ̄_ℓ = sin(φ_ℓ) ∈ [−1, 1] is referred to as the spatial frequency corresponding to φ_ℓ. In line with [1,3-5], we assume that the insertion loss of the lens, as well as its boundary effects, are negligible. The expression in (2) enters the interference power via transmission from undesired users to the BS. We denote the total interference power at the BS for user ℓ by I_ℓ^LOS and analyze its form subsequently.
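A numerical sketch of the model above, assuming the sinc-type response in (2) (with the common phase shift Φ_0 omitted); the values of D̄, A and the DOA are illustrative only. Note that numpy's sinc is the normalized sinc, sin(πx)/(πx), whose nulls at integer arguments match the element-spacing design here.

```python
import numpy as np

D_bar = 10.0                          # lens azimuth dimension / wavelength (illustrative)
A = 100.0                             # normalized aperture (illustrative)
M = 1 + int(np.floor(2 * D_bar))      # number of elements: M = 1 + floor(2*D_bar)
m = np.arange(-(M - 1) // 2, (M - 1) // 2 + 1)   # antenna index set

phi = np.deg2rad(17.0)                # example azimuth DOA
a = np.sqrt(A) * np.sinc(m - D_bar * np.sin(phi))   # per-element response

m_star = m[np.argmax(np.abs(a))]
print(f"M = {M}; response peaks at m = {m_star}; D_bar*sin(phi) = {D_bar*np.sin(phi):.2f}")
```

The energy focuses on the few elements around m ≈ D̄ sin(φ), which is the focusing property the lens array provides.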
Here, g_ℓ and h_k denote the maximum-ratio combining vector of user ℓ and the propagation channel vector from user k to the BS, respectively. Note that ∆_ℓ,k = φ_ℓ − φ_k and ∆̄_ℓ,k = φ̄_ℓ − φ̄_k.
Using the fact that ∆̄_ℓ,k = φ̄_ℓ − φ̄_k, where φ̄_ℓ = sin(φ_ℓ) and φ̄_k = sin(φ_k), respectively, (5) can be re-written accordingly. Similarly, after some straightforward algebraic manipulations, D_ℓ^LOS can be expressed in closed form. Substituting (5) and (7) into (4) yields the desired result and concludes the proof.

Remark 1. For a conventional uniform linear array, the analogous interference terms are both proportional to cos(2πD̄(φ̄_ℓ − φ̄_k)). Note that in that case, the LOS propagation channel has a substantially different form/structure in that it is constructed out of phase-shifted exponential functions instead of sinc functions (as for lens arrays).
Remark 2.
According to (4), Fig. 1 depicts I_ℓ^LOS as a function of φ̄_ℓ − φ̄_k. One can observe the following: 1) I_ℓ^LOS is non-monotonic and non-periodic in nature, making its probability density and cumulative density mathematically intractable to analyze. This is true irrespective of the number of interfering sources present in the system.
2) With an increase in the number of elements at the lens array, the majority of the interference lies in the mainlobe (defined from the peak to the first null) of the interference pattern, while the relative sidelobe interference levels are significantly lower. The power ratio between the peak of the mainlobe and the first sidelobe is approximately 13 dB.
For further discussions, see [6] and references therein.
3) From the result in Theorem 1, the first nulls on either side of the mainlobe appear at φ̄_ℓ − φ̄_k = ±1/D̄. As a result, the mainlobe width can be written as 2/D̄ (see the numerical sketch at the end of this note).

Considering the fact that any further analysis of the instantaneous LOS interference power is intractable, to understand the fundamental nature of its probability and cumulative densities, we approximate it by recognizing that the RF interference, I_ℓ^LOS, is composed of a mainlobe surrounded by many smaller sidelobes, whose shape is determined by M, the number of antenna elements at the lens array. As the relative sidelobe levels are negligible, particularly for moderate and larger arrays, the "effective" RF interference can be approximated by the mainlobe only. The details of this approximation are as follows, based on which an effective interferer probability is derived.

Approximation 1. The mainlobe (effective) RF interference can be approximated by restricting attention to Θ_ℓ,k ∈ [−1, 1], where Θ_ℓ,k = D̄(φ̄_ℓ − φ̄_k) is the normalized angular separation of terminals ℓ and k over half the mainlobe width, such that when Θ_ℓ,k ∈ [−1, 1], interferer k falls within the mainlobe of the interference pattern. Moreover, the effective interference from interferer k, given M elements is
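As a rough numerical check of Remark 2 and the mainlobe definition above, the sketch below evaluates an interference pattern of the form I(∆) ∝ sinc²(D̄∆), with ∆ = φ̄_ℓ − φ̄_k; this sinc² form is an assumption consistent with the sinc-type response in (2), not a reproduction of the exact Theorem 1 expression.

```python
import numpy as np

D_bar = 10.0
delta = np.linspace(0.0, 5.0 / D_bar, 200001)   # spatial-frequency separation
I = np.sinc(D_bar * delta) ** 2                 # normalized interference pattern

# First null (expected at delta = 1/D_bar, i.e., mainlobe width 2/D_bar)
interior_min = (I[1:-1] < I[:-2]) & (I[1:-1] < I[2:])
first_null = delta[1 + np.argmax(interior_min)]
print(f"first null at delta ~ {first_null:.4f} (1/D_bar = {1 / D_bar:.4f})")

# First sidelobe level relative to the mainlobe peak (expected ~ -13 dB)
sidelobe_peak = I[delta > first_null].max()
print(f"first sidelobe ~ {10 * np.log10(sidelobe_peak):.1f} dB relative to the peak")
```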
Treatment Patterns of Atopic Dermatitis Medication in 0–10-Year-Olds: A Nationwide Prescription-Based Study
Introduction: The literature on treatment patterns for paediatric atopic dermatitis (AD) is scarce and is rarely based on real-world data. Using national registers, we sought to establish up-to-date, population-based prevalence estimates, predictors of risk and disease burden and a comprehensive overview of treatment patterns and course for paediatric patients with AD. Methods: Dispensed prescriptions for the entire Norwegian child population aged 0–10 years from 2014 to 2020 were analysed. Results: There were 176,458 paediatric patients with AD. Of these, 99.2% received topical corticosteroids, 5.1% received topical calcineurin inhibitors, 37.1% received potent topical corticosteroids and 2.1% received systemic corticosteroids. Of the 59,335 live births in Norway (2014), 14,385 [24.8%; 95% confidence interval (CI) 24.5–25.1] paediatric patients were treated for AD before the age of 6 years, and of these, only 934 (6.5%; 95% CI 6.1–6.9) received medication annually for 5 years or more. Compared with girls, 17.9% (95% CI 6.5–27.9) more boys were treated for at least 5 years, receiving 6.4% (95% CI 1.2–11.3) more potent topical corticosteroids, and 12.4% (95% CI 6.5–18.0) more were treated for skin infections. Compared with patients with late-onset treatment, 18.9% (95% CI 7.5–29.0) more paediatric patients with early-onset treatment were still receiving treatment at 5 years of age, 15.7% (95% CI 7.1–23.4) more received potent topical corticosteroids and 44.4% (95% CI 36.5–51.2) more were treated for skin infections. Conclusion: Most paediatric patients were treated for a mild disease for a limited period. Although the prevalence of AD is higher at a younger age, these paediatric patients were the least likely to receive potent topical corticosteroids. Male sex and early-onset AD are associated with and are potential predictors of long-term treatment and treatment with potent topical corticosteroids, antihistamines and skin infections, which may have clinical utility for personalised prognosis, healthcare planning and future AD prevention trials. Supplementary Information: The online version contains supplementary material available at 10.1007/s13555-022-00754-6.
The literature on treatment patterns and disease severity, particularly in paediatric patients under 2 years of age, is sparse and is rarely based on real-world data. Further details on predictors of risk are needed to better facilitate interventions that may halt this epidemic rise of atopic dermatitis in our paediatric populations.
Covering an entire nation of children up to 10 years of age, we sought to establish up-to-date, population-based prevalence estimates and predictors of risk and disease burden and a comprehensive overview of treatment patterns and course for paediatric patients with atopic dermatitis.
What was learned from the study?
We found that male sex and early-onset atopic dermatitis are associated with and are potential predictors of long-term treatment and treatment of potent topical corticosteroids, antihistamines and skin infections.
The highest burden of AD was evident during the first years of life, with a peak prevalence at 1 year of age. We encourage further research to investigate the presumed unmet therapeutic needs of this vulnerable patient group and the applicability of current guidelines, particularly in paediatric patients under 2 years of age.
INTRODUCTION
Atopic dermatitis (AD) causes the most significant burden of disability in the global context of skin diseases and substantial morbidity, including pruritus and reduced personal and family quality of life [1].
Although AD has lagged behind psoriasis in treatment development, a broader therapeutic landscape for AD has emerged in recent years. In the nineteenth century, conventional treatment mainly comprised ointments [2]. Systemic and topical corticosteroids (TCSs) were introduced in the 1950s [3]. Immunosuppressive agents, such as cyclosporine and azathioprine, became available treatment options in the mid-1990s [4]. More recently, second-line systemic options for AD (e.g. JAK inhibitors) have gained a place in therapy but are rarely used and are not approved for young paediatric patients.
The heterogeneity of the clinical picture and disease course of patients with AD indicates a complex reality and uncertain disease trajectories. The literature on treatment patterns and disease severity, particularly in patients under 2 years of age, is sparse [5,6]. A review by Siegfried et al. [6] revealed limited data on long-term and combination treatment, treatment of severe AD, and systemic corticosteroids in children [6]. In addition, methodological divergence and variation in assessments of AD signs, participants, clinical settings and countries studied were evident.
National health registers from the Nordic countries provide valid, real-world epidemiological data, identifying patients at the individual level on the basis of dispensed prescriptions for disease-specific medications [7][8][9]. Unique person identifiers and a nationwide sample size provide advanced access to comparative longitudinal data that enable large-scale nationwide cohort studies. We conducted a study covering all paediatric patients who were dispensed prescriptions for AD-specific medication up to the age of 10 years from 2014 to 2020 using a novel dataset. The primary objective was to obtain a comprehensive, up-to-date overview of prescription-based treatments in paediatric AD. Our secondary objective was to identify treatment patterns and how these relate to long-term and potent topical AD treatment.
Ethics Approval and Consent
The observational study was conducted between January 2020 and October 2021. The study was approved by the Regional Committees for Medical and Health Research Ethics.
Study Population
The study covers an annual child population from birth to age 10, consisting of 683,468 children in 2014 and decreasing to 672,188 in 2020. A total of 59,335 children born in 2014 were followed for 6 years. All children residing in Norway aged 0 to 10 years who had received AD-specific medication [TCSs or topical calcineurin inhibitors (TCIs) or both for external use] were followed up from 1 January 2014 to 31 December 2020.
Registers and Coding Classifications
The nationwide Norwegian Prescription Database (NorPD) holds a unique encrypted personal identifier for all prescriptions dispensed by pharmacies to the Norwegian population, enabling us to track dispensing at the individual level over time.
A unique pseudonym replaced the patient number ID. Patient characteristics included age, month and year of birth, date of death, sex, dispense date, generic drug name and ATC codes. Reimbursable prescriptions included codes from the International Statistical Classification of Diseases, Tenth Revision (ICD-10) and the International Classification of Primary Care, version two (ICPC-2) [10].
In Norway, reimbursable prescriptions are issued for chronic diseases. Population statistics were obtained from Statistics Norway.
Algorithm for Identifying Paediatric Patients with Atopic Dermatitis Treatment
Patients were considered to have AD if they met at least one requirement for either criterion 1 or 2 (see the code sketch after the list below).
1. Criterion 1, on the basis of medical diagnoses: patients with recorded reimbursement prescriptions including the associated disease-specific diagnoses of "atopic dermatitis/eczema", recorded as ICD-10 (L20) or ICPC-2 (S87).
2. Criterion 2, on the basis of disease-specific medication dispensed: patients with non-reimbursable prescriptions (no AD diagnosis as in criterion 1) were considered to have AD if, within 1 year, the child had either:
• ≥2 prescriptions of TCSs (minimum 14 days apart), or
• ≥1 prescription of TCIs.
3. Non-AD criteria: patients classified under criterion 2, with co-occurring ICD-10/ICPC-2 skin diagnoses (which could lead to identical treatments) or co-occurring skin disease-specific medications (primarily prescribed for other conditions), were not considered to have AD.
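As a minimal sketch of how this classification could be implemented, assuming a hypothetical per-child record layout (a list of dispensings, each with an ATC-derived drug group, a dispense date and an optional reimbursement diagnosis; none of these field names come from the NorPD schema):

```python
from datetime import timedelta

AD_DIAGNOSES = {"L20", "S87"}  # ICD-10 and ICPC-2 codes for atopic dermatitis/eczema

def meets_ad_criteria(dispensings, has_non_ad_flags=False):
    """Classify one child per criteria 1-3; `dispensings` is a list of dicts
    with keys "atc_group" ("TCS"/"TCI"), "date" and optional "diagnosis"."""
    # Criterion 1: any reimbursable prescription carrying an AD diagnosis.
    if any(d.get("diagnosis") in AD_DIAGNOSES for d in dispensings):
        return True
    # Criterion 3: co-occurring non-AD skin diagnoses/medications veto criterion 2.
    if has_non_ad_flags:
        return False
    # Criterion 2: within 1 year, >= 1 TCI prescription, or >= 2 TCS
    # prescriptions at least 14 days apart.
    if any(d["atc_group"] == "TCI" for d in dispensings):
        return True
    tcs_dates = sorted(d["date"] for d in dispensings if d["atc_group"] == "TCS")
    return any(
        timedelta(days=14) <= later - earlier <= timedelta(days=365)
        for i, earlier in enumerate(tcs_dates)
        for later in tcs_dates[i + 1:]
    )
```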
The online Supplementary Material provides further explanations of the algorithm employed.
Categorising Paediatric Patients based on the Potency of TCSs
Patients were categorised into three levels on the basis of the highest potency of TCS treatment received (with or without TCIs, systemic treatment including corticosteroids, immunosuppressants, calcineurin inhibitors, folic acid analogues and interferons). Level 1 was defined as patients treated exclusively with weak TCSs (group I). Level 2 was defined as moderate TCSs (group II). Level 3 was defined as potent or very potent TCSs (group III/IV). A more potent TCS class overruled a less potent one.
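The overruling rule is effectively a maximum over the dispensed TCS classes; a short sketch (the group labels simply restate the level definitions above):

```python
# Map dispensed TCS class to the study's potency levels; groups III and IV
# both count as level 3 (potent/very potent).
GROUP_TO_LEVEL = {"I": 1, "II": 2, "III": 3, "IV": 3}

def potency_level(tcs_groups_dispensed):
    """Highest potency level among all TCS classes a patient received."""
    return max(GROUP_TO_LEVEL[g] for g in tcs_groups_dispensed)

assert potency_level(["I", "I", "II"]) == 2  # moderate overrules weak
assert potency_level(["I", "IV"]) == 3       # very potent overrules all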
Statistical Analysis
We used Poisson regression based on the algorithm to calculate the 1-year prevalence of dispensed drugs with a 95% confidence interval (CI). Data were stratified by age and sex, early and late treatment initiation, and years of treatment. The dataset was adjusted for sex differences within the population. Descriptive statistics were reported as mean, standard deviation (SD) or median for continuous variables, and as frequency (per cent) for categorical variables. Differences between rates were tested with a chi-square test. P < 0.05 (2-sided test) was considered statistically significant.
The annual prevalence (based on age) was measured as the number of individuals receiving at least one prescription of AD medication per age. Continuous assessments of the same individuals over time can occur. The denominator, obtained from Statistics Norway, was presented by sex and age on the basis of the midyear Norwegian population each year.
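As an illustration of the Poisson-based prevalence estimate with a 95% CI, the sketch below fits an intercept-only Poisson model with the log midyear population as an offset; the counts are hypothetical, and statsmodels stands in for the Stata workflow mentioned later:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: treated children and midyear population at one age.
treated = np.array([1100])
population = np.array([10000])

# Intercept-only Poisson GLM with log(population) offset: exp(intercept) is
# the prevalence, and exponentiating the Wald CI gives the CI for prevalence.
fit = sm.GLM(treated, np.ones((1, 1)), family=sm.families.Poisson(),
             offset=np.log(population)).fit()
prev = np.exp(fit.params[0])
ci_lo, ci_hi = np.exp(fit.conf_int()[0])
print(f"prevalence {prev:.3%} (95% CI {ci_lo:.3%}-{ci_hi:.3%})")
```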
The 2014 birth cohort was stratified by index age (baseline age), defined as the age of exposure (dispensed prescription of AD medication). Patients with an index age of 0-6 months were set as the reference group to stratify and assess the treatment pattern and predictor of severity and long-term AD treatment. The 2014 birth cohort was divided into four cohorts (6-month periods) according to index age. Patients with an early index age (0-6 months) were compared with patients with a late index age (18-24 months). Age-and sex-specific analyses and analyses of the strength of TCSs dispensed were performed.
The total number of years of AD treatment was analysed to assess the duration of treatment. We also analysed the number of patients receiving regular AD treatment (persistence), with a maximum interval of 1 year between redemptions for at least 2 years, from index age to 6 years of age.
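One way the persistence rule could be operationalised from dispensing dates (a sketch; reading "at least 2 years" as the span between the first and last redemption is our assumption):

```python
from datetime import date

def is_persistent(dispense_dates, index_date, end_of_age_6):
    """Regular AD treatment: redemptions spanning >= 2 years from the index
    age with no gap longer than 1 year, within the follow-up window."""
    dates = sorted(d for d in dispense_dates if index_date <= d <= end_of_age_6)
    if len(dates) < 2 or (dates[-1] - dates[0]).days < 2 * 365:
        return False
    return all((b - a).days <= 365 for a, b in zip(dates, dates[1:]))

# Annual redemptions over three years -> persistent.
redemptions = [date(2014 + y, 1, 1) for y in range(3)]
print(is_persistent(redemptions, date(2014, 1, 1), date(2020, 12, 31)))  # True
```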
Days of follow-up were defined as the period between indexation (index age) and the first date of emigration, death or the cut-off date of the NorPD data (31 December 2020), whichever occurred first. Data were analysed using Stata/MP software (version 17.0; StataCorp LLP).
Prescription and Patient Selection
From 2014 to 2020, 176,458 patients were treated for AD according to the algorithm. Overall, 90.7% (160,022) of the included patients had a physician-issued reimbursable prescription and associated AD diagnoses (criterion 1). There were 589,687 topical AD medication prescriptions (317,593 dispensed to 92,436 boys and 262,094 to 84,022 girls). The median observation period per child was 38.2 months (interquartile range 16.0-60.0).
Treatment Patterns in Paediatric Patients Aged 0-10 Years (Table 1)
The period prevalence for all ages combined was 6.7% (95% CI 6.7-6.8). The statistics displayed a significant preponderance of boys receiving AD treatment. There were no significant differences between the sexes after the age of 4. The average number of prescriptions dispensed per year per child and the mean number of grams of prescribed topical treatment indicated a steady decline with increasing age.
Almost all patients were dispensed TCSs. Only 1435 patients (0.8%; 95% CI 0.8-0.9) were prescribed TCIs as single therapy (excluding other topical therapies). Only a minority were prescribed very potent (group IV) TCSs. The number of patients receiving potent TCSs (group III) and TCIs increased with age.
The analysis revealed that (387/6658 girls compared with 547/7727 boys) 17.9% (95% CI 6.5-27.9) more boys than girls received at least 5 years of dispensed AD treatment. However, there was no statistically significant difference between the sexes in terms of persistence (regular redemption of AD treatment). Accordingly, (2428/6658 girls compared with 3010/7727 boys) 6.4% (95% CI 1.2-11.3) more boys than girls received potent/very potent TCSs. In addition, 12.4% (95% CI 6.5-18.0) more boys were treated for skin infections (at least one of the following: weak TCSs in combination with antibiotics or topical antibiotics, antiseptics or disinfectants). We found that (1956/6658 girls compared with 2412/7727 boys) 5.9% (95% CI 0.1-11.3) more boys than girls received antihistamines before the age of 6, and (122/6658 girls compared with 195/7727 boys) 27.4% (95% CI 9.0-42.1) more boys received systemic corticosteroids. Overall, (596/2505 early index age compared with 347/1799 late index age) 18.9% (95% CI 7.5-29.0) more patients with an early index age (0-6 months) were still receiving AD treatment at 5 years of age compared with patients with a late index age (18-24 months). In addition, 15.7% (95% CI 7.1-23.4) more patients with an early index age received potent TCSs, and 44.4% (95% CI 36.5-51.2) more were treated for skin infections. When we analysed antihistamines received before the age of 6, we found (964/2505 early index age compared with 559/1799 late index age) 20.9% (95% CI 12.2-28.7) more patients with an early index age than a late index age. We also found an increased rate of prescribed systemic corticosteroids in patients with an early index age. However, the results were not statistically significant.
DISCUSSION
Globally, to our knowledge, this is the only nationwide study to quantify paediatric AD disease-specific prescriptions and provide a real-world overview of prevalence, treatment patterns, course and predictors, including subgroup characteristics.
The prevalence of Norwegian children receiving AD treatment decreased with age, ranging from 11% at age 1 to under 5% at age 10 (Table 1). The decline was expected on the basis of the commonly observed disease course of early AD onset, followed by improvement in adolescence. In a 2020 US claims data analysis of paediatric patients with AD, Paller et al. [5] observed that most patients receiving AD treatment were 0-1 years old. This study confirms their findings regarding the high prevalence of early-onset treatment.
In the 2014 birth cohort, most patients received short-term treatment of TCSs/TCIs. Around 7% of the patients received AD medication annually for 5 years (or longer) and merely one in four patients were still receiving AD treatment at age 5. In a review of 45 studies involving 110,651 children, the authors found that 80% of childhood AD did not persist by age 8 [11], which underlines our findings.
A substantial proportion of young patients with flexural and facial skin involvement may explain why approximately 80% of them, in the first year of life, received weak or moderately potent TCSs as the highest potency, dropping to 50% by age 10. This finding, together with the general short-term need for AD treatment, is consistent with current knowledge that AD is a mild disease in the majority of cases [5,[12][13][14].
The number of TCI prescriptions increased with age, accompanied by more potent TCSs, confirming the findings in the US study. A minority of patients received very potent TCSs or systemic therapy. Four out of ten patients received (at least once before the age of 10) potent or very potent TCSs, indicating moderate to severe disease. Since AD is not treated with systemic therapy alone, the prevalence of severe AD before the age of 6 is estimated to be 9.2% in the 2014 birth cohort (according to the proxy). A recent study by Silverberg et al. estimated the proportion of severe AD in 18 countries to be 3.1%-11.0% (except Israel; 24.9%) in children under 6 years of age [15]. These paediatric patients with severe and complex disease are potential candidates for future systemic medication.
Although AD is a chronic disease, the age of onset and disease expression varies across individuals and within seasons. In addition, AD often runs within families. Although it is not recommended, familial sharing of prescribed medication does occur. Moreover, patients may fill their prescriptions just before their birthday, which means they might have received sufficient medication for the following year. If we account for the age of onset (of AD treatment) and frequency of redemption (at least every second year), the proportion of patients receiving regular AD treatment was roughly 13%, which could reinforce the assumption of generally poor adherence in patients with AD [16].
In the 2014 birth cohort, nearly one in five patients received early-onset AD treatment (index age: 0-6 months). We suggest that early-onset AD treatment is associated with significantly more severe AD patterns. Overall, patients with early-onset AD treatment were treated with more TCSs and TCIs (higher number of prescriptions and number of grams), they received prescriptions more regularly with more potent (or very potent) TCSs, and more were treated for skin infections. In addition, significantly more patients with early-onset AD treatment were still receiving AD treatment at age 5 compared with patients with late-onset AD treatment (index age: 18-24 months). A Danish study of AD disease severity in paediatric patients found that early-onset AD (<1 year of age) was associated with more severe disease [17]. Previous research suggests that patients with early-onset AD have a significantly higher frequency of filaggrin loss-of-function mutations, increased AD duration and hospitalisation, inadequate disease control and increased persistence [18,19]. All these studies are consistent with our findings.
The highest burden of AD was evident during the first years of life, with a peak prevalence at 1 year of age. A Danish study [20] concluded that children with AD had the highest disease burden in the second year of life. In the present study, the mean annual number of grams of AD medication per child hardly decreased with age. However, more medication was prescribed to the youngest patients relative to body size. Moreover, the highest annual number of prescriptions and the highest number of combination treatments and skin infection treatments were associated with early-onset AD treatment, confirming our findings that AD is a more common and severe condition in the first years of life [20]. Although the prevalence and burden of AD is substantially higher at a younger age, these patients were the least likely to receive potent TCSs [5]. Overall, treatment with more potent TCSs could lead to more rapid skin improvement and disease control, ultimately resulting in fewer TCSs being used overall and fewer physician visits and prescriptions (implying it is also more cost effective) [21][22][23][24][25][26]. In conclusion, guidelines on the potency of medications, adapted to disease severity and the anatomical site of application according to age, especially in patients under 2 years of age, need to be more specific [27]. This could enhance the potential to treat young paediatric patients more effectively and safely.
The preponderance of boys receiving early-age AD treatment reflects previous research [28][29][30]. A recent Norwegian study showed that male sex was predictive of high transepidermal water loss at 3 months of age [31]. According to another recent review, the point prevalence of AD in girls was 24% compared with 35% in boys before age 1. In school-aged children, the prevalence was around 11% in girls and 8% in boys [32]. In addition, the Danish study concluded that disease severity was associated with male sex, which is consistent with our finding that increased prescription of potent/very potent TCSs is associated with a prolonged disease course and increased risk for skin infections.
There are notable discrepancies in the literature regarding paediatric patients treated with TCIs. In a review by Siegfried et al., TCI treatment ranged from 0% to 52% [6]. Accordingly, we found that only 5% of the patients received TCIs, consistent with Paller et al. The 2005/2006 warnings about the long-term effects (i.e. lymphoma) may have led to less frequent prescribing of TCIs [33]. In addition, the preparations are expensive and not approved as a reimbursement prescription (although individual reimbursement can be granted).
While mainly prescribed for allergies, antihistamines were commonly prescribed for AD. The proportion of dispensed antihistamines was significantly higher in patients with early-onset AD treatment than late-onset treatment. Notably, early-onset AD is associated with a higher risk of seasonal allergies and asthma than late-onset AD [34]. However, another observational cohort study suggests that early-onset and early-resolving AD are not associated with the development of allergic disease at 3 years of age [35]. The published literature on early-onset and early-resolving AD is scarce, and the heterogeneity of AD needs further investigation.
Although systemic corticosteroids can lead to rapid clearing of AD, their use is limited owing to the side effects and the risk of severe rebound flare when discontinued [36]. The total proportion of patients dispensed systemic corticosteroids in the study population was low, close to 2% (a maximum estimate considering the number of prescriptions without ICD/ICPC coding). Furthermore, this figure is probably inflated by dispensing for the burden of comorbid asthma and hay fever. A more reasonable estimate would be around 1%, which contrasts with the high consumption (24%) in US paediatric patients recorded by Paller et al. As the course and severity of paediatric AD are likely to be similar in the USA and Norway, this result suggests that non-medical factors (e.g. extent of private health care and treatment traditions) play an essential role in clinical decisions.
In the US database study, prescribed systemic treatment, including immunosuppressants, calcineurin inhibitors, folic acid analogues and interferons ranged from 0.0% in patients aged 1 year to 0.3% in patients aged 10 years [5]. Such marginal prescribing might be rooted in the lack of robust long-term data on the effects of these drugs on paediatric patients [37].
Strengths and Limitations
The large sample size of the longitudinal individual-based novel dataset for the entire population of Norwegian children under the age of 11 ensures robustness with high significance and generalisability. Another strength is that children from all social strata are included in the study, as the social welfare system in Norway is free of charge for paediatric patients. Moreover, Norway has a high number of practising physicians who provide accessible healthcare throughout the country. Another major strength is the NorPD's complete coverage of all prescriptions dispensed by pharmacies to the Norwegian population, including all outpatients.
Topical hydrocortisone 1% is available over the counter and could affect sensitivity. However, the Norwegian welfare system provides reimbursement prescriptions (free of charge) for paediatric patients with chronic diseases such as AD. Consequently, an over-the-counter purchase is a more expensive option, and the analysed sample is expected to be representative [38]. Finally, although this study was performed retrospectively, the actual data were collected in a prospective fashion independent of the study itself, thus eliminating some of the inherent biases commonly identified for traditional retrospective studies (e.g. recall bias, information bias, interview bias, data collection biases and primary non-compliance) [39].

Several potential limitations should be discussed. Firstly, the prevalence of AD treatment is closely linked with outcome definitions and should be interpreted cautiously. Secondly, the correspondence between prescriptions dispensed and actual medication use is unknown and should be considered a maximum estimate. Conversely, the time between the first and last prescription received should be interpreted as a minimum estimate, as the time course of administration is unknown. Thirdly, TCSs are prescribed for a broad group of skin conditions, perhaps distorting the true picture of AD drug treatment in the study population [7,8]. Although often used as the gold standard in studies, physician-recorded diagnoses may lead to incorrect coding and interfere with the prescribing proportions' denominators. This study's algorithm was predominantly based on physician-recorded diagnoses (criterion 1); criterion 2 accounted for only 9.3% of the included patients. A validation study [7] found that two or more annual prescriptions of TCSs yielded a sensitivity of 40% and a positive predictive value of 60%. However, the non-AD criteria (criterion 3) increased the positive predictive value. Fourthly, 1.0% of prescriptions lacked identification numbers and were excluded. However, AD is defined as a chronic disease, and a paediatric AD patient would presumably have received prior or subsequent medical treatment. It is therefore conceivable that the majority of the excluded prescriptions belong to included patients.
Finally, this study does not address carbamide (urea) creams. Dupilumab was licensed in 2020 for patients over 12 years of age, and crisaborole was not a licensed AD treatment during the study period. Moreover, this study does not include phototherapy and climate therapy under the auspices of the public health service.
CONCLUSIONS
In this nationwide real-world registry study, all topical and systemic medications dispensed were documented up to the age of 10 years.
We found that AD was a mild and short-term condition in most paediatric patients. Only a minority of the patients received potent TCSs. Male sex and early-onset AD are associated with, and are potential predictors of, long-term treatment, treatment with potent topical corticosteroids and antihistamines, and treatment of skin infections. Systemic treatments such as corticosteroids, immunosuppressants, calcineurin inhibitors, folic acid analogues and interferons were marginally prescribed.
There is a need for real-world global knowledge transfer, learning from existing longitudinal treatment patterns in paediatric patients and how differences in treatment patterns are associated with the subsequent prevalence and course of AD in older patients. Although the recommended clinical guidelines were followed, we encourage further research to investigate the presumed unmet therapeutic needs of this vulnerable patient group and the applicability of current guidelines, particularly in paediatric patients under 2 years of age.

Author Contributions. … full access to the data and takes responsibility for data integrity and accuracy of the data analysis. Study concept, methodology and design: all authors. Acquisition, analysis and interpretation of data: all authors. The first draft of the manuscript was written by Cathrine Helene Mohn and Jon Anders Halvorsen. Critical revision of the manuscript for important intellectual content: all authors. Statistical analysis: Cathrine Helene Mohn. Administrative, technical or material support: all authors. Study supervision: Jon Anders Halvorsen.

Disclosures. Jon Anders Halvorsen has financial/personal connections to AbbVie and Celgene. Cathrine Helene Mohn has received a research grant from Sanofi Genzyme. Hege S. Blix, Anja Maria Braend, Per Nafstad and Ståle Nygard have nothing to disclose.
Compliance with Ethics Guidelines. The observational study was conducted between January 2020 and October 2021. The study was approved by the Regional Committees for Medical and Health Research Ethics.

Data Availability. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Addition of bevacizumab for malignant pleural effusion as the manifestation of acquired EGFR-TKI resistance in NSCLC patients
This study aimed to investigate the role of bevacizumab in patients with advanced non-small cell lung cancer (NSCLC) who had developed acquired resistance to EGFR-TKI therapy that manifested as malignant pleural effusion (MPE). In total, 86 patients were included: 47 patients received bevacizumab plus continued EGFR-TKIs and 39 patients received bevacizumab plus chemotherapy. The curative efficacy rate for MPE in the bevacizumab plus EGFR-TKIs group was significantly higher than that in the bevacizumab plus chemotherapy group (89.4% vs. 64.1%, respectively; P = 0.005). Patients in the bevacizumab plus EGFR-TKIs group had longer progression-free survival (PFS) than those in the bevacizumab plus chemotherapy group (median PFS 6.3 vs. 4.8 months, P = 0.042). While patients with acquired T790M mutation in the bevacizumab plus EGFR-TKIs group had a significantly longer PFS than those in the bevacizumab plus chemotherapy group (median PFS 6.9 vs. 4.6 months, P = 0.022), patients with negative T790M had similar PFS (median PFS 6.1 vs. 5.5 months, P = 0.588). Overall survival (OS) was similar between the two groups (P = 0.480). In multivariate analysis, curative efficacy was an independent prognostic factor (HR 0.275, P = 0.047). In conclusion, bevacizumab plus EGFR-TKIs could be a valuable treatment for NSCLC patients presenting with MPE upon resistance to EGFR-TKI therapy, especially for those with acquired T790M mutation.
INTRODUCTION
For patients with EGFR mutant non-small cell lung cancer (NSCLC), several trials have consistently demonstrated that EGFR-tyrosine kinase inhibitors (TKIs) such as gefitinib, erlotinib, afatinib and icotinib can result in better outcomes than standard platinum-based chemotherapy [1][2][3][4]. Unfortunately, most patients who initially respond to EGFR-TKIs will inevitably develop resistance within 1 year [5][6][7]. A previous study categorized the clinical failure modes of EGFR-TKIs into three groups, namely dramatic, gradual and local progression [8]. Dramatic progression was the most common failure mode (57.3%), and most of these cases are due to malignant pleural effusion (MPE) [8][9][10][11]. Although the recommended therapeutic strategy
MPE is the abnormal accumulation of fluid in the pleural space, which may eventually impair the normal function of the heart and be potentially life-threatening [14,15]. Currently, there are several management options for MPE, including tube drainage, chemical pleurodesis and the use of chemotherapeutic agents. However, the relapse rate can be as high as 50%. Recently, the recombinant anti-vascular endothelial growth factor (VEGF) monoclonal antibody bevacizumab has been shown to be efficient in suppressing the accumulation of pleural fluid [16]. Several other studies have evaluated the efficacy of bevacizumab combined with chemotherapy, and the results showed that bevacizumab plus chemotherapy could achieve a higher control rate (ranging from 60.8% to 83.3%) of MPE than chemotherapy alone and significantly alleviate symptoms [13,17,18].
Besides that, bevacizumab has also shown promising results in patients with EGFR-sensitizing mutations. The subgroup analysis from the Chinese registration study of bevacizumab, the BEYOND study, showed that bevacizumab plus carboplatin and paclitaxel achieved a PFS of 12.4 months in NSCLC patients with sensitizing EGFR mutations, which was significantly longer than the 7.9 months in the chemotherapy-alone group [19]. Several other studies also showed that bevacizumab plus EGFR-TKI yielded a significantly longer PFS than EGFR-TKI alone for NSCLC patients with EGFR mutations, with reasonable toxic-effect profiles [20][21][22]. The BELIEF study further suggested that bevacizumab plus EGFR-TKI seems to be preferentially effective in patients with T790M mutation, as presented at ESMO 2015 [23]. Hence, we hypothesize that bevacizumab plus EGFR-TKI might be used as a rational therapeutic option for NSCLC patients who developed acquired resistance to EGFR-TKIs presenting as MPE.
To validate our hypothesis, we retrospectively analyzed the therapeutic effect of bevacizumab in 86 Chinese EGFR-mutant NSCLC patients. We compared bevacizumab plus continued EGFR-TKIs versus bevacizumab plus switched chemotherapy as the subsequent treatment for MPE as the manifestation of acquired resistance to EGFR-TKIs. In addition, the difference was further compared based on the status of acquired EGFR T790M mutation.
Patient characteristics
A total of 86 patients who developed acquired resistance manifesting as MPE were included in this study. Of these, 47 received bevacizumab plus continued EGFR-TKI and 39 received bevacizumab plus switched chemotherapy. The baseline characteristics of the patients are listed in Table 1. The median patient age was 59 years (range, 45-78), and more than half of the patients were female (N = 56, 65.1%) and never-smokers (N = 63, 73.3%). All patients had histologically proven adenocarcinoma of the lung. The demographics, including age, sex, smoking status, ECOG PS score, histological classification, EGFR mutation type, previous EGFR-TKI therapy and lines of treatment, were similar between the two groups. Three EGFR-TKIs were used in the study: gefitinib (55.8%), erlotinib and icotinib (Supplementary Table S1). In total, 44 patients showed EGFR T790M mutation. Among them, 23 received bevacizumab with continued EGFR-TKI treatment and 21 received bevacizumab plus switched chemotherapy (Figure 1).
DISCUSSION
As far as we know, this is the first study to assess the therapeutic effect of bevacizumab in NSCLC patients who presented with MPE as the manifestation of acquired resistance to EGFR-TKI. We found that the addition of bevacizumab was effective in controlling MPE in NSCLC patients after failure of EGFR-TKI therapy. Moreover, we found that bevacizumab plus continued EGFR-TKI significantly improved the curative efficacy for MPE and PFS, especially in patients with T790M mutations, which suggests that bevacizumab plus continued EGFR-TKI could be considered a proper option for EGFR-TKI acquired resistance presenting mainly as MPE.
MPE is one of the common progression modes in advanced NSCLC patients with EGFR mutation receiving EGFR-TKIs and most often represents a poor prognosis [16]. The current treatment options for NSCLC patients with MPE involve tube drainage, chemical pleurodesis and intrapleural administration of chemotherapeutic agents, among others [17,24]. However, the clinical outcome of these therapies is inconsistent. A previous study showed that VEGF is an essential mediator in the formation of pleural effusions [25]; it can promote the formation of MPE by increasing vascular permeability, stimulating the proliferation of vascular endothelial cells, promoting the efflux of plasma proteins and activating enzymes that degrade the extracellular matrix [16,25]. Some studies have also demonstrated that bevacizumab-based chemotherapy can suppress MPE significantly more than chemotherapy alone [13,[16][17][18]. In a prospective study in which patients with NSCLC-induced MPE were randomly assigned to receive bevacizumab plus cisplatin or cisplatin alone, the curative efficacy in the bevacizumab group was significantly higher than that in the cisplatin group (83.3% vs. 50.0%, P < 0.05) [16]. Another phase II study also confirmed that bevacizumab plus chemotherapy had a significant effect on MPE control in NSCLC patients: the MPE control rate was 91.3% in the bevacizumab with carboplatin and paclitaxel group versus 78.3% in the carboplatin and paclitaxel group (P = 0.08) [17]. The present study was the first to assess the efficacy of bevacizumab plus continued EGFR-TKI or switched chemotherapy in NSCLC patients who developed EGFR-TKI acquired resistance presenting as MPE. Our results demonstrated that the addition of bevacizumab to EGFR-TKI or chemotherapy achieved a similar overall MPE control rate (90.7%). Moreover, our study further demonstrated that the addition of bevacizumab to EGFR-TKI was more effective than its addition to chemotherapy in EGFR-mutant NSCLC patients who developed EGFR-TKI acquired resistance presenting as MPE.
Theoretically, bevacizumab in combination with EGFR-TKIs might improve the anti-tumor effect because they target different tumor growth pathways (angiogenesis and EGFR activity, respectively). A previous study reported that combined blockade of the VEGF and EGFR pathways could abrogate both primary resistance to EGFR-TKIs and acquired resistance due to T790M mutation [26]. The effect of bevacizumab plus EGFR-TKIs as first-line therapy in patients with advanced NSCLC harboring EGFR mutations has been demonstrated in phase II trials [19][20][21]. In the JO25567 trial, the addition of bevacizumab to erlotinib significantly prolonged PFS in NSCLC patients with EGFR mutation compared to erlotinib alone (median PFS: 16.0 vs. 9.7 months; P = 0.0015) [20]. The effect of bevacizumab plus EGFR-TKI was also demonstrated in the Okayama Lung Cancer Study Group Trial 1001, which suggested that bevacizumab plus gefitinib could achieve a PFS of 14.4 months [21]. Interestingly, our previous phase III BEYOND trial showed that bevacizumab plus carboplatin and paclitaxel could achieve a PFS of 12.4 months in patients with non-squamous NSCLC and EGFR mutation [19], which was similar to the historical PFS data for EGFR-TKIs as first-line therapy in NSCLC patients with sensitizing EGFR mutations [2,3]. Furthermore, the ASPIRATION study prospectively assessed continuing EGFR-TKI (erlotinib) beyond progression and showed that patients could have a median PFS benefit of 3.1 months after initial progression [27]. Taken together, these studies demonstrated that the addition of bevacizumab to continued EGFR-TKI might be used as a rational therapeutic option for NSCLC patients who developed acquired resistance to EGFR-TKIs presenting as MPE.
In the current study, our results further demonstrated that bevacizumab plus EGFR-TKI or chemotherapy was also effective in NSCLC patients who presented with MPE as the manifestation of acquired resistance to EGFR-TKI. Furthermore, we demonstrated that bevacizumab plus EGFR-TKI can have superior efficacy to bevacizumab plus chemotherapy, suggesting that bevacizumab plus continued EGFR-TKIs is a rational treatment for MPE upon resistance, which warrants validation in large-scale, randomized clinical trials.
Our study further performed a subgroup analysis based on T790M mutation status. Compared with bevacizumab plus switched chemotherapy, bevacizumab plus continued EGFR-TKIs significantly prolonged PFS in patients with the T790M mutation but not in those without this particular mutation. T790M is one of the most common mechanisms of acquired resistance to first-generation EGFR-TKIs. To date in China, there is still no standard therapy for patients with T790M-mediated EGFR-TKI resistance, although several new agents that target T790M, such as CO-1686 and AZD9291, are being investigated in phase III trials (e.g. AURA3) and have shown promising results in phase I/II trials [6,28]. In a preclinical study, Naumov et al. reported that EGFR-TKI resistance could be associated with VEGF elevation in both the tumor cells and host stroma, and that combined blockade of the VEGF receptor and EGFR pathways could abrogate both primary resistance to EGFR-TKIs and acquired resistance due to T790M mutation [26]. At the 2015 European Cancer Congress (ECC), R.A. Stahel et al. reported that patients with EGFR T790M mutation who received bevacizumab plus erlotinib had a significantly longer PFS than those without EGFR T790M mutation (median PFS 16.0 vs. 10.5 months). Almost all of the subgroup analyses suggested that patients with EGFR T790M mutation tended to have better PFS. Intriguingly, Furugaki et al. reported in xenograft models that bevacizumab plus erlotinib did not enhance antitumor activity in erlotinib primary-resistant tumors with T790M mutation but did enhance tumor cell killing when the tumors could still be suppressed by erlotinib [29]. This suggests that dependency on EGFR signaling is likely required to achieve the maximal combination effect of EGFR-TKI and bevacizumab, partly because VEGF signaling shares some common downstream effectors with EGFR signaling; therefore, mechanisms conferring primary resistance to EGFR-TKI might impair the response to VEGF inhibition as well. The better combination effect observed in patients with acquired T790M mutation is probably because their cancer cells still rely heavily on EGFR and its downstream signaling for growth and survival. Further prospective trials are needed to define whether bevacizumab plus EGFR-TKIs has clinical value for patients with primary resistance to EGFR-TKI who harbor a de novo T790M mutation.
We must mention several limitations of this study. Firstly, the sample size is small, and the retrospective nature of the study may have introduced collection bias. Secondly, not all of the patients with EGFR mutations received EGFR-TKIs as first-line treatment, which inevitably introduces imbalance into the OS analysis. Thirdly, most of the patients had their EGFR T790M mutation diagnosed using pleural effusion, which might be inconsistent with findings from tumor tissues. Last but not least, the majority of the included patients received treatment after third-line therapy, but we did not collect details of the subsequent treatments. This would result in a bias when performing the OS analysis, which should be acknowledged.
In conclusion, we found that bevacizumab was effective in controlling MPE in patients who developed acquired resistance to EGFR-TKI. Bevacizumab plus continued EGFR-TKI resulted in better effusion control and longer PFS than bevacizumab plus switched chemotherapy, especially for patients harboring the acquired EGFR T790M mutation. This observation suggests that bevacizumab plus continued EGFR-TKI should be considered a proper regimen for patients who have failed first-line or second-line EGFR-TKI therapy due to MPE.
Study population
We retrospectively reviewed the medical records of advanced NSCLC patients with sensitizing EGFR mutations (either exon 19 deletion or Leu858Arg mutation) who had received bevacizumab therapy after the acquisition of resistance to EGFR-TKI therapy (gefitinib, erlotinib or icotinib) between December 2011 and December 2015. Patients who met the following criteria were included: 1) age > 18 years; 2) cytologically or histologically confirmed advanced NSCLC with EGFR sensitizing mutations (e.g. exon 19 deletion or L858R); 3) chest X-ray, ultrasonography or computed tomography (CT) scan showing newly developed or increased large areas of unilateral or bilateral pleural effusion or polyserositis; 4) malignant tumor cells found in the pleural fluid to confirm MPE; 5) no previous treatment with bevacizumab. All patients received gefitinib, erlotinib or icotinib orally at the recommended dose, either as first-line therapy or after first-line standard chemotherapy. Patients who received another line of chemotherapy upon resistance to first-line therapy were excluded. Once patients developed acquired resistance to EGFR-TKIs due to MPE, their pleural fluid (50 mL) was drained for cytological evaluation, and molecular mutation detection was performed once cancer cells were found in the pleural fluid. Patients who received other local therapies, such as talc pleurodesis or intrapleural chemotherapy, were excluded from this study. Considering that bevacizumab is still not covered by the health insurance system in China and that the AVAiL study showed that bevacizumab at 7.5 mg/kg achieved a PFS similar to that at 15 mg/kg, bevacizumab was administered at 7.5 mg/kg, initially by intrathoracic or intravenous injection and then intravenously on day 1 of each 21-day cycle until progressive disease (PD). Major clinicopathological characteristics, including demographic information, Eastern Cooperative Oncology Group performance status (ECOG PS), smoking history, clinical staging [30] and lung cancer histology (WHO classification) [31], were collected. Never smoking was defined as < 100 cigarettes in a lifetime. Smoking status, ECOG PS and age were evaluated at the time of diagnosis. The response to treatment was recorded in accordance with the Response Evaluation Criteria in Solid Tumors guidelines (version 1.1), and survival was estimated via the Kaplan-Meier method. The study was conducted with the approval of the ethics committee of Shanghai Pulmonary Hospital, and written informed consent was obtained from each participant to use his or her clinical information for research analysis. We confirm that all methods were performed in accordance with the relevant guidelines and regulations.
Treatment and response evaluation
After the development of resistance to EGFR-TKI therapy manifesting mainly as MPE, the eligible patients received bevacizumab plus continuation of EGFR-TKIs or switched to chemotherapy plus bevacizumab as the 2nd- or 3rd-line treatment. This is a retrospective study, and the therapeutic regimen was assigned according to the agreement between the patients' decision and the thoracic oncologists' consultation. The chemotherapy regimen was identified according to the doctors' experience and the patients' wishes, economic situation and performance status. The evaluation of MPE control was determined according to previous studies [13,16,32]. Briefly, complete remission (CR) meant the accumulated fluid had disappeared and remained stable for at least four weeks; partial remission (PR) was defined as when >50% of the accumulated fluid had disappeared, symptoms had improved, and the remaining fluid did not increase for at least four weeks; remission not obvious (NC) was considered when <50% of the accumulated fluid had disappeared; PD was considered when the accumulated fluid had increased. The curative efficacy for MPE was calculated as the sum of CR and PR. Baseline assessments were usually performed within 2 weeks of starting treatment after thoracentesis. A chest CT scan was performed every 2 cycles (6 weeks) in routine clinical practice or otherwise as symptoms indicated. Responses were confirmed by subsequent CT scans performed 4 to 6 weeks after the initial response documentation.
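A sketch of the effusion-response mapping and the curative efficacy calculation, taking as hypothetical inputs the fraction of baseline fluid that resolved and whether the response lasted at least four weeks:

```python
def mpe_response(fraction_resolved, durable_4wk=True):
    """Map effusion change to CR/PR/NC/PD per the study definitions."""
    if fraction_resolved < 0:
        return "PD"  # accumulated fluid increased
    if fraction_resolved >= 1.0 and durable_4wk:
        return "CR"  # fluid disappeared and stable for >= 4 weeks
    if fraction_resolved > 0.5 and durable_4wk:
        return "PR"  # > 50% resolved, no regrowth for >= 4 weeks
    return "NC"      # < 50% resolved (remission not obvious)

def curative_efficacy(responses):
    """Curative efficacy for MPE = (CR + PR) / all evaluated patients."""
    return sum(r in ("CR", "PR") for r in responses) / len(responses)

print(curative_efficacy(["CR", "PR", "NC", "PD", "PR"]))  # 0.6
```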
EGFR mutation analysis
All mutational analyses were performed at the Tongji University Medical School Thoracic Cancer Institute, Shanghai. Briefly, DNA was extracted using the DNeasy Blood and Tissue Kit or the QIAamp DNA FFPE Tissue Kit (both from Qiagen, Hilden, Germany). EGFR mutations were tested by the amplification refractory mutation system (ARMS), as described in our previous studies. The kits were obtained from Amoy Diagnostics Co. Ltd., Xiamen, China.
Statistical methods
Categorical variables were compared using chi-square tests, or Fisher's exact tests when necessary. Student's t-test was used to compare continuous variables, such as means, between the two groups. OS (overall survival) was calculated from the date of lung cancer diagnosis to the date of death from any cause or was censored at the last follow-up date. PFS (progression-free survival) was defined as the time from the date of the start of treatment to the date of documented disease progression, death from any cause, or the last follow-up. Kaplan-Meier estimates were used in the analysis of the time-to-event variables, and the 95% confidence interval (CI) for the median time to event was calculated. The log-rank test was used to compare cumulative survival between the two groups. A Cox proportional hazards model was used for uni- and multivariate survival analyses to calculate hazard ratios (HR) and the corresponding 95% CIs. P values are two-sided and were considered significant when less than 0.05. All statistical analyses were performed using SPSS statistical software, version 20.0 (SPSS Inc., Chicago, IL, USA).
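For readers who prefer an open-source route, a sketch of the same pipeline in Python with the lifelines library (the study itself used SPSS; the PFS values below are hypothetical):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: PFS (months), event indicator and arm
# (1 = bevacizumab + EGFR-TKI, 0 = bevacizumab + chemotherapy).
df = pd.DataFrame({"pfs": [6.3, 4.8, 4.1, 5.5, 6.9, 4.6],
                   "event": [1, 1, 1, 1, 0, 1],
                   "arm": [1, 0, 1, 0, 1, 0]})
tki, chemo = df[df.arm == 1], df[df.arm == 0]

# Kaplan-Meier estimate for the TKI arm and its median time to event.
km = KaplanMeierFitter().fit(tki.pfs, tki.event, label="bev + EGFR-TKI")
print("median PFS:", km.median_survival_time_)

# Log-rank comparison of cumulative survival between the two arms.
print("log-rank P =", logrank_test(tki.pfs, chemo.pfs, tki.event, chemo.event).p_value)

# Cox proportional hazards model for the HR of the arm with 95% CI.
CoxPHFitter().fit(df, duration_col="pfs", event_col="event").print_summary()
```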
Cardiometabolic and Anthropometric Outcomes of Intermittent Fasting Among Civil Servants With Overweight and Obesity: Study Protocol for a Nonrandomized Controlled Trial
Background: Overweight and obesity among adults are a growing global public health threat and an essential risk factor for various noncommunicable diseases. Although intermittent fasting is a generally new dietary approach to weight management that has been increasingly practiced worldwide, the effectiveness of 2 days per week dry fasting remains unclear.
Objective: The Cardiometabolic and Anthropometric Outcomes of Intermittent Fasting study aims to determine the cardiometabolic, anthropometric, dietary intake, and quality of life changes among civil servants with overweight and obesity following combined intermittent fasting and healthy plate (IFHP) and healthy plate (HP) interventions, and to explore the participants' experiences.
Methods: We designed a mixed methods quasi-experimental study to evaluate the effectiveness of the IFHP and HP methods among adults with overweight and obesity. A total of 177 participants were recruited for this study, of whom 91 (51.4%) were allocated to the IFHP group and 86 (48.6%) to the HP group. The intervention comprised 2 phases: supervised (12 weeks) and unsupervised (12 weeks). Data collection was conducted at baseline, after the supervised phase (week 12), and after the unsupervised phase (week 24). Serum and whole blood samples were collected from each participant for analysis. Data on sociodemographic factors, quality of life, physical activity, and dietary intake were also obtained using questionnaires during data collection.
Results: Most of the participants were female (147/177, 83.1%) and Malay (141/177, 79.7%). The expected outcomes of this study are changes in body weight, body composition, quality of life, physical activity, dietary intake, and cardiometabolic parameters such as fasting blood glucose, 2-hour postprandial blood glucose, hemoglobin A1c, fasting insulin, and lipid profile.
Conclusions: The Cardiometabolic and Anthropometric Outcomes of Intermittent Fasting study is a mixed methods study to evaluate the effectiveness of combined IFHP and HP interventions on cardiometabolic and anthropometric parameters and explore participants' experiences throughout the study.
Background
The overweight and obesity epidemic has become one of the most alarming public health threats. Although largely preventable, the worldwide prevalence of obesity nearly tripled between 1975 and 2016 and shows an increasing trend. In 2016, >1.9 billion adults were overweight, and 650 million were obese. If this current trend continues, it is estimated that 2.7 billion and >1 billion adults will be overweight and obese, respectively, by 2025 [1]. In Malaysia, the National Health and Morbidity Survey (NHMS) 2019 reported that 50.1% of adults were either overweight (30.4%) or obese (19.7%), which increased compared with the NHMS 2011 (overweight 29.4% and obesity 15.1%) and 2015 (overweight 30% and obesity 17.7%) findings [2].
The relationship between obesity and poor health outcomes is well established. Despite the increased risk of noncommunicable diseases such as hypertension, diabetes, stroke, coronary heart disease, and certain cancers, a growing body of literature has demonstrated a positive relationship between obesity and various mental health issues such as depression and poor quality of life [3,4]. The World Health Organization has established multiple strategies that describe the actions that need to be taken by stakeholders at the global, regional, and local levels to combat obesity in adults and children. Furthermore, effective and feasible policy actions have been included in the "Global action plan on physical activity 2018-2030: more active people for a healthier world" to increase physical activity globally [1].
As obesity occurs because of a positive energy balance in the body, strategies for preventing and treating obesity mainly focus on dietary modification and increasing physical activity. A form of calorie-restriction dietary protocol is intermittent fasting (IF), which encompasses various eating plans that cycle between fasting and nonfasting states over a defined period to create a negative energy balance, thereby inducing weight loss. Studies have shown that IF is effective in reducing body weight and improving metabolic outcomes [5,6]. Although the effects of wet IF have been well documented, the benefits of dry IF (except for Ramadan fasting for Muslims) have not been clearly indicated in previous studies. Wet or water IF is defined as fasting during which all food and drink except water are restricted [7], whereas dry IF is complete fasting without any food and fluid intake [8]. In predominantly Muslim countries, dry fasting during Ramadan and voluntary fasting 2 days per week (Mondays and Thursdays) are widely practiced.
As portion size is a crucial determinant of energy intake, portion control by controlling serving size is another practical method for reducing calorie intake and promoting weight loss. This portion-control method has been widely practiced and studied worldwide, using different portion divisions depending on the culture and eating habits [9,10]. The Malaysia Healthy Plate, a portion-control dietary plan, was created to translate the messages in the Malaysia Dietary Guideline 2010 and Malaysia Food Pyramid 2010 [11]. It is a visual tool that emphasizes the quarter-quarter-half concept and provides a quick visual technique that helps ensure that the intake of food is within the recommended guidelines. Specifically, the Malaysia Healthy Plate is a single-meal guide that divides the plate into a quarter plate of grains or grain products; a quarter plate of fish, poultry, meat, or egg; and a half plate of fruits and vegetables [12].
Objectives
Although weight loss has been reported in studies of conventional IF, the effects of dry IF on 2 nonconsecutive days per week on cardiometabolic and anthropometric outcomes remain unclear. Similarly, although the Malaysian Healthy Plate policy has been widely practiced and publicized since 2010, the effectiveness of this eating plan in improving cardiometabolic risks and promoting weight loss is still not well documented. Thus, we established the Cardiometabolic and Anthropometric Outcomes of Intermittent Fasting study to determine the cardiometabolic, anthropometric, dietary intake, and quality of life changes among civil servants with overweight and obesity following combined IF and healthy plate (IFHP) and healthy plate (HP) interventions, and to explore the participants' experiences throughout the study. We hypothesized that combining the dry IF and HP diet protocols will improve these parameters more than HP alone.
Study Design
This is a quasi-experimental study applying a mixed study method that consists of 2 parts: quantitative and qualitative. The quantitative part involved allocating participants into 2 intervention arms: the combined IFHP group and the HP group and measuring the parameters of cardiometabolic risk and anthropometrics at baseline, month 3 and month 6. The qualitative part aimed to explore the facilitators and barriers that enabled or admonished the success of weight loss in the IFHP group through focus group discussions (FGDs) conducted at month 6.
Study Site
The participants were allocated to each intervention arm according to their institutes: the National Institutes of Health in Setia Alam; the Institute for Medical Research, Jalan Pahang; and the Institut Latihan Kementerian Kesihatan Malaysia (Teknologi Makmal Perubatan), Jalan Pahang. The distance between Jalan Pahang and Setia Alam is approximately 40 km. The allocation was done in this manner to avoid contamination bias. The study population had relatively similar sociodemographic features, environment, facilities, and nature of work. The allocation of the intervention was determined a priori and was based on the feasibility of monitoring the study participants.
Inclusion and Exclusion Criteria
Workers aged 19 to 59 years with a BMI of ≥23 kg/m 2 (overweight or obese), ready to participate in the intervention (assessed through readiness to participate in screening), and providing informed consent were included in this study.
Workers who (1) had recent involvement in weight loss program or activity (IF, diet changes, or physical activity changes or any activities that were performed constantly to reduce weight); (2) were affected by any eating disorder; (3) were diagnosed with diabetes and hypertension (on medication) or other metabolic health disturbances such as thyroid disease, chronic kidney disease, malignancy, and polycystic ovarian syndrome; (4) were taking any medication or supplements that can affect study outcome; (5) were pregnant; and (6) had lack of capacity or language skills to independently follow the protocol were excluded from this study.
Sample Size
Sample size calculation was conducted using a power and sample size program. It followed the rules required for comparison between the 2 groups [13]. The sample size was estimated using the level of significance (α=.05) and power of the study (1-β=.80), minimum suggested difference (delta) of 5% (SD 10%) weight loss that may be achieved under this intervention, and the corresponding differences among groups.
The assumption of 5% weight loss used in the sample size calculation was based on a review conducted by Ryan and Yockey [14], which stated that a minimum weight loss of 5% is needed to improve cardiometabolic risk such as hypertension, diabetes mellitus, and hyperlipidemia. The minimum sample size required for this study was 64 participants. Considering 40% attrition, the required sample size for each arm was 90 participants. A total of 180 participants were required for this study.
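A sketch reproducing these figures with a two-sample t-test power calculation (standardised effect size = 5%/10% = 0.5); note that the multiplicative 1.4 attrition inflation is our reading, chosen because it reproduces the reported 90 per arm:

```python
import math
from statsmodels.stats.power import TTestIndPower

effect_size = 5 / 10  # delta of 5% weight loss / SD of 10%
n_per_arm = math.ceil(TTestIndPower().solve_power(effect_size=effect_size,
                                                  alpha=0.05, power=0.80))
n_inflated = math.ceil(n_per_arm * 1.4)  # allow for 40% attrition
print(n_per_arm, n_inflated, 2 * n_inflated)  # 64 90 180
```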
Dietary Protocols
The combined IFHP regimen consisted of dry fasting from dawn to dusk for 2 days a week (Mondays and Thursdays) and practice of the HP concept for the rest of the week. During the fasting days, the participants were encouraged to have a meal before dawn. No food or drink was allowed after dawn (approximately 13 hours) until sunset. They did not need to follow the HP diet on fasting days. Smoking and sexual activity were also forbidden during the fasting day, following the Sunnah fasting obligation. Fasting adherence was recorded as 0, 1, or 2 fasting days per week. For the rest of the week, participants were obligated to consume meals according to the HP concept. Female participants were discouraged from fasting during menstruation.
Participants in the HP group were asked to practice the HP concept daily: division of plate portions into a quarter for protein, a quarter for complex carbohydrates, and a half for fruits and vegetables. Participants were advised to practice HP for all 3 main meals per day. However, the practice of at least one HP meal per day is considered the minimum requirement for adherence to the dietary protocol. The research assistants (Nurul Hidayah binti Mat Yusoff and Norsyuhada binti Japri) monitored the intervention through a daily record of food intake picture (one meal per day) and a weekly fasting record.
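One possible weekly adherence check combining these rules (a sketch; the study recorded 0, 1, or 2 fasting days rather than enforcing a cutoff, so the fasting threshold here is illustrative):

```python
def ifhp_weekly_adherence(fast_days_completed, hp_meals_per_nonfasting_day):
    """fast_days_completed: 0, 1 or 2 (Mondays/Thursdays actually fasted).
    hp_meals_per_nonfasting_day: HP-compliant meal counts, one per day."""
    fasting_ok = fast_days_completed >= 1  # illustrative cutoff, not the protocol's
    hp_ok = all(m >= 1 for m in hp_meals_per_nonfasting_day)  # >= 1 HP meal/day
    return fasting_ok and hp_ok

print(ifhp_weekly_adherence(2, [3, 2, 1, 1, 2]))  # True
```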
Recruitment Phase
The recruitment of participants for this study comprised two phases: (1) health screening and (2) readiness to participate in screening. An invitation to participate in the study was sent through Google Forms, emails, and phone calls. Volunteers were screened for inclusion and exclusion criteria by the study team. Those eligible were screened again for readiness to participate, which was conducted through face-to-face interviews. During the interviews, the participants' readiness (motivation, willingness to commit, and enthusiasm) was assessed by a trained psychologist from the Institute for Health Behavioral Research. Only participants deemed ready to commit were included in the study. Informed consent, as well as a behavioral contract, was signed by each enrolled participant after being clearly explained the purpose of each document in the research. Both researchers and participants were unblinded to the intervention.
Intervention Phase
The overall duration of the intervention phase was 6 months, with 12 weeks in the supervised phase, followed by 12 weeks in the unsupervised phase. During the supervised phase, the participants started the diet protocol according to the designated intervention group. In the IFHP group, participants were reminded to fast, through a message sent to their mobile phones on the eve of fasting days, and accomplishments were recorded twice weekly. Participants in both groups were required to send a picture of one of their meals to the research assistant. In contrast, no fasting reminder, weekly fasting records, or meal pictures were sent during the latter unsupervised phase.
Data Collection
During the 6-month study duration, data collection was conducted at 3 points: baseline (before starting the supervised phase), month 3 (at the end of the supervised phase), and month 6 (at the end of the unsupervised phase). Participants were asked to answer questionnaires on social demography (during baseline only), the Food Frequency Questionnaire (FFQ), the International Physical Activity Questionnaire-Short Form (IPAQ-SF), and the Obesity and Weight-Loss Quality of Life (OWLQOL) Questionnaire. Anthropometry measurements and fasting blood samples were taken. Oral glucose tolerance test and body composition analysis were also performed.
Self-administered Questionnaire
The FFQ was used to determine the frequency of food and beverage consumption over the previous month. The questionnaire consisted of questions covering the frequency of cereals and cereal products, fast food, meat and meat products, fish and seafood, eggs, legumes and legume products, milk and milk products, vegetables, fruits, drinks, alcoholic drinks, confectionaries, bread spreads, and flavor intake. The validated Malay version of the FFQ used in this study consisted of 165 items, and the participants required approximately 30 minutes to answer the questions at each point of data collection [15]. The records were analyzed using Nutritionist Pro Nutrition Analysis Software 7.8.0 (Axxya Systems, 2021) to determine their energy and macronutrient intake.
To measure the quality of life, a validated Malay version of the OWLQOL questionnaire was used. The OWLQOL is a self-administered questionnaire that assesses participants' feelings about obesity and their efforts in weight loss [16]. The 17 OWLQOL items consist of 7-point scale responses ranging from 0 (not at all) to 6 (a very great deal). The score for each item was reversed before the total score was obtained. Consequently, it was transformed to a scale of 0 to 100, with a higher score indicating better quality of life [17]. The Malay version of the questionnaire has been validated among 28 female health staff with overweight and obesity, with a Cronbach α of .953 [18]. The participants needed approximately 10 minutes to complete the questionnaire.
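As an illustration of the scoring steps just described, the following sketch is our own implementation of the published procedure; the function name and input format are assumptions.

```python
def owlqol_score(responses):
    """Score the 17-item OWLQOL: reverse each 0-6 response, sum the items,
    and rescale to 0-100 (higher = better quality of life)."""
    assert len(responses) == 17 and all(0 <= r <= 6 for r in responses)
    total = sum(6 - r for r in responses)  # reversed sum, range 0-102
    return total / (17 * 6) * 100          # transform to the 0-100 scale

# Example: a participant answering 2 on every item scores ~66.7
print(round(owlqol_score([2] * 17), 1))
```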
The IPAQ-SF was used to measure the participants' physical activity in the past week. The questionnaire was validated for use by adults in 12 countries [19]. In this study, we used the Malay version of the IPAQ-SF, which was validated in a study using data obtained from the NHMS 2011 [20]. The participants were requested to record how many days in the past week they spent on specific activities (vigorous and moderate activities and walking) for at least 10 minutes and the amount of time (in minutes) they engaged in a particular activity on an ordinary day. Physical activity level was calculated as the energy expenditure or metabolic equivalent task (MET) minutes per week (MET-minutes per week) based on the IPAQ scoring protocol [21]. To obtain MET scores for each activity, the total minutes spent on vigorous activity, moderate-intensity activity, and walking over the last 7 days were multiplied by 8.0, 4.0, and 3.3, respectively. The total physical activity score was calculated as the sum of all the MET scores from the 3 activity groups. Physical activity can also be categorized into low, moderate, and high physical activity levels, based on the scoring protocol available in the IPAQ website guidelines [21].
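For illustration, the MET computation described above can be sketched as follows (an illustrative helper of our own; IPAQ truncation rules for extreme values are omitted):

```python
MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_met_minutes(minutes_per_day, days_per_week):
    """Total MET-minutes/week: activity minutes x days x MET weight,
    summed over the 3 IPAQ-SF activity types."""
    return sum(
        MET_WEIGHTS[a] * minutes_per_day[a] * days_per_week[a]
        for a in MET_WEIGHTS
    )

# Example: 30 min walking on 5 days and 20 min moderate activity on 3 days
print(ipaq_met_minutes(
    {"walking": 30, "moderate": 20, "vigorous": 0},
    {"walking": 5, "moderate": 3, "vigorous": 0},
))  # 3.3*30*5 + 4.0*20*3 = 735.0 MET-minutes/week
```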
Sedentary behavior was also measured in this study. The question used to measure sedentary behavior was included in the IPAQ-SF questionnaire, based on the IPAQ sitting question. Participants were asked to state the total time they spent (hours) sitting or lying down, whether in the workplace, at home, or while traveling, excluding the time spent sleeping, on a typical day. The total daily sitting time was used as an indicator of sedentary behavior.
Anthropometric Measurements
Body weight and height were measured using a seca electronic column scale (SECA GmbH and Co KG) in kilograms and centimeters to the nearest 0.1 kg and 0.1 cm, respectively. Body weight was measured in light clothing, and participants were asked to remove their outer garments and shoes. BMI was calculated by dividing weight by height squared (kg/m2). As only participants with overweight and obesity were included, we categorized them into overweight (23.0-27.4 kg/m2), preobese (27.5-32.4 kg/m2), obese class I (32.5-37.4 kg/m2), and obese class II (≥37.5 kg/m2), based on the cutoff points for public health action for Malaysia [22].
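For illustration, this categorization can be sketched as a simple function (our own helper; the handling of BMI values below 23.0 kg/m2, which were excluded from the study, is an assumption):

```python
def bmi_category(weight_kg, height_m):
    """Return BMI and its category under the Malaysian public health
    action cutoffs cited in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 37.5:
        category = "obese class II"
    elif bmi >= 32.5:
        category = "obese class I"
    elif bmi >= 27.5:
        category = "preobese"
    elif bmi >= 23.0:
        category = "overweight"
    else:
        category = "below study range"  # such participants were excluded
    return round(bmi, 1), category

print(bmi_category(78.0, 1.62))  # (29.7, 'preobese')
```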
Waist and hip circumferences were measured using a SECA measuring tape (SECA), to the nearest 0.1 cm, with the participant standing. Waist circumference was measured at the midpoint between the top of the iliac crest and the lower margin of the last palpable rib, whereas hip circumference was measured at the widest diameter around the buttocks. The waist-to-hip ratio was calculated by dividing the waist measurement by the hip measurement. On the basis of the World Health Organization cutoff points, waist-to-hip ratios above 0.90 (men) and 0.85 (women) are abnormal, and the risk of metabolic complications increases substantially beyond these points [23].
Blood pressure was measured using an automated upper arm device (Omron Automated Blood Pressure Monitor; HEM 7130). Body composition parameters such as fat mass and fat-free mass were measured using a tetrapolar bioimpedance multifrequency InBody 770 analyzer (Biospace). Personal profiles (age, height, weight, and sex) were entered upon measurement reading.
For each parameter, 2 measurements were taken, and their average was calculated to minimize measurement error.
Biochemical Testing
Before blood collection, all participants were required to fast overnight for approximately 8 to 10 hours. Approximately 15 mL of fasting venous blood was taken from each participant by the medical officers for standard biochemical tests such as fasting blood glucose, hemoglobin A1c, fasting insulin, and the fasting lipid profile (triglycerides, total cholesterol, high-density lipoprotein cholesterol, and low-density lipoprotein [LDL] cholesterol). For the oral glucose tolerance test, the participant drank a 250-mL solution containing 75 g of glucose, and another 5 mL of venous blood was collected 2 hours later.
Blood samples were processed within 2 hours, and aliquots of serum or plasma samples were stored at −20 °C before analysis. Excess blood samples will be stored for up to 20 years and will be used for future research, as clearly stated in the consent form.
The hemoglobin A1c level was determined by cation-exchange high-performance liquid chromatography (Adams A1c HA-8160; Arkray Inc) following the National Glycohemoglobin Standardization Program guidelines.
Fasting plasma glucose, triglycerides, total cholesterol, high-density lipoprotein cholesterol, and LDL cholesterol were analyzed using an automated analyzer (Dirui CS-400) with reagents purchased from Randox Laboratories. Consent was obtained from the participant for permission to extract DNA or RNA and store the remaining samples at −80 °C for future biomarker research related to obesity.
DNA Extraction
Genomic DNA was isolated from frozen peripheral blood samples using the QIAamp Blood Mini Extraction Kit, according to the manufacturer's protocol (Qiagen). Briefly, 20 µL of QIAGEN Protease was added to 200 µL of blood sample, followed by 200 µL of Buffer AL. The mixture was vortexed thoroughly and incubated at 56 °C for 10 minutes. Next, 200 µL of absolute ethanol was added to the mixture before it was transferred to a QIAamp Mini spin column and centrifuged at 8000 rpm. Two washing steps were then performed using washing buffers AW1 and AW2. Finally, 100 µL of distilled water was added and incubated at room temperature for 1 minute, followed by centrifugation at 8000 rpm for 1 minute to elute the DNA. The quality and quantity of the extracted DNA were measured using a NanoDrop spectrophotometer before storage at −20 °C for future use.
Qualitative Method
The FGDs were conducted after month 6 of the study to explore the experiences the participants went through, including the enablers of and barriers to their weight loss outcomes, and to obtain their insights on how to improve the intervention.
A trained psychologist from the Institute for Behavioral Research conducted the FGDs using a predetermined interview guide with probes. An audio recorder was used to record the conversations for transcription. Each FGD took approximately 60 to 90 minutes. In total, 4 groups were involved in the FGDs: 2 groups consisted of participants who had successfully reduced their weight by at least 4% from baseline, whereas the other 2 groups comprised those who did not meet the weight loss requirement predetermined by the study parameters.
A summary of the study outcomes for each part of the study is listed in Table 1.
Training of Study Team
Before the beginning of data collection, a 3-day workshop was conducted to train the research team members in the skills needed during data collection. The training included techniques for anthropometric measurements, body composition measurement using the InBody 770 analyzer, and an explanation of the questionnaires used in the study. Presentations on the data collection workflow, intervention supervision processes, and biochemical testing procedures were also included. In addition, selected study members were trained by a psychologist from the Institute for Behavioral Research on conducting the readiness-to-participate screening during the recruitment phase.
Data Management
To ascertain that data collection and record keeping are conducted efficiently, a data collection booklet was developed and assigned to each participant in this study. This booklet consisted of 5 sections (sociodemographic, quality of life, physical activity, dietary record, and anthropometry measurements) sorted into 3 parts representing each point of data collection: baseline, month 3, and month 6. This booklet was used as a data-collection tool to record the responses and measurements of the participants.
The data recorded in the booklet were entered into a database at the end of the study. Data cleaning was conducted by crosschecking all entered data against the booklet and exploring the data to detect significant outliers that may have resulted from measurement or data entry errors.
Data for qualitative part were collected using an audio recorder, and each recording was transcribed verbatim by the qualitative team. Each transcript was checked by an independent member and confirmed by a transcriber and interviewer. Consent for audio recording was obtained before the interviews.
Data Analysis
Analysis was conducted using the SPSS software (version 25; IBM Corp). Data for continuous variables are presented as mean (SD) or as median (IQR) for nonnormally distributed data. For categorical variables, frequencies were calculated and are presented as percentages. Variables were compared using the independent 2-tailed t test or Mann-Whitney U test for continuous variables and the chi-square or Fisher exact (n≤5 in any cell) test for categorical variables. All statistical tests were 2-sided, and the significance level was set at P<.05. In further analysis, repeated measures ANOVA will be used to compare the within- and between-group changes in the outcomes, adjusted for possible confounders such as age, ethnicity, and gender.
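For illustration, the test-selection logic for continuous outcomes can be sketched as follows (synthetic data and an illustrative helper of our own, not the study's SPSS analysis):

```python
import numpy as np
from scipy import stats

def compare_groups(x, y, normal=True):
    """Between-group test as described: independent 2-tailed t test for
    normally distributed data, Mann-Whitney U test otherwise."""
    if normal:
        return stats.ttest_ind(x, y)
    return stats.mannwhitneyu(x, y, alternative="two-sided")

# Synthetic % weight-change data for the two arms (illustration only)
rng = np.random.default_rng(0)
ifhp = rng.normal(-3.0, 2.0, size=63)
hp = rng.normal(-1.5, 2.0, size=59)
print(compare_groups(ifhp, hp))
```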
Data from the FGDs were analyzed using thematic analysis. Interviews transcribed verbatim were independently read by a qualitative researcher (MZJ) to identify the preliminary codes.
An interpretivist approach was taken to interpret and code participants' experiences across the entire study, and the coding of participants' feedback was performed without prior assumptions or subjective interpretation by the researchers. Meaning units were reviewed, identified, and sorted into initial open codes before being grouped into categories. Finally, through consensus, the content of each category was summarized and organized into main themes, and the most representative participant quotes for each theme were chosen to support the results.
Ethics Approval and Informed Consent
Ethics approval for this study has been obtained from the Medical Research & Ethics Committee, Ministry of Health Malaysia (NMRR-19-3261-51726). Before recruitment, written informed consent was obtained from each participant, including the storage of samples for biochemical and future DNA analyses. Participants were also informed that future research will be related to medical conditions and current interventions and that their privacy will be protected. Before enrollment, all participants were fully informed of the potential risks associated with engaging in this study. They were free to withdraw from the study at any time during their participation. This study was conducted in full conformity with the current revision of the Declaration of Helsinki and International Council for Harmonisation Guidelines for Good Clinical Practice.
Study Participants
A total of 302 volunteers were interested in joining the study and underwent the screening process. After screening for BMI and eligibility during the first stage of recruitment, of the 302 volunteers, only 203 (67.2%) were found eligible. Most volunteers were excluded owing to a BMI of <23 kg/m2 or having chronic diseases requiring medication, such as diabetes mellitus and hypertension. During the readiness-to-participate screening (stage 2 of the recruitment phase), 27 volunteers were excluded for various reasons, including but not limited to commitment issues, furthering their education, or moving away. Hence, a total of 177 participants were recruited in this study: 91 (51.4%) in the IFHP group and 86 (48.6%) in the HP group (Figure 1). During the supervised intervention period, 28 participants withdrew from the study across both groups (IFHP: 16/28, 57%; HP: 12/28, 43%), whereas 27 withdrew during the unsupervised period (IFHP: 12/27, 44%; HP: 15/27, 56%). The reasons for withdrawal included pregnancy (13/55, 24%), inability to commit to the study intervention (24/55, 44%), transfer to a different workplace (8/55, 15%), being diagnosed with hypertension, diabetes, or hypercholesterolemia requiring medication (5/55, 9%), and other reasons (5/55, 9%). There were 63 and 59 participants who completed the study in the IFHP and HP groups, respectively. A comparison of anthropometric measurements of the study participants among the intervention groups is presented in Table 3. On the basis of the BMI category, most participants were preobese (69/177, 38.9%), followed by overweight (63/177, 35.6%), obese class I (32/177, 18.1%), and obese class II (13/177, 7.3%). Although the BMI of the participants in the IFHP group was slightly higher than that of the participants in the HP group, the difference was not statistically significant (P=.13; Table 3). Overall, no significant differences were observed in sociodemographic characteristics and anthropometric measurements between the 2 groups at baseline, except for ethnicity and job category (P<.05; Tables 2 and 3).
Sociodemographic Characteristics
Six months after the intervention, 21 participants were interviewed for their feedback across 4 different groups: 2 groups of participants who were successful in their weight loss attempts and 2 groups who did not meet the predetermined weight loss requirement. Four main themes were constructed from the feedback given: efficacy of the intervention, barriers and facilitators that hinder or enable weight loss attempts, support during the intervention, and perceived sustainability of the intervention. This feedback serves as a platform for researchers to improve future interventions.
Principal Findings
We hypothesized that there would be a significant improvement in cardiometabolic and anthropometric parameters among participants in the IFHP group following the intervention after 3 and 6 months. We expected that these changes would also be present among participants in the HP group, but the degree of change would be more prominent among participants in the IFHP group. We also believe that the effectiveness of the intervention in improving cardiometabolic and anthropometric outcomes was driven by both personal motivation and a strong support system.
The preliminary results showed no significant differences in most sociodemographic characteristics and anthropometric measurements of the participants between the 2 intervention groups. The significant difference observed in job categories is most likely because of the departmental units stationed at the Institute for Medical Research Jalan Pahang being diagnostic based; thus, there was a higher proportion of medical laboratory technologists in the HP group than in the IFHP group.
Comparison With Prior Work
On the basis of a meta-analysis by Harris et al [24], there was no significant difference in weight loss between continuous and intermittent energy restriction. Thus, it can be concluded that, to reduce body weight, intermittent energy restriction is an alternative to continuous energy restriction and may be preferred because of its feasibility and practicality.
Despite robust evidence supporting the effectiveness of wet IF in reducing weight and improving cardiometabolic risks, the data on dry IF, especially 2 days per week fasting (such as fasting on Mondays and Thursdays), remain limited. Voluntary Sunnah fasting on Monday and Thursday is widely practiced by Muslims worldwide and is culturally more acceptable in Malaysia, as most Malaysians are Muslims. In 2013, Teng et al [25] compared the effect of dry fasting on Mondays and Thursdays combined with calorie restriction (fasting calorie restriction) against a control group on metabolic parameters. They found that participants in the fasting calorie restriction groups showed significant interaction effects on body weight, BMI, body composition, blood pressure, total cholesterol, and LDL cholesterol compared with the participants in the control group [25]. However, as fasting was combined with calorie restriction and compared with a control group, the sole effect of fasting could not be isolated. Furthermore, the study was conducted among Malay men aged 50 to 70 years, thus limiting the generalizability of the findings to the general population. Meanwhile, this study compared the combined IFHP regimen with the HP method alone and involved adults (aged >18 years) of both sexes.
Strengths and Limitations
A strength of our study is that we applied both quantitative and qualitative methods. The mixed method allows us to measure the effectiveness of the intervention and explore the challenges of practicing it simultaneously. This integrated framework is crucial for a better understanding of the challenges faced during dietary intervention in obesity prevention programs, so that such programs can be improved in the future to enhance the outcomes and sustainability of health changes.
The main limitation of our study is the effect of the movement control order due to the COVID-19 pandemic on intervention compliance and weight management. The implementation of movement control orders and work-from-home arrangements would limit participants' physical activity, expose them to unhealthy eating, reduce their motivation toward weight loss, and affect their control of food intake. Studies have shown that social lockdowns have negative consequences on weight-related behavior and weight management [26,27].
Future Directions
Adherence to dietary interventions is essential to ensure the validity of research findings and, most importantly, to ascertain the sustainability of the desired outcomes beyond the research period. As described by the World Health Organization, adherence is "the extent to which a person's behavior-taking medication, following a diet, and executing lifestyle changes, corresponds with agreed recommendations from a health care provider" [28]. In this study, we examined the elements of adherence in both the quantitative and qualitative parts. Quantitatively, we investigated adherence to the intervention and the sustainability of outcome changes in the unsupervised intervention phase; we measured the outcomes at the end of that phase and compared them with those from the 2 previous data collection points. In the qualitative part, we conducted the readiness-to-participate screening during the recruitment phase, in which motivation and readiness to comply with the intervention protocols were assessed. In addition, FGDs were conducted at the end of the study to explore the participants' experiences while undergoing the intervention, including barriers and enablers that influence their adherence to the diet protocols.
We plan to disseminate the study results to collaborating institutes and organizations, study participants, and respective stakeholders. These findings will be submitted to 2 peer-reviewed journals and presented at academic conferences.
Conclusions
With the increase in the prevalence of overweight and obesity worldwide, a weight loss method that is not only effective but also practical and easy to comply with is required. Although IF has been widely practiced and studied, data on the effectiveness of dry fasting in reducing weight and cardiometabolic risk are limited. We established the Cardiometabolic and Anthropometric Outcomes of Intermittent Fasting study to determine the effectiveness of the combined IFHP regimen, compared with HP alone, in improving anthropometric and cardiometabolic outcomes among civil servants with overweight and obesity. The mixed methods study was designed to measure the changes quantitatively, reflect the participants' points of view, and ensure that the study findings are grounded in their experiences. The study findings and their in-depth explanation may contribute to the development of more effective and feasible obesity prevention methods, improve current health policies, and provide new insights that will stimulate new research questions in the future.
Trichobezoars in children: therapeutic complications
Trichobezoars are concretions formed by the accumulation of hair or fibers in the gastrointestinal tract, usually associated with underlying psychiatric disorders in females between 13 and 20 years old. Endoscopy, the gold standard for diagnosis, brings some additional advantages: sample taking, size reduction and, rarely, mass removal. This study shows that endoscopy can cause severe complications resulting in a surgical emergency.
Introduction
Bezoars are concretions formed in the gastrointestinal tract by gradual accumulation of non-absorbable food or fibers. There are different types of bezoars according to their constitutive material: trichobezoars, phytobezoars, lactobezoars, or bezoars of any indigestible material. Trichobezoars, made of hair or hair-like fibers, are associated with underlying psychiatric disorders 1,2 and most cases are reported in females between 13 and 20 years of age. 1 Clinical manifestations depend on the bezoar's location and size. 3 Affected patients can remain asymptomatic for many years, until the bezoar increases in size to the point of intestinal obstruction. Abdominal pain, nausea and vomiting, obstruction, and peritonitis are the most common presentations. 1 In many cases the trichobezoar is confined within the stomach, but in some cases it extends from the stomach into the small intestine (or even the colon). This unusual and rare form is called Rapunzel syndrome. 4 Endoscopy is the gold standard for diagnosis and brings some additional advantages: sample taking, size reduction and, rarely, mass removal. Laparotomy is still the treatment most frequently chosen, though. 5
Case report
A 10-year-old female, born at 40 weeks' gestation, presented to the emergency room with poorly localized abdominal pain since the morning. She was alert and responsive, in good general physical condition, with normal thoracic and cardiovascular examinations and a soft abdomen, but with mild tenderness in the right abdominal quadrants. A palpable mass of about 4x6 cm was recognized in the epigastrium. Peristalsis was present, and the liver and spleen were within normal limits. The rest of the physical examination was normal. Her parents reported early satiety and no changes in her bowel habits. There was no history of acid reflux, diarrhea, increased flatulence, recent illnesses or fever. The pediatric surgeon requested urgent blood tests, a plain radiograph and ultrasound (US) of the abdomen. Blood tests were normal. The plain radiograph of the abdomen showed moderate distension of the stomach, almost completely occupied by a non-homogeneous radio-opaque material, with no significant air-fluid levels or subdiaphragmatic free air in the standing view. Although limited by meteorism, ultrasound did not show any abnormalities. Urgent abdominal computed tomography (CT) with contrast showed a 13x6x4 cm mass in the lumen of the stomach. An upper gastrointestinal endoscopy was performed to identify the origin of the mass, to attempt its reduction in size and to provide for its removal (Figure 1). During the procedure, the endoscope wedged in the mass occupying the stomach. Therefore, an emergency gastrotomy was performed to remove the instrument manually. The foreign body was taken out in one piece without any other complications (Figure 2). Histological examination described a brownish, hard, 15x7x3.5 cm mass including hair and food material, confirming the diagnosis of trichobezoar. From a direct interview with the parents and the anamnesis, a history of trichophagia and a picture of obsessive-compulsive disorder (OCD) emerged. The patient admitted that she liked eating hair. After surgery the patient was treated with antibiotics, analgesic drugs and total parenteral nutrition (TPN) for 8 days; she was discharged home 10 days later, having recovered without complications. Psychiatric follow-up was arranged.
Discussion
Bezoars are masses of non-absorbable food or fibers progressively accumulated in the gastrointestinal tract. The first case of human bezoar was described in 1779 during the autopsy of a patient who died from gastric perforation and peritonitis. 1 Based on their composition, bezoars are classified into phytobezoars (composed of vegetable or fruit fibers), trichobezoars (balls of hair or hair-like fibers), diospyrobezoars (of persimmon), pharmacobezoars (of pills), lactobezoars (of milk curd), lithobezoars (fragments of stones) or plasticobezoars (plastic). 2,6,7 Gastric trichobezoar represents 50% of all bezoars, and the incidence in the general population varies from 0.4% to 1%. However, as the condition occurs mainly in patients with psychiatric disorders, it is possible that this incidence is underestimated. 7 Most cases of trichobezoar are reported in females between 13 and 20 years of age. 1 Bezoars are usually detected in patients with prior gastric surgery, because it reduces gastric motility and delays gastric emptying. 3 Otherwise, trichobezoars are associated with underlying psychiatric disorders, predominantly found in emotionally disturbed or mentally disabled youngsters. Most patients with trichobezoars suffer from trichotillomania (an "impulse control disorder" characterized by an irresistible, intense urge to pull out hair) and trichophagia. 1,6,8 Rarely, they chew hair from other sources, including hair from wigs. 6,8 Trichobezoar occurs in 1% of patients with trichophagia. It forms because hair escapes peristaltic propulsion due to its slippery surface and is retained in the folds of the gastric mucosa. 1,2 As hair accumulates, a single solid mass forms, assuming the shape of the stomach. The patient's breath acquires a putrid smell due to the decomposition and fermentation of fats, and the acid secretions of the stomach denature the hair's proteins and turn the bezoar black. 1 Clinical manifestations depend on the location of the bezoar. 3 Reviews showed that epigastric pain (70.2%), epigastric mass (70%), nausea and vomiting (64%), hematemesis (61%), weight loss (38%), and diarrhea and constipation (32%) are the most common symptoms, while other patients can remain asymptomatic. 1 Less frequently, it is associated with weight loss, anorexia, hematemesis, protein-losing enteropathy, iron deficiency, and megaloblastic anemia. 1,7 When not recognized, the trichobezoar grows in size and weight, increasing the risk of gastric mucosal erosion, ulceration and stomach or small intestine perforation. These complications are caused by the reduction of the blood supply to the mucosa of the stomach and part of the intestine. Acute pancreatitis, gastric emphysema and, less frequently, intussusception, obstructive jaundice and death have been reported in the literature. 1,4 Small bowel obstruction is a rare complication caused by the migration of gastric bezoars to the small bowel through fragmentation of a portion, extension or total translocation, or by a primary bezoar localized in the small bowel. 2,3 Usually, this circumstance occurs in association with underlying diseases such as diverticulum, stricture or tumor. 3 In the early stages, most trichobezoars are not recognized, due to their nonspecific presentation or lack of symptoms. Severe halitosis, patchy alopecia, and a previous history of trichotillomania and trichophagia may suggest a diagnosis of trichobezoar.
6,7 On physical examination, an abdominal mass can be found. Imaging (X-ray, US, CT) may show a mass or a filling defect, but the gold standard for diagnosis is upper gastrointestinal endoscopy. Besides direct visualization, this procedure allows sample taking to determine the composition of the mass, size reduction and potential therapeutic intervention. 1,2 The adopted therapeutic approach (endoscopy, laparoscopy or laparotomy) depends on the bezoar's consistency, size and location. Phytobezoars (made of vegetables) and lactobezoars (made of milk) can be easily treated endoscopically, while trichobezoars, especially very large ones (>20 cm), usually require surgical intervention. Specialized medical devices can fragment trichobezoars, either mechanically or with acoustic waves, in order to facilitate their surgical or endoscopic removal. 1,4 Surgery is indicated in Rapunzel syndrome, in case of very large bezoars or when severe complications occur (perforation or hemorrhage). Laparoscopy is performed in case of small to moderate-size bezoars. 1 Endoscopic removal is the most attractive choice for treatment; however, successful endoscopic removals are remarkably scarce (5%). 4,5 The size, density and hardness of the mass often make fragmentation impossible and endoscopy not a viable therapeutic option. 3-5,9 Moreover, the repeated introduction of the endoscope and its manipulation can bring severe complications, such as esophageal ulceration or perforation, esophagitis and intestinal obstruction due to the migration of fragments. 4,5 Nowadays laparoscopic removal is not a tempting therapeutic choice. Many disadvantages, such as spilling contaminated hair fragments into the abdominal cavity and the difficulty of removing the mass, may undermine a positive clinical resolution. 4 However, innovative laparoscopic techniques to treat gastric trichobezoars are emerging. 5,10 They consist in removing the mass laparoscopically through a gastrotomy in a water-impervious bag 10 or using laparoscopic-assisted techniques to provide excellent access to the stomach and remove the mass as quickly and safely as possible. 5 These studies confirm the advantages of laparoscopic-assisted procedures in reducing operative complications and operative time, and in avoiding the risk of peritoneal contamination. 5,10 However, the low invasiveness of endoscopy or laparoscopy does not outweigh the disadvantages and complexity of these procedures. 4 Laparotomy is the treatment most frequently chosen: it is 100% effective, rarely complicated, and it allows a careful examination of the entire gastrointestinal tract. 5 Besides dissolution or removal, treatment should focus on prevention of recurrence, since elimination of the mass will not alter the conditions contributing to bezoar formation. 2 For these reasons, in addition to the acute surgical treatment, parental counseling, neuropsychiatric treatment, follow-up and behavioral therapy are essential to prevent recurrence. 1,4
Conclusions
Trichobezoar is an under-diagnosed entity that has to be considered in the differential diagnosis of abdominal pain and a non-tender abdominal mass in young children. Endoscopy is a valuable diagnostic modality; in some cases it can be a successful therapeutic approach, while in others it can cause severe complications resulting in a surgical emergency.
Figure 1. Trichobezoar in the stomach seen at endoscopy.
Figure 2. Trichobezoar extracted from the stomach.
Punctuated equilibrium and progressive friction in socialist autocracy, democracy and hybrid regimes
Abstract The analysis of public policy agendas in comparative politics has been somewhat limited in terms of geography, time frame and political system, with studies on full-blown autocracies and hybrid regimes few and far between. This article addresses this gap by comparing policy dynamics in three Hungarian regimes over 73 years. Besides our theoretical contribution related to policy-making in Socialist autocracy and illiberal democracy, we also test hypotheses related to non-democratic regimes. We find that – similarly to developed democracies – policy agendas in autocracies are mostly stable with occasional but large-scale "punctuations". Our data also confirm that these punctuations are more pronounced in non-democratic polities. However, based on our results, illiberal political systems, such as the hybrid regime of Viktor Orbán, are difficult to pin down on a clear-cut continuum between democracy and autocracy, as the level of punctuation differs across policy agendas, from parliamentary debates to budgets.
Introduction
During the past decades, punctuated equilibrium theory (PET) has not only become one of the fastest developing subfields of policy studies (Weible 2014: 10), but also a premier field of empirical studies concerning policy issue priorities. PET claims that it explains what separate theories of policy change on the one hand, and policy stability on the other hand, cannot explain: policy dynamics (Baumgartner, Jones, and Mortensen 2014: 59). It offers a new framework for understanding stasis and "large-scale departures from the past" in various policy domains by emphasising two elements of the policy process: "issue definition and agenda setting" (Baumgartner, Jones, and Mortensen 2014: 60). This perspective "recognized the critical role of information in the policy process in a way that the election-centred model has not (and as) a consequence, agenda changes can occur in the absence of elections or public opinion" (Bryan, Jones, and Baumgartner 2012: 6).
One of the corollary ideas of this research agenda is the "stick-slip dynamics" of the policy process (Bryan, Jones and Baumgartner 2012: 8-9). In the political system, institutions, ideologies and norms all play a part in stabilising behaviour and, therefore, add an element of friction vis-à-vis driving forces for policy change (such as interest group lobbying or social movements). This theory of stick-slip dynamics in public policy-making has been put to the test in a score of research articles with a domestic or comparative focus, and with geographical scope mostly covering the USA and Western European democratic countries (Baumgartner et al. 2009; Walgrave and Nuytemans 2009; Walgrave and Vliegenthart 2010; Brouard 2013; Green-Pedersen and Walgrave 2014; Bonafont, Palau, and Baumgartner 2015; Vliegenthart et al. 2015; Baumgartner, Breunig, and Grossman 2019).
While these studies focused on democratic countries in the Western world, some newer papers extended the scope of investigation to non-democratic countries such as the military regimes of Turkey and Brazil, the Russian case, Hong Kong and colonial Malta (Lam and Chan 2015; Chan and Zhao 2016; Baumgartner, Carammia, Epp, Noble, Rey, and Yildirim 2017). In some cases, regime change is directly discussed from the perspective of PET. The most recent contribution to this literature (Bryan, Jones, Epp and Baumgartner 2019) provides a conceptual framework for analysing friction in different regimes, notably by focusing on the role of centralisation, incentives and information.
Yet, when it comes to another region with a turbulent past and multiple regime changes, Central and Eastern Europe (CEE), similar studies are few and far between (for two exceptions, see Boda and Patkós 2018; Sebők and Berki 2018). In light of this gap in the literature, the dual purpose of this article is to conceptualise policy dynamics for settings beyond liberal democracy and to extend the external validity of previous research on policy dynamics both in a geographical and historical sense. We investigate the core hypotheses of PET research in the context of a CEE country (Hungary) for a time period that covers multiple regimes: Socialist autocracy (1949-1989), liberal democracy (1990-2010) and a so-called "hybrid regime" (Levitsky and Way 2010), which in Hungary is associated with the second and third Orbán governments (2010-2018).
The research question of this article concerns the differences of Hungarian regimes in terms of their public policy dynamics. We follow Baumgartner et al. (2009: 608-609) and Baumgartner, Carammia, Epp, Noble, Rey, and Yildirim (2017) in testing four hypotheses. The first is the General Punctuation Hypothesis (H1), which states that output change distributions from human decision-making institutions dealing with complex problems are characterised by stability interspersed by events of punctuation as measured by a positive value of its most widely used statistical indicator (kurtosis). The second hypothesis, regarding "progressive friction" (H2), posits that these kurtosis values increase as we move from input to process, and from process to output series. The third hypothesis regarding "informational advantage" (H3) states that the level of punctuation is higher in less democratic regimes. The fourth one refers to a "hybrid anomaly", which states that the level of punctuation is the highest during hybrid regimes vis-à-vis all other regimes.
We test these propositions in the context of Hungarian politics and public policy following World War II. This extension provides geographical, political and socioeconomic breadth to extant research. Our analysis lends support to the general punctuation and the progressive friction hypotheses. The informational advantage hypothesis is also partly upheld by our evidence. However, we find limited evidence for the hybrid anomaly, which only exerts itself in the latter sections of the policy process. These results complement the findings of Sebők and Boda (2021): while their book analyses Hungarian policy-making and agendas between 1867 and 2018 from a qualitative and case-based perspective, we use a quantitative research design for comparing regimes of a shorter period (covering the regimes between 1945 and 2018).
In the following, we first provide a review of the relevant literature and the sources of the hypotheses tested. Second, we provide historical context and institutional detail related to the role of interpellations and laws in policy-making in settings beyond liberal democracy. Next, we explicate our case selection and present the data and methods used. The following section provides an empirical analysis of issue attention based on four of our hypotheses. The Discussion section evaluates the results in light of the extant comparative literature as well as the methodological problems associated with conducting comparative research on PET. The final section concludes by returning to the substantive question of the sources of dynamics in an inter-connected system of policy venues and agendas.
Theory
In the past two decades, the PET of public policy (Baumgartner and Jones 1991, 2002, 2010) has turned into a widely tested, and largely supported, theory of the policy dynamics of Western democracies (see e.g. Baumgartner et al. 2009; Breunig, Koski, and Mortensen 2009; Mortensen et al. 2011). Yet, in parallel to the emergence of this literature, a new trend took hold on the periphery of developed democracies, and later in their midst, with substantial effects on how policy decisions were made. This novel phenomenon was the rise of hybrid regimes and illiberal policy-making in an era of what was supposed to be the "end of history".
The political systems in question adopted key procedural characteristics of democracy, such as regular elections, while simultaneously displaying attributes more closely associated with authoritarian regimes, from the repression of the free press to infringements of civil rights. These regimes defied traditional categorisations and have been identified as, inter alia, competitive authoritarianism (Levitsky and Way 2002), electoral authoritarianism (Schedler 2015), illiberal democracy (Zakaria 1997), backsliding democracies (Bermeo 2016), and, perhaps most often, as hybrid regimes (Diamond 2015). Hungary under the leadership of prime minister Viktor Orbán was often mentioned by politicians and scholars alike as an ideal-typical case of such democratic backsliding.
Throughout most of its history, the development of Hungarian parliamentarism did not diverge substantially from the Western European model (Pesti 2002: 103; for a complete classification of regimes based on various sources see: Bódiné Beliznai and Mezey 2003: 107-110; Föglein, Mezey, and Révész T. 2003: 315-319; Sebők and Berki 2018: 611). After the liberation of Hungary from Nazi occupation from 1944 on, an unstable democracy was installed in which Soviet influence was overwhelming. This was followed by the de jure takeover of Hungarian democracy by USSR-aligned domestic forces in 1949. In the pseudo-parliamentarism of this Socialist autocracy between 1949 and 1989, the central role of the Hungarian parliament was abolished, marking an exception in Hungarian political development (Pesti 2002: 163; Horváth 2003: 468). The fall of this regime, and the establishment of a democracy in 1989/1990, resulted in the restoration of the central role of parliament in the Hungarian polity.
In a further development, the electoral victory of Viktor Orbán's right-wing populist Fidesz party in 2010, and the subsequent constitutional and policy changes, started a long-standing debate on the characteristics of the new Hungarian political regime. Many authors pointed to Orbán's new regime when describing the general backsliding of democracy in CEE (Ágh 2013;Sedelmeier 2013;Greskovits 2015;Hanley and Vachudova 2018). There is some debate when it comes to the finer details of this development. Lührmann and Lindberg (2019) cite an outright autocratisation of the regime while Batory (2015) describes the role of Fidesz as a populist-in-government phenomenon.
Some authors classify this process as an illiberal backlash (Buzogány and Varga 2018), noting that the decreasing role of liberal ideas had originated in the period before 2010. The result of this 'backlash' is usually seen as the creation of a (diffusely) defective democracy (Bogaards 2018) or an (externally constrained) hybrid regime (Bozóki and Hegedűs 2018; Böcskei and Szabó 2019). What is less clear is to what extent these changes can be compared to Russia, the model polity of a hybrid regime (Buzogány 2017). We conclude from this brief overview of the literature that the post-2010, "hybrid" regime of Hungary is at least worth investigating if we are interested in the external validity of well-established theories of policy dynamics.
What are the characteristics of policy-making in non-democratic settings? The selectorate theory of Bueno de Mesquita and his co-authors (Bueno de Mesquita et al. 1999; Bueno de Mesquita and Smith 2011) posits that the size of the "selectorate", that is, the group of people which has an institutionally granted right or norm of choosing the government, influences the substance of decisions made by the government. In a similar manner, Svolik (2012) explores the logic of party-based co-optation in autocracies, which governs the hierarchical assignment of services and benefits.
In these theories, policy-making is subordinated to regime survival which may distort the traditional role of the policy agenda in liberal democracies which is to gather and filter information that guides public policy decision-making. Consequently, policy agendas in autocracies and hybrid regimes may be more punctuated than in liberal democracies. In light of these theoretical considerations, our article pursues the dual aim of conceptualising policy dynamics for non-democratic polities and to extend the punctuated equilibrium framework to hitherto understudied regime types, periods and regions. We undertake this challenge by investigating four hypotheses derived from the relevant literature. We first test three hypotheses originally formulated by Baumgartner et al. (2009: 608-609) and Baumgartner, Carammia, Epp, Noble, Rey, and Yildirim (2017), as well as a fourth one based on Sebők and Berki (2018).
The first of these is the general punctuation hypothesis (H1), according to which "output change distributions from human decision-making institutions dealing with complex problems will be characterised by positive kurtosis". PET states that policy change does not occur in incremental steps but is "often disjoint, episodic, and not always predictable" (Bryan D Jones and Baumgartner 2012: 1). This school of thought was built on the empirical finding of Baumgartner and Jones (1991) that the distribution of policy changes over time does not follow the normal distribution; instead, it is characterised by punctuations, which can be quantified by the deviation from the Gaussian distribution at the tails of the distribution of changes (this deviation is often measured by the statistical metric of kurtosis).
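To make this statistical logic concrete, the following minimal sketch (our own illustration with invented attention counts, not data from this article) computes year-on-year percentage changes for a policy topic and their excess kurtosis:

```python
import numpy as np
from scipy.stats import kurtosis

# Hypothetical annual attention counts for one policy topic
counts = np.array([12, 13, 12, 14, 13, 45, 44, 43, 46, 45, 12, 11], float)

# Year-on-year percentage changes, the unit of analysis in most PET studies
pct_change = np.diff(counts) / counts[:-1] * 100

# Fisher (excess) kurtosis: 0 for a normal distribution; positive values
# indicate fat tails, i.e. punctuated rather than incremental change
print(kurtosis(pct_change, fisher=True, bias=False))
```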
This general, yet empirical, theory was later underpinned by a theory of information. Policy outputs are not a direct function of societal inputs, as decision-making may display the characteristics of a "stick-slip dynamics" (Bryan D Jones and Baumgartner 2012: 8-9). This uses an analogy from the study of earthquakes: both a dynamic force pushing on the earth's tectonic plates and a retarding force (called friction) contribute to its status. If "the forces acting on the plates are strong enough, the plates release, and, rather than slide incrementally in adjustment, slip violently, resulting in the earthquake" (Bryan D Jones and Baumgartner 2012: 8). By using the earthquake analogy, the theory relates the dynamic force pushing on the earth's tectonic plates to public inputs and the retarding force, or friction, to institutional inertia. Institutions stabilise behaviour in a progressive manner: as the number and organisational scale of institutional players grow, friction is also expected to increase in size (Bryan D Jones and Baumgartner 2012: 8).
The friction hypothesis focuses on the role various policy agendas play in shaping policy outputs (such as laws) or outcomes (e.g. budget outlays). Figure 1 presents how various data sources used in the empirical literature align on this progressive friction scale. The underlying idea for the friction process is simple: the number of issues a politician can handle at a time is limited, so it is of great importance which issues the decision-maker addresses.
Extensive media coverage of one issue or political demonstrations related to another can have a marked impact on party preferences. These information sources may be referred to as political inputs for policy decisions (Baumgartner et al. 2009: 605). These are social processes that governments monitor and act upon, and they may include information from social movements, the mass media, lobbyists, and systematic data collection (such as the unemployment or poverty rates). Yet as we move through the policy process, the actionable universe of information narrows due to the bounded rationality of decision-makers. Translating this theoretical finding into a falsifiable hypothesis, we can posit that kurtosis values will increase as one moves from input to process to output series. This is our second hypothesis, called progressive friction.
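To illustrate how H2 could be checked in practice, the sketch below compares the kurtosis of three synthetic, increasingly heavy-tailed percentage-change series standing in for input, process and output agendas (series names and data are invented; real analyses would use observed attention series):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)

# Synthetic percentage-change series drawn from Student-t distributions;
# smaller df = heavier tails = higher excess kurtosis
series = {
    "media (input)":   rng.standard_t(df=30, size=500),
    "laws (process)":  rng.standard_t(df=8, size=500),
    "budget (output)": rng.standard_t(df=5, size=500),
}
for name, s in series.items():
    print(f"{name:16s} excess kurtosis = {kurtosis(s, bias=False):.2f}")
# H2 (progressive friction) predicts these values to increase in this order
```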
These notions of punctuated equilibrium and progressive institutional friction have served as the springboard for an ever-evolving empirical research agenda. Besides input-output analysis connecting public preferences and public spending (Soroka and Wlezien 2005), virtually all intervening aspects of the general linkage process have been examined in a piecemeal manner for various variable pairs gauging the effect of public opinion on agenda venues such as political campaigns (Bevan and Krewel 2015) or executive speeches (Jennings and John 2009). The comparative testing of the friction hypothesis has yielded results that support this pattern (Baumgartner et al. 2009; Green-Pedersen and Walgrave 2014). Nevertheless, only limited testing has been undertaken for several regions outside the USA and Western Europe, such as the CEE area.
The investigation of this region from a PET perspective does not only provide for an extension of the external validity of the literature, but also offers cases which allow for the comparison of policy dynamics in different political regimes. Most CEE countries have experienced multiple regime changes during the 20th century. In Hungary, the polity was in almost constant flux with regime changes in 1918, 1919 (twice), 1944, 1945, 1949, 1956 and 1990. Despite the relatively understudied nature of regime dynamics in PET, we can rely on a few studies which directly addressed this issue.
A study on Hong Kong showed that the main characteristics in the dynamics of policy changes in not-free regimes are similar to those of free regimes (Lam and Chan 2015: 552). Looking at the People's Republic of China, Chan and Zhao (2016) complemented this insight by pointing to the difficulties in data collection in the case of not-free regimes, which exacerbates punctuation. Baumgartner, Carammia, Epp, Noble, Rey, and Yildirim (2017) analysed rival hypotheses by comparing free, partially free and not-free periods in Russia, Turkey, Brazil and Malta. They only found evidence for the informational advantage theory which claims that free regimes can collect information about the socio-economic environment more effectively. In not-free and partially free regimes, the media is constrained, civil society is controlled or repressed, and the opposition's activity (if it can exist in a legal form) is limited by the government. All these actors can be considered as part of the polity's policy capacity (Boda and Patkós 2018), as they mediate the relationship between state and society. Without their contributions, the flow of valid societal information for policy-makers is impeded. Hence, non-democratic regimes suffer an "informational disadvantage" (Baumgartner, Carammia, Epp, Noble, Rey, Yildirim, et al. 2017).
This means that even if non-democratic or hybrid regimes were interested in solving policy problems, they would be less competent to do so. A case in point is local environmental pollution: in an autocracy, the media may not report it, NGOs may be limited or forbidden to protest against it, and even citizens may have fewer venues to raise their objections, so the probability of the government noticing the problem is much lower than in a free regime. In their comparison of free, partially free and not-free periods in the histories of Russia, Turkey, Brazil and Malta, Baumgartner, Carammia, Epp, Noble, Rey, and Yildirim (2017) weighed this informational advantage theory against the rival hypothesis of institutional efficiency, which means that a more limited separation of powers in not-free regimes makes it possible for the latter to react more quickly to changes in the environment. Since free regimes can collect information about the socio-economic environment more effectively, the informational advantage theory implies that kurtosis is lower in free regimes than in not-free or partly-free regimes. The study by Baumgartner and his co-authors found support for this latter hypothesis, which also serves as our third hypothesis.
We derive our final hypothesis from one of the few studies focusing on the CEE region, Sebők and Berki (2018), which covers over 155 years of Hungarian budgetary history. Their investigation lends support to the theory of punctuated equilibrium and they provide empirical evidence for the validity of the informational advantage hypothesis, which states that democracies will show a lower level of kurtosis than other political regimes. Nevertheless, the highest levels of punctuation were associated with "partly free" (as opposed to "not free" or "free"), which, in their view, creates an anomaly pointing to the further investigation of in-between regimes. This hybrid anomaly which states that kurtosis is the highest in hybrid regimes serves as our fourth hypothesis.
Historical context and political institutions
The general punctuation theory posits an "empirical law" (Baumgartner et al., 2009) which, according to its proponents, should hold regardless of context. The theoretical reasoning behind the progressive friction hypothesis is similarly universal: information processing faces bottlenecks as we move from agenda setting closer to decision-making, as the bounded rationality of top-level office holders is a reasonable expectation regardless of whether they operate in democracies or autocracies. In the case of the other two hypotheses, it is at least conceivable that they may take on alternative interpretations depending on the specific circumstances of their application. In this section, we illustrate their logic as they concern the three distinct regime types covered in this study, and we present the institutional context needed to elucidate the core quantitative results presented below.
One of the most evident instruments for the exercise of parliament's supervisory role is the parliamentary question (Martin and Vanberg 2014: 440). Among the various types of parliamentary questions, interpellations have played a preeminent role in the operations of the Hungarian parliament over multiple political regimes. Interpellations are typically a major tool in the hands of the opposition for holding the government accountable, which is why it is interesting to compare their role with the function they fulfil in regimes that do not have a parliamentary opposition.
The frequency of amendments of the Standing orders governing interpellations was surprisingly low in the period investigated (Sebők, Molnár, and Kubik 2017). The roots of the institution of interpellation can be traced back to the end of the 18th century. It was included informally, without being expressly enshrined, in the Standing orders starting in 1848, and it was formally established in 1868 (Palasik 2017: 171). Although the legislatures of the partly democratic era between 1944 and 1949, the non-democratic period between 1949 and 1990 and the democratic one since 1990 differ fundamentally in terms of their respective roles, the formal institution of interpellations was a staple of parliamentary procedures. After a brief hiatus, the institution was immediately restored with the end of World War II in Hungary, in 1944, albeit in a limited form.
The full text of an interpellation had to be introduced at least one session day before its presentation. MPs could ask one interpellation per day, and the content of an interpellation could not contravene the foreign interests of Hungary. Fifteen minutes were allocated to present the interpellation, and a vote was held immediately after the answer of the minister assigned to answer the question. The usage of parliamentary questions flourished in the short-lived, democratic post-war period (more than 100 were introduced per year on average).
Nominally, the tradition was retained in the emerging Socialist autocracy with a new Standing orders adopted in 1950, but it was visibly hollowed out. The new rules failed to include detailed regulations concerning interpellations (they did not specify, for example, when interpellations needed to be submitted, when they could be presented in the plenary, how a response was to be given, etc.), and the National Assembly was not given the option of rejecting a minister's answer (it could merely put it on the agenda). It is important to point out that the minister's obligation to respond was retained, as was the representative's right to offer a rebuttal, along with the plenary's vote on the minister's response. Subsequently, the range of institutions that could be subjected to interpellations was expanded, and later rules also specified several other details: how interpellations were to be submitted in writing and presented orally, and how the response was to be provided.
The next amendment concerning interpellations occurred in 1967; it allowed for the interpellation of state secretaries and mandated that responses which had been voted down be referred to the parliamentary committees for further debate. A 1972 amendment opened up the possibility of interpellating the president of the Supreme Court; this constituted an overt rejection of the principle of the separation of powers. Following the elections of 1985, as a result of which some 'spontaneous candidates' won seats in the National Assembly, the legislature began to assume greater autonomy (thus, for example, on November 26, 1988 it took the unprecedented step of vetoing a decree law of the Presidential Council), and as a result the right of interpellation was restricted. The deadline to submit an interpellation was made more restrictive, and the range of issues that could be addressed in an interpellation was limited to unlawful actions, the failure to perform legally mandated procedures, and so-called "ineffective" laws. The latter restriction was repealed in 1989. 1 In line with the separation of powers that was established as a fundamental structural framework during the regime transition, from 1990 on the scope of persons who could be interpellated was restricted to the Council of Ministers (the cabinet), its members and the chief prosecutor. In addition to requiring that interpellations be submitted four days in advance, the 1994 amendment also specified detailed rules on how these could be presented in the plenary; the timeframe for interpellations; the rules concerning the substitution of officials who were being interpellated; and the committee reports drafted in response to rejected answers to interpellations. The 1997 amendment also extended the right to present at least some of the interpellations they had drafted to representatives who were not affiliated with a parliamentary faction (yet at the same time it slightly shortened the time allotted to answers and rejoinders, which, we note, should have no effect on the underlying topic distribution).
The most recent (restrictive) amendments of the right to interpellation in our period under investigation occurred in 2010 and 2012. At the end of the former year, the right to interpellate the chief prosecutor was removed, and in the latter year the guarantee of the non-partisan representatives' right of interpellation was struck from the rules (it was later eased). 2 The exclusion of the chief prosecutor shrank the opposition's toolkit to highlight cases of government corruption. At the same time, interpellations were ever more frequently used by government MPs to praise policy decisions or echo talking points from communication campaigns against various "enemies" of the state, from migrants to "Brussels".
The theoretical takeaway from this brief comparison of the usage of interpellations over the three regimes in question points toward the key role of limitations on this institution as a means of effectively channelling popular pressures to the political agenda. In Socialist autocracy, informal rules allowed for the arbitrary exclusion of certain agenda items. Restrictions on the number of parliamentary questions per week may contribute to a less diverse agenda, as secondary topics are neglected altogether. In the illiberal hybrid regime, the strategic misuse of interpellations for the purposes of government campaigns creates an issue centralisation (and, consequently, higher punctuation as these priorities are changed) atypical of liberal democracies.
Turning to legislation, in the era of Socialist autocracy MPs had a minimal influence on law-making. Only six laws adopted between 1949 and 1990 were introduced by MPs who were not members of the cabinet (and three of those were introduced in the transition year of 1990). The Council of Ministers, the Presidential Council, the MPs, the committees (from 1972) and party groups (from 1989) had the right to initiate laws (Kukorelli 1989: 86). Yet the National Assembly adopted relatively few classical "laws"; the legislative function was overtaken by the Presidential Council, which assumed legislative powers in-between formal sessions of parliament.
1 For the texts of the Standing orders before 1990, see: https://library.hungaricana.hu/hu/collection/ogyk_hazszabaly/. Last accessed: 20 May, 2019.
This practice was rooted in the setup of the political system of the Soviet Union (Skilling 1952: 210). Most members of "elected" bodies held no real power, which was wielded by executive committees (such as the Politburo) or presidential councils (Little 1980: 235). The socialist Constitution (Act 20 of 1949) declared that during the breaks between plenary sessions, the Presidential Council elected from the MPs had all rights of the National Assembly (except for modifying the Constitution). The Presidential Council was not allowed to make laws but was very much entrusted with adopting so-called decree laws, which had the same effect (but had to be ex post approved by the plenary session, which almost always happened in a unanimous fashion; Romsics 2010: 338).
This situation changed in the years before the regime change. The 10th Law of 1987 forbade the Presidential Council from adopting decree laws in policy areas which had to be regulated by the National Assembly; thus the number of adopted "regular" laws increased. Furthermore, for the first time since the late 1940s, the 1985-1990 legislature featured "spontaneously" elected MPs (Kukorelli 1988: 10). This was a sign of a disintegrating state party system and the advent of democratic practices in parliament (Bihari 2005: 392-393).
A more diverse party system ushered in a legislative agenda which was now unfettered from its previous formal and informal limitations. For 20 years of liberal democracy, this became the new normal, only for illiberalism to restrict opposition rights and almost completely eradicate adopted laws which had been originally proposed by opposition MPs. In sum, the roots of the hybrid anomaly are situated in the managed nature and hollowing out of post-2010 democracy in Hungary.
Data and methods
The quantitative analysis of this article relies on data on Hungarian policy agendas covering the period between 1945 and 2018. The Hungarian case offers a unique extension of the core research direction of the Comparative Agendas Project in multiple dimensions. The 73-year period covered in this study is among the longest in the extant literature. We also compare three distinct regime types (Socialist autocracy, liberal democracy and an illiberal hybrid regime), a unique combination. This setup also allows for leveraging regime variety in a distinctive way. It is also the first such study for the Central Eastern European region which, at the same time, makes use of a diverse collection of data sources.
While in and of themselves the results of this analysis refer to a single case, the external validity of the conclusions is reinforced by the fact that they tie into a growing body of research on policy-making in non-democratic polities. We also note that the descriptive statistics presented (including the punctuation metrics introduced below) by no means constitute a causal analysis. Yet an intra-case comparison of political regimes and time periods, as well as inter-case comparisons with studies employing similar metrics, do contribute to a deeper understanding of policy dynamics beyond liberal democracies.
With these qualifications in mind, in this article we rely on conventional statistical means of capturing punctuations in policy time series. This standard approach is based on the density function of year-on-year or electoral cycle-on-cycle changes in the issue attention allocated to different policy topics (the Comparative Agendas Project codebook lists 21 of those, from education to defence; we return to this point below). 3 "Fat tails", at either or both ends of these density functions, are generally associated with punctuated equilibrium. In statistics, these deviations from the normal distribution are captured by a so-called kurtosis indicator, with L-kurtosis (LK) being widely considered to be the best measure available 4 (Breunig 2006: 20).
We test these propositions in the context of Hungarian politics and public policy following World War II in order to provide more geographical, political and socioeconomic breadth to extant research. Data were collected under the aegis of the Hungarian Comparative Agendas Project (Boda and Sebők 2019). In the end, four data series were selected, covering a wide range of political processes 5 and with each of these datasets representing a specific phase of the friction process as postulated by the hypotheses. Table 1 presents the data sources.
Our datasets have been classified based on the conventions of the Comparative Agendas literature, which focuses on the position of these venues in the policy cycle. This means all policy agendas data can be defined as part of the input, process or output phase of the policy cycle (see Baumgartner et al. 2009). We can investigate data on interpellations, laws and decree laws, and budget proposals and final accounts (or outlays; see Table 1). The allocation of these sources is well established in the literature; here, we only highlight one potentially controversial classification. The output phase of the policy process is associated with budgets. Although state budgets are generally adopted as laws, there are special requirements and special rules to adopt them. The cost of their adoption is significantly higher than that of "regular" laws (Baumgartner et al. 2009: 610). Some budget laws also reflect policy outcomes (as opposed to policy outputs, which is what laws are in this literature). Besides budget proposals adopted by the legislature prior to the fiscal year in question, final accounts provide an overview of actual budget outlays, which explains their separation from laws, which do not have such retrospective application.
3 For more information on the CAP policy topic codebook, see Bevan (2019).
4 As Breunig and Jones (2010: 107) explain, the main disadvantage of using kurtosis in our field of research is that it is very sensitive to extreme values. At the same time, LK is less sensitive and can be reliably computed for a small number of cases. The LK score is computed as the fourth L-moment ratio of a distribution and ranges from 0 to 1 (a higher number means a higher level of kurtosis).
5 The availability of data sources was shaped by the unfolding work plan of the Hungarian CAP project. Beyond the four data sources featured in the paper, media data, in the form of newspaper front pages, had also been considered. While work is in progress on the coding of 74 years of media data, this version of the paper cannot yet rely on this source.
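To make the LK measure described in note 4 concrete, the following is a minimal sketch (in Python; the function and variable names are ours, not from the paper) of computing sample L-kurtosis via the standard probability-weighted-moment estimator:

```python
import numpy as np

def l_kurtosis(x):
    """Sample L-kurtosis (tau_4): the ratio of the 4th to the 2nd L-moment.

    For a normal distribution tau_4 is roughly 0.123; heavier tails push
    the value towards 1, which is the PET signature discussed in the text.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    # probability-weighted moments b_0..b_3
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3)
                / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    # L-moments from the PWMs
    l2 = 2 * b1 - b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l4 / l2

# Gaussian samples should give a value close to the 0.12 benchmark used below
print(l_kurtosis(np.random.normal(size=100_000)))
```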
We used these sources to calculate data on issue attention changes in each policy domain based on government cycles (four or five years, depending on the period in question 6 ). Table 2 provides an overview of the original datasets (without the exclusion of a few extreme values). 7 All four datasets were compiled in the Hungarian Comparative Agendas Project (cap.tk.hu). The number of observations for each dataset reflects the different composition and coding level of the underlying data sources. Our unit of analysis is the electoral cycle 8 and we used the proportion of interpellations and laws of the given policy domains by cycle (calculated, for example, as the macroeconomics topic share of 1994-1998 divided by that of 1990-1994, minus 1).
We set our unit of analysis in electoral cycles instead of the more widely used years due to the fact that in the socialist period only a handful of interpellations were presented in parliament per year. While in 1973 there were 18, in 1975 there was only 1, and in 1976 no vote was held on interpellations. Since one can only meaningfully calculate PET-style punctuation metrics based on bigger and more diverse counts (as for low numbers most policy topics would not yield a valid ratio), we opted to aggregate all data to electoral cycles to make them comparable. More specifically, for budgetary data we used the average proportion of budget expenditures by major topic in every electoral cycle. This makes the data comparable and resolves the methodological problems caused by inflation and the different lengths of electoral cycles (since we compare issue attention shares, no nominal data is directly used). We classified every budget and law to an electoral cycle based on the exact date of when they were enacted. Interpellations were assigned to the cycle when they were presented. Cycle-on-cycle change counts differ across datasets due to the unequal distribution of zero-value observation pairs (when, for a policy topic, no observation is recorded for the given electoral cycle).
6 Although the term of legislatures was theoretically set at four years from 1945 to 1985, in practice their actual length often varied. The parliaments elected in 1945 and 1947 were dissolved in midterm due to the political strategy of the Communist Party. The parliament elected in 1953 was extended because of the state of emergency related to the revolution of 1956. The parliaments elected in 1958 and 1975 were dissolved not after four but after five years. This situation was legalised in 1983, when the term of the legislatures was legally extended to five years. The four-year term was restored in 1989 by the regime change.
7 It is important to note that we excluded 3-5 extreme values for each dataset, for a total of 16 observations. The inclusion of these values would have significantly skewed our results. As for interpellations, the change in environment-related ones from the electoral cycle 1971-1975 to 1975-1980, the change in transportation and social policy-related ones from the electoral cycle 1963-1967 to 1967-1971 and the change in migration-related ones from the electoral cycle 2010-2014 to 2014-2018 were omitted. As for laws, the change in foreign trade-related ones from 1958-1963 to 1963-1967 and the changes in culture-related ones from the electoral cycle 1953-1958 to 1958-1963 and from 2006-2010 to 2010-2014 were omitted. As for budget authority, the changes in agriculture-related expenditures from the electoral cycle 1949-1953 to 1953-1958 and the changes in environment, energy and foreign trade-related expenditures from the electoral cycle 1985-1990 to 1990-1994 were omitted. Finally, for budget outlays, the changes in agriculture-related expenditures from the electoral cycle 1949-1953 to 1953-1958 and from 1958-1963 to 1963-1967, and the changes in environment, energy and foreign trade-related expenditures from the electoral cycle 1985-1990 to 1990-1994 were excluded.
8 The term electoral cycle refers to the formal periodisation of the Hungarian National Assembly (Parliament). It is a technical term and does not imply that the "elections" resulting in a new intake of MPs were free or fair.
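As an illustration of the aggregation just described, the following hypothetical snippet (Python with pandas; the column names and toy data are ours, not the project's) computes per-cycle topic shares and the cycle-on-cycle percentage changes of the kind used throughout the analysis:

```python
import pandas as pd

# Hypothetical input: one row per coded item (interpellation, law, ...)
# with its electoral cycle and CAP major topic.
df = pd.DataFrame({
    "cycle": ["1990-1994"] * 3 + ["1994-1998"] * 3,
    "topic": ["macroeconomics", "defence", "macroeconomics",
              "macroeconomics", "education", "defence"],
})

# Share of each topic within each cycle's agenda
shares = (df.groupby(["cycle", "topic"]).size()
            .groupby(level="cycle").transform(lambda s: s / s.sum())
            .unstack("topic"))

# Cycle-on-cycle change: share(1994-1998) / share(1990-1994) - 1;
# pairs where the earlier cycle has no observations yield no valid ratio
changes = shares.pct_change().stack().dropna()
print(changes)
```

Pairs with a zero or missing earlier-cycle share simply drop out, which is exactly why the N of valid change observations differs across datasets.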
The interpellations dataset contains interpellations performed in the National Assembly between 1945 and 2018 (from the 1945-1947 to the 2014-2018 electoral cycle). An interpellation is a type of oral question (also submitted in written form). It is a classical means of parliamentary supervision granted to the opposition. Interpellations can be addressed by any MP to any member of the Government concerning issues belonging to the portfolio of the given ministry (except for the Communist era, when, due to the principle of the unity of the branches of power, leaders of the judicial system could also be interpellated). No interpellations were asked during the Stalinist era of 1949-1953; thus, the changes for 1953-1958 are compared to the agenda of the 1947-1949 electoral cycle.
Our database concerning laws and decree laws covers those adopted by Parliament from the electoral cycle 1944-1945 to the electoral cycle 2014-2018 (no laws were adopted in 1944, the first cycle in question). National representative bodies (parliaments) of Soviet-occupied parts of CEE adopted many aspects of the Soviet Union's political system (Skilling 1952: 210), including a division of labour in which not all MPs participated in law-making. This task was also assigned to "executive committees" or "presidential councils" which "filled in" for parliament during the long breaks between plenary sessions (Little 1980: 235). Therefore, we included the decree laws issued by the "Presidential Council of the People's Republic" (which existed between 1949 and 1989) in our datasets besides "regular" laws. These decree laws had the same effect as laws: they could nullify or amend each other.
The data on budget proposals and outlays come from a dataset which contains information about Hungarian adopted budgets and final accounts for the period 1947-2013. Each entry was coded by an automatic text classification algorithm for policy content, along with other variables of interest. We calculated the average proportion of major policy topics in budgets by electoral cycle, and we investigate the level of change regarding these averages. It is important to mention that for some years we had a missing data problem, 9 which led us to omit them from our calculations.
In our baseline scenario, we divided the total time period covering 1945 through 2018 into three subperiods. The first one is the era of "Socialist autocracy". Although a Soviet-style constitution was only adopted in 1948/49, we also included the short preceding period of limited parliamentarism (1944-1949) in this first era, as it was hallmarked by ever-increasing Soviet influence. We also list here the transition electoral cycle between 1985 and 1990. Taken together, the years between 1945 and 1990 are characterised as "not free". The second era of our baseline scenario is the democratic post-regime change era between 1990 and 2010. Finally, we differentiate from this previous period the first two electoral cycles of the "Orbán regime" (from 2010 to 2018), which is widely regarded to be a hybrid regime as opposed to a completely free democracy.
9 Official budget data was unavailable for the following years: 1945-1946, 1950, 1952-1953, 1956-1958, 1970, 1980, 1982, 1989-1990.
Given the ambiguities surrounding regime categorisations, we also calculated scores for each of our hypotheses for alternative classifications of some electoral cycles of debated political nature. Therefore, first, we calculated separate scores for the "restricted" Socialist period covering 1949-1985 (this excludes the limited democracy of the 1940s and the transition period of the late 1980s). Second, we also analysed the post-regime change period from 1990 to 2018 as a whole (thus including the post-2010 "Orbán regime").
Third, we investigated alternative candidates for hybrid regimes, including the 1944-1949, the 1985-1990 and the 2010-2018 periods. While these periods were utterly different (for example, in 1985 only candidates of the ruling party could run in the elections, although they had to compete with each other in every electoral district), a common thread in many historical analyses regarding these years is that they were neither fully non-democratic nor fully democratic.
Empirical analysis
In this section, the empirical results regarding policy change density functions are presented for each of the four data series. Most cycle-on-cycle changes were incremental (close to zero) with a few extreme values of over 5 (an increase of 400% over 100% in the previous cycle). The distribution of the degree of changes shows similar characteristics for the four dataset types.
As for interpellations, our dataset had 6547 initial observations for the period 1945-2018. Using these data, we calculated the cycle-to-cycle percent change in issue attention, resulting in 306 observations across the 21 major topics of our policy codebook. The data, as witnessed by the shape of the histogram in Figure 2, yield support for PET. The distribution shows a high peak close to 0 and a fat tail to the left, while the right tail is long and flat.
In the case of laws and decree laws, our initial database consisted of 5963 laws and decree laws. Our calculations of the cycle-on-cycle percentage change resulted in 351 observations. The histogram is quite similar to that of the interpellations, although it has a slightly longer right tail and shows a steeper decline for values below zero (see Figure 2).
Finally, we investigated the topic distribution of 116,313 line items covering the period between 1947 and 2013. This resulted in 287 observations for cycle-on-cycle changes. The respective histogram affirms a distribution that is in line with PET. Here, we see a longer right tail of the distribution, as well as a few extreme negative changes (see Figure 2).
Our first hypothesis is related to the frequency distribution of cycle-to-cycle policy output changes in various policy domains. Also called the general punctuation hypothesis, it states that "output change distributions from human decision-making institutions dealing with complex problems will be characterised by positive kurtosis". Table 3 presents the kurtosis and LK results for the complete period between 1945 and 2018. Our data offer clear support for H1. K and LK values for each dataset are significantly above the respective value of the Gaussian distribution (LK ≈ 0.12), with even the lowest score (for interpellations) surpassing this baseline value by 100%.
The second hypothesis states that "kurtosis values will increase as one moves from input to process to output series". In our analysis, we found evidence supporting this "progressive friction" hypothesis. The element of progressive friction is evidently present in the process-to-output conversion: LK values for the output series are at least 0.06 higher than any value for the previous phase. Punctuations related to budgeting (both budget authority and outlays) are bigger than in any other dataset and significantly surpass the value associated with the Gaussian distribution. This result adds further evidence to the growing literature on the outstanding relevance of punctuated equilibrium in the field of fiscal policy.
The third hypothesis of "informational advantage" states that the level of punctuation is higher in less democratic regimes. For this hypothesis, our data shows a mixed picture. With the exception of budget outlays, punctuations are more pronounced in the socialist than in the democratic period (see Columns 2 and 3 in Table 4). Finally, the fourth hypothesis regarding the "hybrid anomaly" states that the level of punctuation is the highest during hybrid regimes. This was supported by evidence only for the output side of the policy process while the punctuation of hybrid and democratic regimes was lower than those of the communist regime for the process phase datasets.
We also calculated LK scores for alternative regime classifications (Table 5). H1 and H2 hold for these re-categorisations as well, with the exception of the two budgetary datasets in the "core socialist" period and the combination of various partly free regimes (see Column 4). For the third hypothesis of the "informational advantage", the inclusion of the post-2010 period in the democratic era does not alter the results. The LK for 1990-2018 is very close to the values for the 1990-2010 period and is lower than those for both the socialist and the core socialist period, once again with the exception of budget outlays. (Table 5 columns cover the full post-regime change period, the partly free periods (1944-1949, 1985-1990, 2010-2018) and the normal distribution benchmark; Ns are different from those presented in Table 2 because of the omission of extreme values.)
Discussion
Our primary analysis lent support to the general punctuation and the institutional friction hypotheses. In this, a long time series of Hungarian data corroborates the findings of the US and Western European literature. Generally speaking, the comparative results of Baumgartner et al. (2009: 612) are in line with ours. The application of the PET framework is validated in that all LK scores for Hungary significantly exceed the related value of the normal distribution. These values also fall into the clusters described by Baumgartner and his co-authors when it comes to the elements of the input-process-output scheme (see Table 6). These results speak to the universal relevance of H1 and H2 in policy dynamics, regardless of the key contextual factor of political regimes. We have also found evidence for the informational advantage hypothesis, once again in line with the available comparative research.
The score for Hungarian laws and decree laws is similar to the figures in the other three countries. For interpellations and budgetary data, the scores are somewhat lower. But for the process phase at least, the difference falls within the narrow range of 5-10 basis points. The difference in budgetary results may be the result of the electoral cycle-based method of accounting in the Hungarian case instead of the internationally explored year-on-year changes. All in all, the Hungarian data fit in well with existing calculations for Belgium, Denmark and the USA (see Figure 3).
A second result that is worth further discussion relates to the fourth hypothesis. It is important to note that the hybrid anomaly thesis was based on very limited evidence (a single paper on Hungarian budgeting). Another crucial element to note is that we could rely on only three years of budgetary data for the hybrid period, and that our analysis of these data was based on yearly accounting, while interpellations and laws were aggregated to electoral cycles due to a lack of sufficient yearly data. Our mixed result for H4 underlines the importance of gathering new evidence for any PET-related hypothesis from various domains (countries, eras, agenda types) until general conclusions can be reached.
Our third point of discussion directly flows from this problem, and it is related to data sources and methodology. As we have seen, the results from particular data series may not seamlessly match the progressive friction pattern (especially for budget proposals and outlays). While in some cases information on the placement of individual datasets in the friction process is subsumed in the averages for the three phases of stick-slip dynamics, their separate listing carries much methodological value. In this respect, it is important to note that the choice of data series in studies of progressive friction, both in our paper and in its precursors, is influenced by data availability. For each phase of the friction process in the study by Baumgartner et al. (2009), the number of data series used ranged between 1 and 7, which, in turn, assigns a different weight to specific data sources in any given phase average. Furthermore, even in comparable phases (e.g. in the policy process phase) the actual content of data may differ from country to country. In the aforementioned paper, besides the common core of bills and laws, the selection included hearings for the USA and government agreements for Belgium. None of these latter modules were featured for other country cases (even as hearings are similar to interpellations in that they may touch on multiple issues and are primarily a means of the opposition parties at the time). The length and exact period of time series also showed remarkable variance. This is not to say that comparative studies of PET and institutional friction do not offer a contribution to the literature. In fact, our results corroborate the "general" adjective in the general punctuation thesis: our overall results are independent of the specific institutional arrangements of Hungarian political regimes. Having said that, there is a fair chance that the inclusion of new data series in the averages of the three phases of friction would modify the specific cumulated LK scores for each phase. Our conclusion from this discussion, therefore, is that the validity of comparative results regarding the progressive friction hypothesis has to be buttressed by a transparent presentation of the underlying datasets.
Conclusion
The dual aim of this article was to conceptualise policy dynamics for nondemocratic polities and to extend the punctuated equilibrium framework to hitherto under-studied regime types, periods and regions. The Hungarian case covering the years between 1945 and 2018 offered a unique opportunity to compare policy stability and change in three distinct regimes: socialist autocracy, liberal democracy and the illiberal hybrid regime.
We set out to investigate two standard hypotheses of the literature on punctuated equilibrium and institutional friction in public policy, as well as two less widely used ones on punctuated equilibrium in different political regimes. We found empirical support for the general punctuation hypothesis for Hungary: output change distributions from human decision-making institutions, as measured by interpellations, laws and budgetary data, are characterised by positive kurtosis, a result echoed in the comparative literature.
We arrived at similar results when it comes to the progressive friction hypothesis. The informational advantage hypothesis was also supported, similarly to other papers in the comparative literature. Finally, we reached ambiguous results when it comes to the specificities of hybrid regimes: only budget-related scores were higher than those for the Socialist period.
We conclude our analysis by returning to the substantive issue at hand: the interrelated nature and potential structure of the various venues of public discourse and decision-making. The general idea of slip-stick dynamics is centred on the notion of friction, or institutional friction in the case of politics. This is clearly present when it comes to the dissimilarities between various forms of policy agendas, from media and public opinion on the one hand to policy outputs, such as budgets, on the other.
At this point, we could not rely on "input" phase data, such as media or public opinion polls. And a definite answer to this question is also elusive at the current state of research. Recent studies, such as the 7-country comparison conducted by Vliegenthart et al. (2015), yielded mixed results, which also pointed towards a key role of domestic political systems. Furthermore, case study evidence from Hungary (Boda and Patkós 2018) highlights the reverse dynamics of agenda setting by the government. In any case, the gradual build-up of empirical evidence from new settings (regimes, periods and regions) is the only way towards generalisable findings related to policy dynamics beyond liberal democracies.
Solving three types of satellite gravity gradient boundary value problems by least-squares
The principle and method for solving three types of satellite gravity gradient boundary value problems by least-squares are discussed in detail. Also, kernel function expressions of the least-squares solution of three geodetic boundary value problems with the observations {Γzz}, {Γxz, Γyz} and {Γxx − Γyy, 2Γxy} are presented. From the results of recovering the gravity field using simulated gravity gradient tensor data, we can draw the conclusion that satellite gravity gradient integral formulas derived from least-squares are valid and rigorous for recovering the gravity field.
Introduction
The GOCE (Gravity field and steady-state Ocean Circulation Explorer) satellite will be launched by the ESA (European Space Agency) in the latter half of 2007 to explore the Earth's gravity field with high accuracy and resolution [1]. Therefore, research on theories and methods for solving the gravity field using SGG (satellite gravity gradient) observations is very important.
The methods for solving the gravity field using SGG observations are usually divided into two classes: the space-wise approach and the time-wise approach. Rummel derived the solution of uniquely determined and overdetermined GBVP (geodetic boundary value problems) with multiple observations under the constant-radius approximation [2]. Rummel and Gelderen, given the relation between the disturbing potential and its second derivatives and using the same approximation, proposed the solution of the GBVP by least-squares, which is applicable on the condition that the relation between the disturbing potential function and the observations in the spectral domain depends only on the degree of their spherical harmonic expansions [3,4]. Luo derived the pseudo-solution to satellite gradiometry boundary value problems using the pseudo-solution theory of overdetermined GBVP [5]. Li derived rigorous integral formulas and corresponding rigorous kernel functions for solving the gravity disturbance, the disturbing potential, the gravity anomaly and the deflection of the vertical [6].
The principle of solving the GBVP by least-squares is studied in this paper. The integral formulas and rigorous kernel functions for three types of gravity gradiometry boundary value problems are given. The simulation and testing results prove the validity and rigor of this method.
Principle of solving GBVP by least-squares

The observation equation can be written as

$$g = DT, \qquad (1)$$

where g is the boundary function, which is scalar, vector or tensor; D is the linear differential operator; and T is the disturbing potential function. From Eq. (1) the boundary function g can be expressed as a linear function of the disturbing potential, where g and T belong to Hilbert spaces of functions on the unit sphere. Using the definition of the adjoint operator and the spherical harmonic series of a function, the action of D on each harmonic degree is described by its singular value $\lambda_l$, which is assumed to be nonzero. Based on the orthogonality of spherical harmonic functions and Eqs. (2) and (3), the potential coefficients $c_{lm}$ and the scalar coefficients $g_{lm}$ of the observation g are related, so that the observation Eq. (1) in the spectral domain is

$$g_{lm} = \lambda_l c_{lm}. \qquad (6)$$

It is supposed that there is more than one kind of observation. The spectral domain observation equations are

$$g_{lm}^{(i)} = \lambda_l^{(i)} c_{lm}, \qquad (7)$$

where i indexes the observation types. We can get the solution of the observation Eq. (7) by the least-squares principle:

$$c_{lm} = \frac{\sum_i \lambda_l^{(i)} g_{lm}^{(i)}}{\sum_i \big(\lambda_l^{(i)}\big)^2}. \qquad (8)$$
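As a numerical illustration of the least-squares principle behind Eq. (8), the following sketch (Python; assuming unit-weight observations, which is our simplification rather than necessarily the paper's full formulation) recovers a single potential coefficient from several observation types:

```python
import numpy as np

def ls_potential_coefficient(singular_values, g_lm):
    """Least-squares estimate of one potential coefficient c_lm from
    several observation types with spectral equations g_i = lambda_i * c_lm.

    With unit weights the normal-equation solution is
        c_lm = sum_i(lambda_i * g_i) / sum_i(lambda_i**2).
    """
    lam = np.asarray(singular_values, dtype=float)
    g = np.asarray(g_lm, dtype=float)
    return lam @ g / (lam @ lam)

# consistency check: both observation types generated from c_lm = 2.5
lam = np.array([0.8, 1.3])
print(ls_potential_coefficient(lam, lam * 2.5))  # -> 2.5
```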
LS solutions of three types of satellite gravity gradient boundary value problems
Three types of gradiometry observations on the spherical surface at height h can be expressed as infinite series, and the relation between the coefficients of these series and the disturbing potential spherical harmonic coefficients can be expressed through the corresponding singular values; these spectral observation equations are given by Eq. (9), where R is the mean radius of the Earth. This is an overdetermined boundary value problem with three types of observations, for which it is difficult or even impossible to derive rigorous expressions of the kernel function [4]. This paper only considers the uniquely determined GBVP corresponding to each of the three observations. We can derive the disturbing potential integral expressions of the three observations on the boundary surface from Eq. (9), where $R_s$ is the radius of the satellite reference orbit surface; $d\sigma_Q = \sin\theta_Q\, d\theta_Q\, d\lambda_Q$; and $\psi_{PQ}$ is the spherical distance between the computation point P and the moving point Q. The variables in the formulas are real-dimension observations. Based on the formula from Reference [7], the closed expressions of the kernel functions are obtained (Eqs. (12)-(17)). From Eqs. (12)-(17) we can see that the solutions of the three types of GBVP, which are similar to the solution of the Stokes boundary value problem, are in integral form and require known observations and their kernel functions on the integral boundary surface for the actual computation. If we had continuous observations on a full boundary surface, we could estimate the disturbing potential on the boundary surface or outside the boundary surface.
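In practice, these integral solutions are evaluated as discrete sums over a grid of observations. The following is a minimal sketch (Python) of such a Stokes-type spherical convolution; the kernel callable K(ψ) and the scale constant are placeholders for the paper's closed kernel expressions in Eqs. (12)-(17), which are not reproduced here, and `obs` is assumed to be a (nlat, nlon) grid:

```python
import numpy as np

def spherical_convolution(obs, kernel, lat_p, lon_p, lats, lons, scale=1.0):
    """Discretized Stokes-type integral
        T(P) = scale * sum_Q K(psi_PQ) * g(Q) * sin(theta_Q) * dtheta * dlambda,
    evaluated on a regular (lat, lon) grid of observations `obs`.
    `kernel` is a callable K(psi); sin(colatitude) equals cos(latitude).
    """
    dtheta = np.deg2rad(abs(lats[1] - lats[0]))
    dlam = np.deg2rad(abs(lons[1] - lons[0]))
    lat_q, lon_q = np.meshgrid(np.deg2rad(lats), np.deg2rad(lons), indexing="ij")
    p_lat, p_lon = np.deg2rad(lat_p), np.deg2rad(lon_p)
    # spherical distance psi_PQ via the spherical law of cosines
    cos_psi = (np.sin(p_lat) * np.sin(lat_q)
               + np.cos(p_lat) * np.cos(lat_q) * np.cos(p_lon - lon_q))
    psi = np.arccos(np.clip(cos_psi, -1.0, 1.0))
    return scale * np.sum(kernel(psi) * obs * np.cos(lat_q)) * dtheta * dlam
```

With a finer grid (or interpolated observations, as discussed below), the sum approaches the continuous integral and the discretization error shrinks.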
Data simulation
The disturbing gravity gradient tensor in the local north-oriented coordinate system (the X-axis directed north, the Y-axis west and the Z-axis radially outwards), given as grid point values with a resolution of 1° × 1° on the spherical surface at a height of 250 km, is simulated with the spherical harmonic synthesis method [5]. The gravity field model for the simulation is EGM96 with the maximum degree 300, which is regarded as the real gravity field. In order to make the simulated observation data more realistic, observation noise has also been simulated. For simplicity, the different components of the simulated gravity gradient tensor are regarded as having the same accuracy and are superimposed with zero-mean white noise with a standard deviation of 3×10⁻³ E. The accuracy with which the diagonal components of the gravity gradient tensor satisfy Laplace's equation is summarized in Table 1. In Table 1, the non-noised disturbing gravity gradient tensor simulated with spherical harmonic synthesis satisfies Laplace's equation with an accuracy of 10⁻¹² E, which can be ignored with respect to the observation accuracy of 10⁻³ to 10⁻⁴ E of the gradiometer. The simulated noised disturbing gravity gradient tensor satisfies Laplace's equation with an accuracy of 10⁻³ E, which is consistent with the order of the simulated noise.
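The noise simulation and the Laplace-equation check described above can be sketched as follows (Python; the gradient grids here are random placeholders standing in for the spherical-harmonic-synthesis output):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 3e-3  # noise standard deviation in Eotvos (E), as in the text

# gamma_xx, gamma_yy, gamma_zz: placeholder diagonal tensor grids
shape = (180, 360)
gamma_xx = rng.normal(size=shape)
gamma_yy = rng.normal(size=shape)
gamma_zz = -(gamma_xx + gamma_yy)  # noise-free tensor satisfies Laplace's eq.

# superimpose independent zero-mean white noise on each component
noisy = [g + rng.normal(0.0, sigma, shape) for g in (gamma_xx, gamma_yy, gamma_zz)]
trace = sum(noisy)  # zero up to the injected noise

print("RMS of trace after adding noise:", np.sqrt(np.mean(trace**2)))
# -> about sigma * sqrt(3), i.e. on the order of 10^-3 E, the order
#    reported for the noised tensor in Table 1
```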
Results and analysis
The 1° × 1° grid point disturbing potential values are estimated in two test areas in East and West China. The spherical surface height is 250 km. The results are given respectively with and without interpolation (cubic spline interpolation is used in this paper) in the spherical surface integration, and the statistical results of the differences in the disturbing potential between the results from simulated data and EGM96 are given in Table 2.
In Table 2, the maximum absolute value of the differences in the disturbing potential between the results solved from the 1° × 1° non-noised observations {Γzz} and EGM96 is 0.09 m²/s². The maximum RMS is 0.053 m²/s², and the corresponding equipotential surface height transformed by Bruns' formula is 0.6 cm in East China. Although the results in the west are not as good as those in the east, their accuracies are of the same level, and the accuracy of the equipotential surface height is better than the cm level.
From Table 2, it is clear that the accuracy of the results with interpolation in the spherical surface integration is greatly improved, being better than that of direct integration by 1-2 orders of magnitude. Theoretically, if the non-noised SGG observations covered the spherical surface fully and continuously, the estimated results should be consistent with those from the model. But in reality, we cannot obtain globally continuous observations, so computing the spherical surface integration using discrete observations of a given resolution will cause integral discretization errors in the results. We can densify the observations using an interpolation method to reduce these errors. In this paper, cubic spline interpolation is adopted to interpolate observations in the 1° × 1° area with 1, 3, 5, and 9 interpolated points.
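A minimal sketch of this densification step (Python with SciPy; the grid extents and values are illustrative, not the paper's data) using a bicubic spline on a 1° × 1° grid:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# coarse 1-degree grid of gradient observations (placeholder values)
lats = np.arange(25.0, 46.0)   # 25..45 deg, 21 nodes
lons = np.arange(75.0, 96.0)   # 75..95 deg, 21 nodes
obs = np.random.default_rng(1).normal(size=(lats.size, lons.size))

# cubic spline in both directions (kx=ky=3), then insert 3 points per cell
spline = RectBivariateSpline(lats, lons, obs, kx=3, ky=3)
fine_lats = np.arange(25.0, 45.01, 0.25)
fine_lons = np.arange(75.0, 95.01, 0.25)
dense = spline(fine_lats, fine_lons)
print(dense.shape)  # (81, 81): densified grid for the surface integration
```

Since the spline is itself only an approximation, adding ever more interpolated points eventually trades discretization error for interpolation error, which is consistent with the saturation seen in Table 3.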
The 1° × 1° grid point disturbing potential values are estimated in the west. The RMS of the differences between these results with different numbers of interpolated points and EGM96 is summarized in Table 3. In Table 3, although the interpolation can clearly improve the accuracy of the solutions, the accuracy is not continually improved with an increasing number of interpolated points. The solutions with 3, 5, and 9 interpolated points have nearly the same accuracy, which illustrates that the interpolation itself introduces error into the computation, and more interpolation points possibly cause more interpolation error.
We also estimate the 1° × 1° grid disturbing potential values in the longitude range 75°-95° and the latitude range 25°-45° of West China from the 1° × 1° grid point noised disturbing SGG observations. The statistical results compared to the results from EGM96 are given in Table 4 and illustrated in Fig. 1. From Table 4 and Fig. 1, the differences between the results solved from the noised observations (superimposed with zero-mean white noise with a standard deviation of 3×10⁻³ E) and those from EGM96 are mostly in the range of ±0.192 m²/s², and the corresponding equipotential surface height is about 2.1 cm, which reaches the centimeter level, although the accuracy is lower than that solved from the non-noised observations.
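As a check on the quoted conversion, Bruns' formula N = T/γ relates a disturbing-potential difference to an equipotential-surface height; taking normal gravity γ ≈ 9.8 m/s² (our round value for illustration):

$$N = \frac{T}{\gamma} \approx \frac{0.192\ \mathrm{m^2/s^2}}{9.8\ \mathrm{m/s^2}} \approx 0.0196\ \mathrm{m} \approx 2\ \mathrm{cm},$$

which is consistent with the roughly 2.1 cm equipotential-surface height quoted above.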
Conclusions
The theory and method of solving the GBVP corresponding to the three types of observations {Γzz}, {Γxz, Γyz} and {Γxx − Γyy, 2Γxy} by least-squares are discussed in this paper. To validate the correctness and feasibility of the method, we simulated grid point disturbing gravity gradient tensors on the spherical surface at satellite height, and added zero-mean white noise to the observations. The testing results show that the satellite gravity gradient integral formulas derived from least-squares are valid and rigorous for recovering the gravity field. At the same time, a proper interpolation method should be applied to improve the spatial resolution of the observation data in order to reduce the integral discretization error. It should be mentioned that the simulation of the satellite orbit and altitude was not considered in this paper, and to actually achieve the goal of a centimeter-level geoid, this method should be tested further using in situ satellite gravity gradient observations.
Quantitative ultrasound (QUS) in the evaluation of liver steatosis: data reliability in different respiratory phases and body positions
Liver steatosis is the most common chronic liver disease and affects 10–24% of the general population. As the grade of disease can range from fat infiltration to steatohepatitis and cirrhosis, an early diagnosis is needed to set the most appropriate therapy. Innovative noninvasive radiological techniques have been developed through MRI and US. MRI-PDFF is the reference standard, but it is not so widely diffused due to its cost. For this reason, ultrasound tools have been validated to study liver parenchyma. The qualitative assessment of the brightness of liver parenchyma has now been supported by quantitative values of attenuation and scattering to make the analysis objective and reproducible. We aim to demonstrate the reliability of quantitative ultrasound in assessing liver fat and to confirm the inter-operator reliability in different respiratory phases. We enrolled 45 patients examined during normal breathing at rest, peak inspiration, peak expiration, and in the semi-sitting position. The highest inter-operator agreement in both attenuation and scattering parameters was achieved at peak inspiration and peak expiration, followed by the semi-sitting position. In conclusion, this technology also allows monitoring of uncompliant patients, as it grants high reliability and reproducibility in different body positions and respiratory phases.
Introduction
The most common chronic liver disease is liver steatosis, or fatty liver, which affects 10-24% of the general population [1]. Non-alcoholic fatty liver disease (NAFLD) is a chronic disease related not to alcohol consumption but to diabetes, hyperlipidemia, toxins, drugs, or genetic diseases [2,3].
NAFLD develops in the general population in two steps: the first step consists of metabolic syndrome, in which insulin resistance induces the liver parenchyma to store fat, developing liver steatosis [4,5].
The second step involves the progression from liver steatosis to steatohepatitis (NASH), characterized by inflammation and chronic damage that may evolve into liver fibrosis and end-stage liver disease [6]. Late or delayed diagnosis without any lifestyle change may increase the risk of liver fibrosis or cirrhosis in the general population, with consequent costs for the healthcare system, and even in patients after liver transplantation [7-10].
Liver biopsy has the limitation of being an invasive procedure that allows the examination of only a selected portion of parenchyma, so it has been largely abandoned in clinical practice [11-13].
Magnetic resonance imaging proton density fat fraction (MRI-PDFF) enables accurate, repeatable, and reproducible quantitative assessment of liver fat over the entire liver parenchyma, achieving high accuracy and sensitivity [18].The diagnostic power of MRI-PDFF allows the detection even of the 5% of microscopic fat, so it has a higher sensitivity to detect early, but fundamental changes in liver fat content than liver biopsies [19,20].
The major concerns about the wide diffusion of MRI-PDFF are its costs, limited availability, and patient compliance [21-23]. Besides standard protocols, radiomics has already been proposed as a useful tool in the management of several pathologies [24-26].
In more detail, the most recent studies also propose machine learning-based models to analyze liver parenchyma, but few of these have been validated in clinical practice [15,27,28].
Considering the huge diffusion and reproducibility of the liver US, ultrasound software has been enriched by tools dedicated to hepatic fat quantification [29].
Although it is well known that a high percentage of fat determines the brightness of liver parenchyma on US images, it should be underlined that this brightness is related to the scattering and to the attenuation of the ultrasound wave caused by the amount of fat [30,31].
In particular, B-mode ultrasound allows assessment of the grade of liver steatosis through the evaluation of the echogenicity of the liver compared to the renal cortex. Furthermore, the attenuation in the right lobe with diaphragm visualization and the visualization of intra-hepatic vessels are commonly used in clinical practice [32].
The qualitative assessment of the morphology or the brightness of liver parenchyma has now been supported by quantitative values obtained from tissue microstructure characterization [33] through quantitative ultrasound (QUS) techniques.
There is only limited evidence about the inter-operator reliability of fat quantification tools in clinical practice [34]. There is also a lack of evidence on their reliability in selected categories of patients, in different body positions and respiratory phases [35].
So this study aims to demonstrate the reliability of quantitative ultrasound (QUS) in assessing liver fat measurements and to confirm the inter-operator reliability across respiratory phases and different body positions, in order to follow up uncompliant patients.
Tissue attenuation imaging (TAI, Samsung Medison)
Tissue attenuation is due to the energy loss of an ultrasound wave when it passes through a tissue. Attenuation depends on tissue features and wave frequency. When the percentage of liver fat is higher, the attenuation also increases [36].
The attenuation coefficient (AC) has been calculated with several methods proposed by different vendors [37-39]. AC showed high reliability in detecting liver fat and estimating the grade of liver steatosis, compared to liver biopsy and MRI-PDFF as reference standards [40].
In our study, the AC is calculated by a parameter, tissue attenuation imaging (TAI, Samsung Medison), that indicates the slope of the ultrasound central-frequency downshift along the depth.
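The following is a simplified sketch (Python; illustrative data, and a generic approach rather than the vendor's implementation) of the idea behind a TAI-style index: estimating the slope of the echo centre-frequency downshift along depth by linear regression:

```python
import numpy as np

# depth (cm) and estimated centre frequency (MHz) of the backscattered
# echoes -- illustrative values; a real system estimates the centre
# frequency in successive depth windows along the beam
depth = np.linspace(2.0, 10.0, 9)
f_c = 4.0 - 0.12 * depth + np.random.default_rng(2).normal(0, 0.02, depth.size)

# TAI-style index: slope of the central-frequency downshift along depth
slope, intercept = np.polyfit(depth, f_c, 1)
print(f"frequency downshift slope: {slope:.3f} MHz/cm")
# a steeper (more negative) slope corresponds to stronger attenuation,
# i.e. a higher liver fat fraction
```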
Tissue scatter distribution imaging (TSI, Samsung Medison)
Backscattering refers to the ultrasound energy reflected from a tissue, and it is represented by echogenicity, or brightness. In particular, liver brightness means that backscattering has increased. The scattering of ultrasound also creates images with speckle patterns. The different patterns can be described by a statistical distribution. In particular, the Nakagami distribution correlates backscattering with the percentage of liver fat. In our work, the Nakagami distribution is calculated by a parameter, tissue scatter distribution imaging (TSI, Samsung Medison), which quantifies the concentration and the distribution of the ultrasound scatterers [41,42].
TSI has been validated through a comparison with liver biopsy and MRI-PDFF as reference standards.
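A minimal sketch (Python; a classical moment-based estimator, not Samsung's implementation) of the Nakagami shape parameter m that underlies TSI-style scatterer characterization:

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based estimate of the Nakagami shape parameter m:
        m = (E[X^2])^2 / Var(X^2)
    for envelope samples X. The value reflects the scatterer
    concentration that a TSI-style index summarizes.
    """
    x2 = np.asarray(envelope, dtype=float) ** 2
    return x2.mean() ** 2 / x2.var()

# Rayleigh envelopes (fully developed speckle) correspond to m = 1
rng = np.random.default_rng(3)
print(nakagami_m(rng.rayleigh(scale=1.0, size=200_000)))  # ~1.0
```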
Materials and methods
This is a retrospective study of a prospectively collected database, conducted at the University of Molise between November 2022 and April 2023. The patients had been referred for an abdominal ultrasound examination for other reasons at the University of Molise, Campobasso, Italy.
All patients signed an informed consent to publish their anonymous clinical data.
Inclusion criteria:
• No history of chronic liver disease.
• No habitual alcohol consumption.
Exclusion criteria:
• Chronic liver disease or alcohol addiction.
• Lack of compliance.
We studied echogenicity and composition of liver parenchyma.
For each patient, beyond the US exam, we collected a dataset of clinical data: gender, age, body mass index (BMI), complete blood count, bilirubin, alanine aminotransferase (ALT), and aspartate aminotransferase (AST) levels.
Patients were divided into three subgroups according to their body mass index: subgroup 1 includes normal-weight patients with 18.5 < BMI < 24.99 kg/m²; subgroup 2 includes overweight patients with 25 < BMI < 29.99 kg/m²; and subgroup 3 includes obese patients with BMI > 30 kg/m².
The ultrasound exam was performed on the right lobe with a Samsung RS85 Prestige, with a 1-7 MHz convex transducer (CA1-7S), and completed with quantitative ultrasound (QUS) imaging: tissue attenuation imaging (TAI) and tissue scatter distribution imaging (TSI).
The examinations were performed by two expert radiologists. A total of 10 measurements were recorded in different liver segments, in particular segments V, VI, VII, and VIII.
We included in the study the highest value of TAI and TSI found by each physician.
For each patient, a mean of 10 measurements was recorded using four different methods:
• Method 1: normal breathing at rest.
• Method 2: peak inspiration.
• Method 3: peak expiration.
• Method 4: semi-sitting position.
Patients were asked to inhale and hold their breath, to exhale, or to breathe quietly. Then, patients were asked to move to a semi-sitting position.
Operators 1 and 2 conducted the US with the same machine settings.
Patient subgroups during examination are reported in Figs. 1, 2, and 3.
Statistical analysis
Cohen's Kappa values were calculated to identify rates of inter-rater agreement between the two radiologists. Data are expressed as percentage agreement, Cohen's Kappa value, standard error, and Z. A measure of agreement below 0.0 means poor agreement, 0.00-0.20 slight agreement, 0.21-0.40 fair agreement, 0.41-0.60 moderate agreement, 0.61-0.80 substantial agreement, and > 0.80 almost perfect agreement [44,45]. Considering that measurements were performed with different methods (normal breathing, peak inspiration, peak expiration, and semi-sitting position), one-way ANOVA with Bonferroni correction was performed separately for both experts. Statistical significance was set at p ≤ 0.05. Statistical analyses were performed with STATA SE 16.1 (StataCorp LLC) software.

Results

Most of the patients were male (27/45, 60%). In particular, most of the obese patients were male (10/15, 75%).
The mean TAI values recorded by the operators during normal breathing were 0.718 ± 0.026 (operator 1) and 0.755 ± 0.236 (operator 2), and the mean TSI values were 92.654 ± 1.465 (operator 1) and 92.579 ± 2.549 (operator 2). Both the TAI and TSI values are expressions of a population with a mean steatosis grade of 1. Inter-operator agreement in this phase was low: 15.56% for TAI and 2.22% for TSI.
During forced inspiration, the mean TAI values recorded by the operators were 0.728 ± 0.023 (operator 1) and 0.741 ± 0.0219 (operator 2), and the mean TSI values were 93.67 ± 1.809 (operator 1) and 94.33 ± 1.84 (operator 2). Both the TAI and TSI values are coherent and correspond to steatosis grade 1. In this respiratory phase, the inter-operator agreement was higher for both the TAI and TSI measurements: 48.89% and 37.78%, respectively.
Inter-operator agreement calculated with Cohen's K test showed the lowest K values for the TAI and TSI measurements during quiet breathing (K = 0.137 and K = 0.0115, respectively).
The ANOVA test showed a statistically significant difference among operators only in the TSI measurements; therefore, quiet breathing strongly influenced the TSI value rather than the TAI.
Results of statistical analysis are summarized in Tables 2 and 3.
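For reference, the agreement and ANOVA computations described in the Statistical analysis section can be sketched as follows (Python with scikit-learn and SciPy rather than STATA; the data here are simulated placeholders, not the study's measurements):

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)

# steatosis grades (0-3) assigned by the two readers -- illustrative data
grades_op1 = rng.integers(0, 4, size=45)
grades_op2 = np.where(rng.random(45) < 0.8, grades_op1, rng.integers(0, 4, 45))
kappa = cohen_kappa_score(grades_op1, grades_op2)
print(f"Cohen's kappa: {kappa:.2f}")  # > 0.80 would be 'almost perfect'

# one-way ANOVA across the four acquisition methods for one reader's TAI values
tai = [rng.normal(0.72, 0.03, 45) for _ in range(4)]  # rest/insp/exp/semi-sitting
f_stat, p = stats.f_oneway(*tai)
print(f"ANOVA: F={f_stat:.2f}, p={p:.3f}")
# Bonferroni correction: multiply each follow-up pairwise p-value by the
# number of comparisons (6 for four methods) before testing at p <= 0.05
```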
Discussion
This study aimed to evaluate the impact of the breathing cycle, chest movement, and body position on TAI and TSI values, in order to validate the reliability of QUS. This validation allows operators to monitor even hospitalized or uncompliant patients in the most appropriate position or respiratory phase, overcoming the limitations due to several artifacts. As for all new technologies, the reliability and reproducibility of QUS have not been completely tested. After statistical analysis, the ultrasound quantification of liver fat was confirmed to be reliable in normal-weight, overweight, and obese patients. The evaluation of attenuation and scattering achieved a high agreement among the operators, especially during peak inspiration and peak expiration, but also a satisfying agreement in the semi-sitting position. The study conducted by Sendur et al. reported that inspiration and expiration do not significantly influence the results in patients with BMI > 25 kg/m², while a significant difference in the attenuation coefficient was found in the BMI < 25 kg/m² subgroup. Our study confirmed the reliability across respiratory phases in overweight patients (BMI > 25 kg/m²), and also among the operators. In addition, there was no significant difference among the operators or among different respiratory phases in patients with BMI < 25 kg/m². Also, in the BMI < 25 kg/m² subgroup, there was a stronger agreement at peak inspiration and expiration than during quiet breathing. Concerning the different methods evaluated in our study, the TAI measurement did not show any statistically significant difference among the respiratory phases, while TSI did, due to its higher variability during quiet breathing. This was probably also due to a higher sensitivity of the method to thorax movement.
Affordability, portability, and wide availability are some of the many advantages of ultrasound in clinical practice in comparison with other imaging techniques. Therefore, US tools could be efficiently used to diagnose and follow up liver steatosis. Comparing our data with the reference standard reported by Sendur et al. [34,46], in our dataset there was neither underestimation nor overestimation of the steatosis grade attributed to the patients among the different methods.
Anyway, in this study we focused on demonstrating the inter-operator reliability of TAI and TSI measurements in a stratified population composed of normal-weight, overweight, and obese patients.
The main limitation of our study is the unavailability of MRI-PDFF data against which to compare the results. To overcome this limitation, we introduced a control group of normal-weight patients not affected by liver steatosis. TAI and TSI measurements, in fact, increased gradually from the normal-weight group to the obese group.
The importance of detecting liver steatosis is already established, as it affects 90% of obese patients [47].
Because the degree of injury can range from fat infiltration to cirrhosis, therapy must be initiated early [48,49] and monitored so that patients lose fat mass rather than muscle mass [50][51][52].
Nowadays, the most effective treatment is bariatric surgery in young patients [53], performed to avoid the development of metabolic syndrome and liver failure. The high risk of liver steatosis after liver transplantation should also be underlined [54,55]; QUS may therefore represent a safe and efficient tool to monitor the results of bariatric surgery or the health of the liver graft.
Although bariatric surgery can be a challenging surgical procedure and may be considered an invasive treatment, especially for young people, the advent of minimally invasive surgery has changed surgical scenarios, allowing faster recovery, lower blood loss, and a lower risk of major complications [56][57][58][59][60][61][62][63][64].
Thanks to the low risk of complications, several studies are now introducing a combined treatment in which bariatric surgery precedes liver transplantation, as NAFLD is a metabolic condition that may continue to damage the graft [65][66][67].
The combined treatment can be helpful in adult or elderly patients, although adequate physical performance is required, and further studies are needed to standardize the procedure [68,69].
In these groups of patients who have undergone bariatric surgery or liver transplantation, follow-up to monitor liver fat is of outstanding importance, and the percentage of liver fat must be quantified to avoid suboptimal treatments [70].
In addition, as liver fibrosis can benefit from weight loss, bariatric surgery is starting to be considered in compensated patients as well [71].
Future investigation will focus on the implementation and validation of the shear wave parameter to evaluate and monitor liver fibrosis.
Future studies will concern the evaluation of attenuation and scattering in a prospective cohort of patients undergoing bariatric surgery and weight loss.
Fig. 1. TAI and TSI measurement in a patient with BMI < 25 kg/m². QUS shows no evidence of liver steatosis (Grade 0).
Fig. 2. TAI and TSI measurement in a patient with BMI 25-30 kg/m². QUS shows evidence of mild liver steatosis (Grade 2).
Fig. 3. TAI and TSI measurement in a patient with BMI > 30 kg/m². QUS shows evidence of severe steatosis (Grade 3).
Table 1. Reference standard for steatosis grade quantification.
Table 2. The mean TAI and TSI values obtained by each operator and the agreement between the measurements (Cohen's kappa test).
Identification and Characterization of Ceratocystis fimbriata Causing Lethal Wilt on the Lansium Tree in Indonesia
Bark canker, wood discoloration, and wilting of the duku tree (Lansium domesticum) along the watershed of the Komering River, South Sumatra Province, Indonesia first appeared in 2013. The incidence of tree mortality was 100% within 3 years in badly infected orchards. A Ceratocystis species was consistently isolated from the diseased tissue and identified by morphological and sequence analyses of the internal transcribed spacer (ITS) and β-tubulin regions. Pathogenicity tests were conducted and Koch's postulates were confirmed. The fungus was also pathogenic on Acacia mangium, but was less pathogenic on mango. Partial flooding was unfavourable for disease development. The two described isolates (WRC and WBC) showed minor variation in morphology and DNA sequences, but the former was more pathogenic on both duku and acacia. The ITS phylogenies grouped the most pathogenic isolate (WRC), which caused wilting of the duku tree, within the aggressive and widely distributed ITS5 haplotype of C. fimbriata.
The duku (Lansium domesticum Corr.), also known as the langsat and the kokosan, is a tropical lowland fruit tree native to western Southeast Asia, from Borneo in the east (Indonesia) to peninsular Thailand in the west. It occurs wild and cultivated in its native countries and is one of the most widely cultivated fruits (Techavuthiporn, 2018; Yaacob and Bamroongrugsa, 1991). Duku is among the most popular local fruits in Indonesia. In 2017, the total number of harvested duku trees in Indonesia was 2.4 million, with a total yield of 138.4 metric tons (Badan Pusat Statistik-Statistics Indonesia, 2018). The most famous cultivars are grown in South Sumatra (duku Palembang and duku Komering), prized for their sweet flavour combined with a subacid taste and their few seeds, or even seedlessness. In South Sumatra, duku is mainly grown as a backyard or garden tree in combination with other native fruit trees along the watersheds of the Musi, Komering, Ogan, Lematang, and Rawas rivers.
Lethal disease has rarely been evident on duku trees growing in the wild or in cultivated orchards. Anthracnose caused by Colletotrichum gloeosporioides, appearing as brownish spots on the fruit bunch and often resulting in premature fruit drop and post-harvest losses, is commonly observed throughout the tropics (Yaacob and Bamroongrugsa, 1991). Corky bark disease, which makes the bark become rough and corky and flake off, often resulting in little to no fruit production, has been reported on duku in the tropical USA (Keith et al., 2013; Whitman, 1980). In Hawaii, a corky bark canker is associated with an ascomycete fungus, Dolabra nepheliae, and with insect larvae of Araecerus sp. (Coleoptera: Anthribidae) and Corticeus sp. (Coleoptera: Tenebrionidae) feeding under the loosened bark (Keith et al., 2013).
During early January 2014, massive mortality of duku trees along the watershed of the Komering River in Ogan Komering Ulu (OKU) District was reported by most local and some national newspapers. In total, more than 2,000 trees of the most popular cultivar, duku Komering, died. The symptoms first appeared during the early rainy season of October 2013. Most of the trees that died had been predisposed by partial flooding to a depth of about 20 cm for about one month, from the end of December 2013 to January 2014. However, some affected trees were found growing on non-flooded sites, indicating an infectious disease. In this study, we describe a new bark canker and wilting associated with massive mortality of duku trees in Indonesia, present the morphological and molecular identification of the pathogen, and describe the pathogenicity of the causal fungus on duku trees and other hosts. Disease progress and spread over 5 years are also discussed.
Materials and Methods
Disease incidence and isolation of the causal agent. Incidence of diseased trees was assessed in 2014 and 2017 at eight duku orchards in OKU District of South Sumatra. In each orchard, five 10 × 10 m plots starting from the centre of the diseased trees were selected. Trees were recorded as infected if any part of the shoot or stem showed disease symptoms. Twenty diseased duku trees were randomly selected from the affected orchards. Sections of discolored wood from the stem were cut, wrapped in a paper towel, and transported to the laboratory for examination. Isolation of the fungal pathogen was performed from discolored wood that had been surface-sterilized with 70% ethanol for 30 s and 1% NaOCl for 2 min. Small sections (5 × 5 mm) from the margin of discoloration were placed on malt extract agar (MEA) amended with 50 µg/ml streptomycin in Petri dishes. Another subset of surface-sterilized wood sections was wrapped between carrot slices to bait for Ceratocystis spp. (Brito et al., 2019; Moller and DeVay, 1968). Baiting was also performed by inserting diseased tissue into freshly harvested cacao pods and cucumber fruit in an attempt to isolate Phytophthora.
Initial identification and cultural characteristics. Initial identification was performed based on morphological characteristics of teleomorphs and anamorphs. Isolates were characterized from 2-week-old cultures grown on 2% MEA. One hundred measurements of each teleomorph and anamorph structure from each representative isolate were made with an Olympus microscope and an OptiLab camera system (Yogyakarta, Indonesia). The average (mean) and standard deviation (SD) of the measurements were computed and presented as mean ± SD. Morphological characteristics were compared with those of Ceratocystis isolates from A. mangium (Tarigan et al., 2011) and sweet potato (Engelbrecht and Harrington, 2005).
DNA isolation, PCR, and sequence analyses. Two representative isolates (WRC and WBC), isolated from the diseased duku trees, were used for DNA sequence analysis. DNA was isolated from mycelia cultured at 27°C for 7 days in malt extract broth (Difco Laboratories, Sparks, MD, USA) in plastic Petri dishes. Total DNA was extracted using bead-beating technology (MO BIO Laboratories, Carlsbad, CA, USA) and the silica spin filter method (Geneaid, Taipei, Taiwan) according to the manufacturer's instructions. DNA concentration and purity were measured spectrophotometrically. The ITS1/5.8S rDNA/ITS2 (internal transcribed spacer, ITS) region of the Ceratocystis isolates was amplified by PCR using primers ITS1 (forward: 5′-TCCGTAGGTGAACCTGCGG-3′) and ITS4 (reverse: 5′-TCCTCCGCTTATTGATATGC-3′) (White et al., 1990). The β-tubulin gene (TUB) region was amplified by PCR using βt1a (forward: 5′-TTCCCCCGTCTCCACTTCTTCATG-3′) and βt1b (reverse: 5′-GACGAGATCGTTCATGTTGAACTC-3′) (Glass and Donaldson, 1995). PCR reaction mixtures consisted of 1 μl of each primer (10 mM), 15 μl of REDiant 2× PCR Master Mix (1st BASE, The Gemini, Singapore), 3 μl of DNA template (2-10 ng), and 10 μl of nuclease-free water, giving a 30 μl total reaction volume. PCR was performed in a thermal cycler (SureCycler 8800, Agilent, Santa Clara, CA, USA) with a 5-min denaturation step at 95°C, followed by 35 cycles of 30 s denaturation at 95°C, 30 s annealing at 56°C for ITS or 55°C for TUB, and 40 s extension at 72°C, followed by a final extension of 5 min at 72°C. Negative controls (without template DNA) were included in each assay. The PCR products of the ITS and TUB regions were sequenced at 1st BASE Co., Ltd. (Kuala Lumpur, Malaysia).
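For convenience, the thermocycling profile above can be captured as structured data. The following minimal Python sketch is our own illustration, not part of the study: only the temperatures, times, and cycle count come from the text, while the dictionary layout and helper function are assumptions.

```python
# Thermocycling profile from the text, encoded as plain data.
# The dict layout and the helper below are illustrative assumptions.
pcr_profile = {
    "initial_denaturation": {"temp_C": 95, "seconds": 300},
    "cycles": 35,
    "per_cycle": [
        {"step": "denaturation", "temp_C": 95, "seconds": 30},
        {"step": "annealing", "temp_C": {"ITS": 56, "TUB": 55}, "seconds": 30},
        {"step": "extension", "temp_C": 72, "seconds": 40},
    ],
    "final_extension": {"temp_C": 72, "seconds": 300},
}

def total_runtime_minutes(profile: dict) -> float:
    """Lower-bound runtime in minutes, ignoring ramp and hold times."""
    per_cycle = sum(step["seconds"] for step in profile["per_cycle"])
    total = (profile["initial_denaturation"]["seconds"]
             + profile["cycles"] * per_cycle
             + profile["final_extension"]["seconds"])
    return total / 60.0

print(f"~{total_runtime_minutes(pcr_profile):.0f} min per run")  # ~68 min
```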
Identification of the isolates was accomplished by BLAST searches of the ITS and TUB sequences against the GenBank database (http://www.ncbi.nlm.nih.gov). BLAST identification suggested that both isolates belonged to the species Ceratocystis fimbriata. Phylogenetic analyses were performed to identify the species of Ceratocystis most closely related to the Lansium isolates from Indonesia. β-tubulin datasets were generated using ex-type and ex-paratype sequences representing species in the Latin American clade (LAC) and Asian clade of the C. fimbriata species complex (Barnes et al., 2018; Fourie et al., 2015; Oliveira et al., 2015). The β-tubulin sequences (Table 1) were aligned using the online software MAFFT v.7 (Katoh et al., 2019), with the best alignment strategy selected automatically by the software. Sequence alignments were manually edited in MEGA X (Kumar et al., 2018). There were 34 aligned β-tubulin sequences in the dataset (Supplementary Fig. 1), which were used for phylogenetic tree construction by maximum parsimony (MP) analysis in PAUP 4.0b10 (Swofford, 2002). To determine the relatedness of the isolates from duku to known C. fimbriata populations, the ITS sequences were manually aligned with known ITS haplotypes as designated by Harrington et al. (2014) (Supplementary Fig. 2) and phylogenetic analyses were performed. Representative sequences of ITS haplotypes of C. fimbriata as designated by Harrington et al. (2014), together with the ITS sequences of accessions KF878326, KF650948, AM712445, AM292204, MF033455, EU588656, and KC261853, which most closely matched the isolates from duku, were used in the analyses. C. variospora (accession AF395683) was used as the outgroup taxon. There were 35 ITS sequences in the dataset (Table 1); they were initially aligned using MAFFT v.7 (Katoh et al., 2019) and then manually adjusted and trimmed in MEGA X (Kumar et al., 2018) (Supplementary Fig. 3). The relationships between the ITS sequences of the isolates from L. domesticum and other representative genotypes of C. fimbriata sensu stricto (Harrington et al., 2014; Oliveira et al., 2015) were analysed using genetic distance matrices and the unweighted pair group method with arithmetic means (UPGMA), with 1,000 bootstrap replications, in PAUP 4.0b10 (Swofford, 2002).
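As an illustration of the distance-based clustering step described above, the following minimal Biopython sketch builds a UPGMA dendrogram from an ITS alignment. The study itself used PAUP 4.0b10, so this is an assumed substitute pipeline, and "its_alignment.fasta" is a hypothetical file name for the aligned sequences.

```python
# A minimal UPGMA sketch with Biopython (an illustrative substitute for
# the PAUP analysis used in the study; not the authors' actual pipeline).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: the manually adjusted and trimmed ITS alignment.
alignment = AlignIO.read("its_alignment.fasta", "fasta")

# Pairwise distances from simple sequence identity, then UPGMA clustering.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().upgma(distance_matrix)

# Quick text rendering of the dendrogram (branch support would still
# require a separate bootstrap procedure, as done in the paper).
Phylo.draw_ascii(tree)
```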
Pathogenicity tests. The two isolates identified using DNA sequence data were used to test for pathogenicity. Pathogenicity tests were conducted on 1-year-old duku (Lansium domesticum var. domesticum) seedlings grown in a partially flooded and in a non-flooded nursery. Seedlings were grown in 20-cm-diameter plastic pots containing a mixture of topsoil and compost under a 25% shading net. The pots in the flooded nursery were placed in a tray filled with tap water, which was maintained at a depth of 2-3 cm. Pathogenicity was also tested on 3-month-old acacia (A. mangium) and 6-month-old mango (Mangifera indica cv. Arumanis) seedlings. Preliminary tests showed that stem inoculations with a mycelial plug were ineffective unless the bark was wounded; therefore, wound inoculation was used throughout the experiments. Wounds were made by puncturing three points on the bark to a depth of 3 mm using a sterile 28-gauge needle, and a 2 × 2 mm agar plug taken from an actively growing colony on 2% MEA was placed in each wound with the mycelium facing downward. This was covered with a section (10 × 10 mm) of wetted tissue paper and wrapped with clear tape to reduce contamination and desiccation. The inoculum, along with the wrapping, was removed 3 days post-inoculation. Each isolate was inoculated into 10 seedlings in each of the flooded and non-flooded groups. For uninoculated controls, wounded bark was wrapped with sterile MEA plugs. The whole experiment was repeated twice, and the data were pooled after verifying variance homogeneity using the Levene test.
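The variance-homogeneity check mentioned above is straightforward to reproduce; the following minimal sketch uses SciPy's levene test with hypothetical lesion-length data (the values are illustrative placeholders, not the study's measurements).

```python
# Levene's test for homogeneity of variances before pooling two
# experimental runs; the data values below are illustrative only.
from scipy.stats import levene

lesion_mm_run1 = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8]  # experiment 1
lesion_mm_run2 = [45.1, 50.9, 40.6, 58.4, 49.0, 54.2]  # experiment 2

stat, p = levene(lesion_mm_run1, lesion_mm_run2)

# A non-significant result (e.g., p > 0.05) supports pooling the runs,
# as was done in the study before comparing isolates.
print(f"Levene W = {stat:.3f}, p = {p:.3f}")
```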
Disease severity was assessed 20 days post-inoculation based on the length of wood discoloration. Sections were cut from the margins of lesions, surface-sterilized, and plated on MEA or inserted between carrot slices to re-isolate the inoculated fungus and complete Koch's postulates. Fungal identity was verified by colony, anamorph, and teleomorph morphology.
Results
Field observations and symptom development. Diseased trees were characterized by wilting of some twigs or branches, followed by defoliation and dieback. In most cases, total plant wilt or death was observed within 6 months of the first appearance of wilt (Fig. 1A and B). Bark canker was eventually found on heavily infected trunks or dead trees (Fig. 1D). Scraping the bark down to the wood along the wilted side of the trunk up to the branch revealed extensive areas of discolored tissue (Fig. 1E and F). The discolored wood typically had a streaked appearance, turning a uniform dark brown with age, and could be found beneath the outermost layers of sapwood (Fig. 1E); in some cases, discoloration extended to the heartwood (Fig. 1F). All diseased trees had been attacked by squirrels (Fig. 1C), and lesions appeared to originate from beetle entry/exit holes (Fig. 1E) surrounding the peeled-off bark, indicating the involvement of a wound pathogen. The disease was observed along the watershed of the Komering River, including Lubuk Batang (OKU District) and Rasuan (OKU Timur District), all in South Sumatra Province of Sumatra. Affected trees ranged from young (<5 years) to old (>50 years) in age. Disease incidence and severity were highest in Lubuk Batang Lama, where the disease first appeared. Disease progress, in terms of both incidence and severity, was fast, reaching 100% of trees in badly infected orchards (Table 2). In the 2019 field observation, the disease was found to have sporadically killed duku trees in Ogan Komering Ulu Timur (OKUT) District (within 100 km of the disease origin). Squirrel attacks were not found on the recently infected trees. The disease was not found in other duku orchards of South Sumatra, in the OKI, PALI, and Muara Enim districts, and there was no sign of squirrel scratches in those disease-free orchards.
Culture characteristics and morphology. Fungi typical of the genus Ceratocystis were consistently isolated from direct plating of diseased wood onto both MEA and carrot slices. Colonisation of diseased wood by Phytophthora was not detected by baiting with cacao pods and cucumber fruit. The Ceratocystis isolates from L. domesticum trees were typical of Ceratocystis spp. in the C. fimbriata sensu lato species complex, having characteristic olive-green colonies and the typical banana-fruit odour. They had globose to sub-globose ascomata with long necks and typical divergent ostiolar hyphae at their tips (Fig. 2). Teleomorph and anamorph structures were produced within 2 weeks in MEA cultures. Two isolates (WRC and WBC) were described; both had ascospores (4-7 × 3-5 µm), cylindrical conidia (14-25 × 4-5 µm), and aleuroconidia (11-16 × 7-11 µm) within the size range of those of the C. fimbriata sensu stricto neotype BPI 595863 (Engelbrecht and Harrington, 2005). Both isolates produced barrel-shaped (doliform) conidia (8-10 × 6-8 µm) in chains (Fig. 2).
Fig. 3. Phylogenetic tree generated from maximum parsimony analysis of the β-tubulin sequences showing the relationship between Ceratocystis fimbriata from the Lansium tree in Indonesia (marked in bold) and other species in the Latin American and Asian clades of the C. fimbriata species complex. The strain numbers, host genera, countries of origin, and species are given for each isolate. Species names considered to be synonyms of C. fimbriata sensu stricto are in parentheses (Harrington et al., 2014; Oliveira et al., 2015). C. variospora was used as the outgroup taxon. Bootstrap values greater than 50%, obtained after a bootstrap test with 1,000 replications, are indicated on the appropriate nodes.
Sequence analyses. The WRC and WBC isolates differed at two bases of the ITS sequence (99.6% similarity) but had 100% similarity in the TUB sequence. BLAST searches of the ITS region of WRC (MT229127) and WBC (MT229128) matched both sequences to GenBank deposits for C. fimbriata with 100% similarity and query coverage. A similar BLAST result was obtained with the TUB sequences (MW013766 and MW013767 for WRC and WBC, respectively), confirming the assignment to C. fimbriata with 100% similarity and query coverage. MP analysis of the β-tubulin region resulted in a single most parsimonious tree of 84 steps (Fig. 3), with a homoplasy index of 0.036, consistency index of 0.964, rescaled consistency index of 0.979, and retention index of 0.944. The Ceratocystis isolates from Lansium in Indonesia reside in the LAC of C. fimbriata sensu lato and cluster phylogenetically close to the ex-type and ex-paratype of C. manginecans and C. fimbriata. C. manginecans is considered a synonym of, or conspecific with, C. fimbriata sensu stricto (Harrington et al., 2014; Oliveira et al., 2015).
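For illustration, the pairwise similarity figures quoted above (e.g., the two-base ITS difference between WRC and WBC) can be computed directly from an aligned pair of sequences. The sketch below uses short placeholder strings, not the deposited GenBank records.

```python
# Percent identity over an ungapped, pre-aligned pair of sequences.
# The sequences below are short placeholders for illustration only.
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

its_wrc = "ACGTACGTACGTACGTACGT"  # placeholder for the WRC ITS region
its_wbc = "ACGTACGAACGTACGTACGA"  # placeholder differing at two bases

print(f"{percent_identity(its_wrc, its_wbc):.1f}% identity")  # 90.0% here
```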
Manual alignment of the ITS sequences with previously described ITS genotypes (Harrington et al., 2014) grouped the isolates into the ITS5 and ITS6z haplotypes of C. fimbriata for WRC and WBC, respectively. WRC showed 100% similarity with other ITS5 haplotype isolates of C. fimbriata from tea tree (KF650948), taro (AM712445), and pomegranate (AM292204) in China; from eucalyptus (KF878326) in Zimbabwe; from acacia (MF033455) in Vietnam; and from acacia (EU588656) in Indonesia. WBC had 100% similarity with a member of the ITS6z haplotype of C. fimbriata isolated from Hypocryphalus mangiferae (KC261853) in Oman. UPGMA analysis clustered both isolates from L. domesticum within a single group consisting of both the ITS5 and ITS6 haplotypes (Fig. 4).
Fig. 4. Dendrogram generated by the unweighted pair group method with arithmetic means showing the genetic relatedness of representative internal transcribed spacer (ITS) rDNA genotypes (sequences) of C. fimbriata sensu stricto. The GenBank accession numbers, strain numbers, ITS haplotypes, host genera, and countries of origin are given for the representatives of each haplotype. Isolates from Lansium domesticum in Indonesia are marked in bold. The ITS haplotypes of C. fimbriata are numbered following the numerical designations of Harrington et al. (2014). C. variospora was used as the outgroup taxon. Bootstrap values greater than 50%, obtained after a bootstrap test with 1,000 replications, are indicated on the appropriate nodes. The scale bar indicates genetic distance.
Pathogenicity test. In the pathogenicity tests, initial symptoms appeared as water-soaked brown lesions at the wound site within 3 days after inoculation. The lesions remained small at the inoculation sites on the bark, but scraping the bark down to the wood revealed extensive areas of discolored xylem tissue extending upward and downward from the inoculated site (Fig. 5A). Upward extension of xylem discoloration from the inoculation site was more extensive (P < 0.0001) than downward extension on duku seedlings inoculated with WRC. However, no significant difference (P ≥ 0.05) between upward and downward discoloration extension was exhibited by WRC on acacia and mango, or by WBC on any of the hosts (Table 3). This discolored xylem was similar to the typical symptom of diseased trees in the field. The WRC isolate was more pathogenic on duku seedlings than WBC, as it induced significantly (P < 0.05) longer lesions and caused more (P < 0.05) plant wilt and death (Fig. 5A). Plant wilt and death were observed within 20 days post-inoculation, and the wilting incidence gradually increased thereafter. Regrowth of lateral shoots was observed on wilted plants. The control plants, inoculated with MEA, remained asymptomatic and showed only a trace of xylem discolouration (less than 5 mm in length) at the wound site (Table 3). Partial flooding of duku seedlings did not significantly (P = 0.163) affect the extension of xylem discoloration, but plant mortality caused by WRC was lower (P < 0.05) than on non-flooded seedlings (Table 3). A fungus with the same morphological characteristics was re-isolated from the diseased wood of inoculated seedlings, but not from any of the control plants.
The Ceratocystis isolates also induced xylem discolouration and wilt symptoms on inoculated A. mangium seedlings (Fig. 5B), similar to those observed on duku seedlings. Xylem discoloration on acacia developed faster than on duku and was equally extensive (P ≥ 0.05) in both upward and downward directions (Table 3). Plant wilt and death were observed earlier on acacia than on duku, with half of the WRC-inoculated acacia dying within 20 days post-inoculation. As on duku seedlings, the WRC isolate caused significantly (P < 0.05) longer lesions and more death on acacia and therefore proved to be more pathogenic than WBC (Table 3). The Ceratocystis isolates were also pathogenic on mango (M. indica) but did not induce wilting symptoms (Fig. 5C). Mycelial plug inoculation on mango stems resulted in wood discoloration similar to the symptoms on duku and acacia (Fig. 5C), but with less expansive discoloration (Table 3).
Discussion
This study presents the first report of C. fimbriata associated with massive mortality of L. domesticum trees in South Sumatra, Indonesia. The fungus was shown to be pathogenic, producing expansive wood discoloration and causing lethal wilt on inoculated duku seedlings similar to that found in the field. A fungus with the same morphological characteristics was readily re-isolated from the diseased wood of inoculated seedlings, fulfilling Koch's postulates. Inoculation experiments on acacia seedlings showed that the pathogen was also pathogenic on this host, producing even more expansive wood discoloration, bark canker, wilting symptoms, and plant death. The Ceratocystis isolates from duku proved to be less pathogenic on mango, inducing less wood discoloration without wilting or plant death.
The ITS rDNA sequence of the most pathogenic isolate, WRC (MT229127), was identical to those of C. fimbriata isolates from tea tree (KF650948), taro (AM712445), and pomegranate (AM292204) in China; from eucalyptus (KF878326) in Zimbabwe; from acacia (MF033455) in Vietnam; and from acacia (EU588656) in Indonesia. All these isolates were confirmed to belong to the ITS5 haplotype of C. fimbriata (Harrington et al., 2014; Li et al., 2016). Some of these isolates were previously identified as C. acaciivora (Tarigan et al., 2011) and subsequently reconsidered as C. manginecans (Fourie et al., 2015), but Oliveira et al. (2015) considered those cryptic species to be synonyms of, or conspecific with, C. fimbriata sensu stricto. The ITS5 haplotype is an aggressive genotype of C. fimbriata causing lethal wilt disease of economically important plants worldwide. This genotype represented the native C. fimbriata populations in Brazilian forest plantations of Eucalyptus spp. (Harrington et al., 2014; Li et al., 2016). This ITS haplotype was also found infecting Acacia spp. and its original host, Eucalyptus spp., in China, Indonesia, South Africa, Thailand, and Uruguay (Harrington et al., 2014), as well as in Zimbabwe (Jimu et al., 2015) and Vietnam (Trang et al., 2018). Members of this Eucalyptus population of C. fimbriata caused the wilt epidemic on kiwifruit in Brazil (Ferreira et al., 2017). In China, the ITS5 genotype is considered to have been introduced from Brazil through Eucalyptus cuttings and has been reported to cause epidemics on pomegranate, loquat, and taro (Li et al., 2016), and on tea tree (Xu et al., 2019). The less pathogenic isolate, WBC, is grouped as ITS6z, a minor haplotype derived from a single haploid strain, C2759 (CBS 135868). C2759 originated from Dalbergia sissoo in Pakistan, and its single-ascospore culture yielded many different haplotypes, with ITS7b as the major genotype (Harrington et al., 2014). WBC had 100% similarity with another member of the ITS6z haplotype (type Y = KC261853), C. fimbriata isolate CMW13582, which originated from the bark beetle H. mangiferae in Oman (Naidoo et al., 2013). ITS7b is a common ITS genotype of C. fimbriata from Oman, Pakistan, and Indonesia that was previously described as C. manginecans (Harrington et al., 2014; Oliveira et al., 2015). Many isolates in Asia and Oman have mixed ITS sequences due to crosses between the ITS5, ITS6, and ITS7b genotypes (Oliveira et al., 2015). In this study, the Ceratocystis isolates from Indonesia (ITS5 and ITS6z) and members of the ITS7b haplotype (CMW13851 and CMW23634 from Oman and Pakistan, respectively) grouped into a single phylogenetic cluster of C. fimbriata sensu stricto based on the partial β-tubulin sequence. It is likely that the population of C. fimbriata causing disease on duku and acacia in Sumatra is a combination of ITS5, ITS6, and ITS7b, with ITS6z resulting from crosses among these haplotypes.
Morphological characteristics showed that the pathogen belonged to the species C. fimbriata (Engelbrecht and Harrington, 2005). Both Ceratocystis isolates from duku (WRC and WBC) had a morphology similar to that of the C. fimbriata sensu stricto neotype BPI 595863 (Engelbrecht and Harrington, 2005), except for the doliform conidia, which were absent in BPI 595863. Phylogenetic analyses based on the ITS and β-tubulin regions showed conclusively that the Ceratocystis isolates causing bark canker and lethal wilt on the duku tree in Indonesia are C. fimbriata sensu stricto. There were two ITS genotypes of C. fimbriata associated with the disease on the Lansium tree in Indonesia: one consistent with that found in Oman and Pakistan on the mango bark beetle and Dalbergia (and other hosts), and a second found in China, Indonesia, Vietnam, and Brazil on various hosts, including acacia.
C. fimbriata has been known to infect a wide variety of annual and perennial host plants throughout the world. In Indonesia, diseases caused by C. fimbriata have been considered of minor importance owing to their non-lethal and sporadic occurrence. The fungus has long been noted to cause a non-lethal disease known as mouldy rot on the trunks of rubber trees (Tayler and Stephens, 1929). The role of fungal infection as the primary causal agent of that disease has been dismissed, since mouldy rot is considered an advanced stage of a physiological disorder induced by excessive tapping and ethylene overstimulation (Putranto et al., 2015), and the disease can be eliminated by treatment with non-fungicidal biostimulants (Suwandi et al., 2018). In the last decade, disease incited by C. fimbriata has been one of the most destructive and economically important diseases of acacia plantations in Indonesia, shortly after outbreaks on industrial forest plantations throughout the world (Roux and Wingfield, 2009). Outbreaks of Ceratocystis disease have forced the replacement of thousands of hectares of A. mangium plantations in eastern Sabah, Malaysia (Brawner et al., 2015). In Indonesia, Ceratocystis infection has contributed 2% mortality by the fourth rotation of A. mangium in Sumatra (Hardie et al., 2018).
The pathogen causing lethal wilt of duku belongs to ITS haplotype 5, which represents C. fimbriata populations from forest plantations of Acacia spp. and Eucalyptus spp. Pathogenicity tests also confirmed that A. mangium is more susceptible than the original host (the duku tree), suggesting the establishment of C. fimbriata pathogenicity on acacia as the main host. Similar disease symptoms caused by Ceratocystis infections were found to be endemic in acacia and eucalyptus plantations located about 30 km from the study site. It is likely that the population of C. fimbriata pathogenic on acacia plantations could extend its host range to native fruit trees such as Lansium and pose a serious threat to neighbouring fruit tree species. Host-range extension by the ITS5 haplotype of C. fimbriata to susceptible neighbouring plants has occurred in Brazil, where the genotype from eucalyptus showed strong aggressiveness on taro (Harrington et al., 2011) and caused an epidemic on grapevine (Ferreira et al., 2017). Similar host extension by the ITS5 haplotype also occurred in China, where the eucalyptus population caused epidemics on pomegranate, loquat, and taro (Li et al., 2016), and on tea tree (Xu et al., 2019).
All sampled diseased trees had previously been attacked by squirrels, and lesions appeared to originate from beetle entry/exit holes in the peeled-off bark left by squirrel scratches, suggesting that wild vertebrates created the wounds and that beetles dispersed the fungal spores. Fungus-feeding insects, such as H. mangiferae, have been suggested to be associated with the rapid spread of C. fimbriata in Oman and Pakistan (Al Adawi et al., 2013). Squirrel attacks on either diseased or healthy duku trees were found only during the disease outbreaks of 2013-2014, and these attacks were likely due to the limited squirrel feed sources in the field; all affected orchards had grown duku in monoculture. Pathogenicity tests supported the idea that partial flooding was unlikely to predispose duku trees to Ceratocystis infection, as the disease did not develop well under partial flooding. Recent field observations in areas near the disease origin suggested that the disease now spreads sporadically with limited mortality. Squirrel attacks were not found on recently infected trees, suggesting that vertebrate-incited wounds were involved in the earlier massive disease spread in duku orchards. Vertebrate-incited wounds, such as those from squirrels and monkeys, are considered to contribute to the spread of Ceratocystis wilt in A. mangium plantations (Brawner et al., 2015; Hardie et al., 2018; Nasution et al., 2019).
Conflicts of Interest
No potential conflict of interest relevant to this article was reported.