Assessing the Impacts of Urbanization-Associated Land Use / Cover Change on Land Surface Temperature and Surface Moisture : A Case Study in the Midwestern United States
Urbanization-associated land use and land cover (LULC) changes lead to modifications of surface microclimatic and hydrological conditions, including the formation of urban heat islands and changes in surface runoff patterns. The goal of this paper is to investigate the changes of biophysical variables due to urbanization-induced LULC changes in Indianapolis, USA, from 2001 to 2006. The biophysical parameters analyzed included Land Surface Temperature (LST), fractional vegetation cover, Normalized Difference Water Index (NDWI), impervious fraction, evaporative fraction, and soil moisture. Land cover classification, land cover changes, and impervious fractions were obtained from the National Land Cover Database of 2001 and 2006. The Temperature-Vegetation Index (TVX) space was created to analyze how these satellite-derived biophysical parameters change during urbanization. The results showed that the general trend of pixel migration in response to the LULC changes was from areas of low temperature, dense vegetation cover, and high surface moisture to areas of high temperature, sparse vegetation cover, and low surface moisture in the TVX space. Analyses of the LST-soil moisture and LST-NDWI spaces revealed similar change patterns. The rate of change in LST, vegetation cover, and moisture varied with LULC type and percent imperviousness. Compared to conversion from cultivated to residential land, the change from forest to commercial land altered LST and moisture more intensively: the area changed from forest to commercial land altered fractional vegetation cover by 48% more, LST by 71% more, and soil moisture by 15% more. Soil moisture and NDWI were both tested as measures of surface moisture in the urban areas. NDWI proved to be a useful measure of vegetation liquid water and was more sensitive to the land cover changes than soil moisture. For a change from forest to commercial land, the mean soil moisture changed by 17%, while the mean NDWI changed by 90%.
Introduction
Cities in America have experienced urbanization in a variety of types, shapes, and sizes since the 19th century. With the introduction of transportation techniques, public and private transportation revolutions and a newly developed cultural value of living reshaped the spatial distribution of cities [1]. The physical and socioeconomic distinctions between urban and suburban areas became blurry. The common scenario of urbanization was that commercial land spread along major highways from the center of the cities to the suburbs, and residential land replaced the farmland at the periphery [2]. During the urbanization process, natural land cover, such as vegetation, exposed soil, and standing water, was replaced with anthropogenic materials such as concrete, metal, and asphalt. Along with the change in land use and land cover (LULC) came modifications in surface energy and water balance, which resulted in the urban heat island (UHI) phenomenon and the unique characteristics of urban runoff.
The relationship between the UHI effect and LULC changes has been examined to understand the impact of LULC changes on surface thermal properties [3]. In Weng et al. [4], the relationship between Land Surface Temperature (LST) and vegetation abundance was investigated across various scales. The results showed a stronger relationship between LST and vegetation fraction than between LST and NDVI at different spatial resolutions and for different land use types. The greatest negative correlation between surface temperature and the vegetation abundance indicator occurred at the resolution level of 120 m, which was roughly the length of a city block. A city block is a basic unit in an urban area, and the characteristics of the landscape and the level of energy exchange are relatively similar within a block. In the examination of the spatial pattern of surface temperature along transects, higher spectral variability occurred when the proportions of different land cover types were distributed more evenly, whereas lower spectral variability occurred when fewer land cover types were found in a transect or one land cover type occupied the majority of the surface. Similarly, Yuan and Bauer [5] investigated the relationship between LST and percent impervious surface area, and found a strong linear relationship in all seasons. Therefore, percent impervious surface provides a complementary metric to the Normalized Difference Vegetation Index (NDVI) for analyzing LST over all seasons. Weng and Lu [6] demonstrated that the sub-pixel technique is an effective approach to classify urban LULC and characterize the impact of LULC changes on LST. The sub-pixel technique is also a solution for mixed-pixel problems in 30 m resolution images. Weng and Lu [7] further tested the sub-pixel technique and vegetation-impervious surface-soil models to characterize urban landscapes, and showed that the combination of the two is an alternative approach to quantify the spatial and temporal changes of urban landscape composition. Carlson [8] formed the Temperature-Vegetation Index (TVX) space by plotting LST against vegetation fraction and showed the variation of surface moisture availability in the TVX space. The advantages of the TVX method are that surface moisture and evapotranspiration can be generated easily, and no ancillary atmospheric or surface data, or land surface model, is needed. In addition, since the pixel distribution itself is used to set the conditions, the method is not sensitive to atmospheric correction and surface parameters. The disadvantage is that the determination of the warm edge is subjective. In Amiri et al. [9], the TVX space was constructed to demonstrate the trajectory of pixels due to LULC changes. The results showed that the TVX method could be applied to monitor changes in biophysical parameters due to LULC changes. Carlson and Arthur [2] illustrated how LST, surface moisture availability, impervious surface fraction, and urban-induced surface runoff responded to urbanization in a TVX space. Jiang and Islam [10] used the TVX space and an extension of the Priestley-Taylor equation to estimate surface evaporation over heterogeneous areas. The results showed that this approach is more reliable and easily applicable for evaporation estimation where ground-based data are not available. Owen et al. [11] and Carlson and Arthur [2] calculated soil moisture with the Soil-Vegetation-Atmosphere Transfer (SVAT) model, relating moisture availability to the TVX space to assess the impact of urbanization on microclimate and hydrology. Sun and Kafatos [12] further suggested that the TVX relationship may not hold true in cold seasons, based on a study in the Southern Great Plains.
Despite the progress mentioned above, the understanding of surface moisture change caused by urban LULC changes is still limited. Urban runoff was found to be highly dependent on regional climate, season, and antecedent moisture condition [13]. Thus, the applicability of using runoff to assess the surface moisture condition may be limited in arid and semi-arid areas, and in seasons with little precipitation. The high thermal conductivity and high thermal inertia of wet soil make soil moisture a significant component in the variation of LST [14]. The effectiveness of using soil moisture derived from surface energy balance models to evaluate the urbanization impact deserves further exploration, because various vegetated and soil areas were replaced by impervious surfaces during the urbanization process. Additionally, Gao [15] suggested that the Normalized Difference Water Index (NDWI) may be used as a measure of vegetation liquid water. This finding provided an alternative way to assess surface moisture in urban areas.
In this study, the performance of biophysical parameters such as vegetation fraction, soil moisture, NDWI, and LST in monitoring the impact of urbanization on land surface characteristics was examined by employing three representative areas in Indianapolis, United States, between 2001 and 2006. Soil moisture here refers to a near-surface soil moisture estimate, which correlates well with 0-5 cm depth soil moisture field data. It is volumetric soil moisture, i.e., the ratio of the volume of water to the total volume of soil, water, and air. Surface moisture does not equal soil moisture; it is the wetness availability over all types of surfaces. In this study, surface moisture was represented by soil moisture and NDWI. In the TVX space, where there is less vegetation, there is higher land surface temperature, and vice versa. At a fixed vegetation level, higher soil moisture corresponds to lower land surface temperature. Evaporative fraction is related to both soil moisture and surface temperature: higher moisture and higher temperature result in higher evaporation.
Parameters such as soil moisture and LST have the ability to characterize changes induced by urbanization-associated LULC changes. This type of impact assessment is significant because the existing literature has largely not focused on whether and how urbanization has a clear positive or negative effect on the environment [16]. Based on the land cover data from the National Land Cover Database (NLCD), three representative types of LULC changes in the Midwest were selected: (1) from cultivated to residential land; (2) from forest to commercial land; and (3) from open space to commercial land. The responses of LST, vegetation cover, and moisture availability to these changes were investigated using the TVX method. First, surface temperature and soil moisture parameters were derived in the TVX space; then, the changes in surface temperature and moisture and their relationships across different land cover classes were examined in the TVX space, LST-soil moisture space, and LST-NDWI space, respectively. By performing these tasks, this study attempts to address the following research question: how do LULC changes within urban areas alter surface temperatures, vegetation cover, and surface moisture conditions?
Study Area
The study area, Marion County, Indiana, is located in the Midwestern United States (Figure 1). The county seat is Indianapolis, the capital and largest city of the state of Indiana. The distribution of fractional vegetation cover over the whole study area shows lower vegetation cover in the central urban areas and higher vegetation cover outside the interstate highway loop, in agricultural fields and forests. The city is located on a flat plain, which makes it possible for the city to expand in all directions. According to the NLCD 2006, the conversion to developed land from 2001 to 2006 mainly took place in the suburban areas, specifically between the Interstate Highway 465 loop and the county boundary, with the largest changes in the southern and eastern fringes. Sparse land cover change in terms of density took place in the urban areas within the Highway 465 loop. According to the NLCD 2001-2006 land cover change data, 3.65% of the total land cover changed from 2001 to 2006 in Indianapolis, IN; of the changed area, 51% changed from cultivated land to developed land, 31% changed from a lower level of developed land to a higher level of developed land, and 3.6% changed from forest to developed land. Three small study areas representing the three typical urbanization-associated LULC changes were selected. The resultant land cover type in 2006 was "developed" in all three small areas. In terms of intensity, Areas 2 and 3 were developed with high and medium intensity, while Area 1 consisted of developed open space and low- and medium-intensity development.
Methodology
The workflow is shown in Figure 2. First, the Landsat TM data were acquired and pre-processed to correct atmospheric effects. Second, NDVI and NDWI were calculated. Third, LST was retrieved by the radiative transfer equation, and fractional vegetation cover was computed by Linear Spectral Mixture Analysis (LSMA). Fourth, the TVX space was formed by the scatter plot of LST and fractional vegetation cover, and soil moisture was calculated with air temperature. LST, fractional vegetation cover, soil moisture, NDWI, and percent imperviousness were compared between the images acquired in 2001 and 2006, and the impacts of LULC change due to urbanization were analyzed. The land cover change data contained only the pixels identified as changed between NLCD 2001 Land Cover Version 2.0 and NLCD 2006. The weather conditions during the two weeks before each image acquisition day were compared: the total precipitation for the two weeks was 84.1 mm and 83.5 mm, the average daily maximum temperature was 24.16 °C and 28.02 °C, and the average daily minimum temperature was 13.01 °C and 17.39 °C, respectively. Thus it was assumed that antecedent soil moisture conditions on the two image acquisition days were similar. This assumption was based not only on the weather conditions, but also on the characteristics of urban surfaces and evaporation. Since our study area is in an urban setting, evaporation over impervious surfaces does not follow the evaporation characteristics that have been found in soil and vegetated areas. Ramamurthy and Bou-Zeid [17] suggested that substantial contribution from impervious surfaces to urban evaporation usually occurs in the first 48 hours after precipitation. Since the amount of precipitation was below 1 mm within 48 hours prior to both image acquisition dates, the urban evaporation conditions of the two dates were assumed to be similar. Furthermore, since impervious surfaces increase urban surface runoff, most precipitation in urban areas becomes runoff instead of evaporation. Therefore, the impact of precipitation on urban evaporation cannot be as significant as that on natural surfaces. Atmospheric correction was applied prior to image processing. The at-sensor radiance was converted from the calibrated Digital Number (DN), according to the reference values from Chander et al. [18]:

L = G × DN + B (1)

where L is the at-sensor radiance, G is the band-specific rescaling gain factor, B is the band-specific rescaling bias factor, and DN is the quantized calibrated pixel value. The method for atmospheric correction of bands 1-5 and 7 was adopted from Song et al. [19]. This simplified dark object subtraction method assumed no atmospheric transmittance loss and no diffuse downward radiation at the surface. The radiative transfer equation can be written as:

Lsat = Lp + ρ Eg Tv / [π (1 − s ρ)] (2)

where Lp is the path radiance, Eg is the irradiance received at the surface, Tv is the atmospheric transmittance from the target toward the sensor, s is the fraction of the upward radiation back-scattered by the atmosphere to the surface, and ρ is the surface reflectance. The variable s was neglected according to Song et al. [19]. The equation for calculating Eg can be expressed as:

Eg = E0 cos(θz) Tz + Edown (3)

where Edown is the downwelling diffuse irradiance (W/m²/µm), E0 is the exoatmospheric solar constant (W/m²/µm), Tz is the atmospheric transmittance in the illumination direction, and θz is the solar zenith angle.
Assuming 1% surface reflectance for the dark objects, the path radiance was estimated as:

Lp = Lmin − 0.01 Eg Tv / π (4)

where Lmin is the radiance corresponding to the minimum DN value for each scene, which was selected as the darkest DN with at least 1000 pixels in the entire image.
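As an illustration of the pre-processing chain described above, the following is a minimal sketch in Python/NumPy; the gain, bias, irradiance, and transmittance values shown are placeholders, not the calibration constants used in the study.

```python
import numpy as np

def dn_to_radiance(dn, gain, bias):
    """Convert quantized DN values to at-sensor spectral radiance, Eq. (1)."""
    return gain * dn + bias

def dark_object_path_radiance(radiance, e_g, t_v, min_pixels=1000):
    """Estimate path radiance with the simplified dark-object subtraction,
    assuming 1% reflectance for the darkest level reached by >= 1000 pixels, Eq. (4)."""
    values, counts = np.unique(radiance.round(4), return_counts=True)
    l_min = values[np.argmax(np.cumsum(counts) >= min_pixels)]
    return l_min - 0.01 * e_g * t_v / np.pi

def surface_reflectance(radiance, l_p, e_g, t_v):
    """Invert the simplified radiative transfer equation (s neglected) for reflectance."""
    return np.pi * (radiance - l_p) / (e_g * t_v)

# Hypothetical example values (not the study's calibration constants)
dn = np.random.randint(1, 255, size=(100, 100)).astype(float)
rad = dn_to_radiance(dn, gain=0.76, bias=-2.29)
e_g = 1800.0   # W/m^2/um, placeholder irradiance received at the surface
l_p = dark_object_path_radiance(rad, e_g, t_v=1.0)
rho = surface_reflectance(rad, l_p, e_g, t_v=1.0)
```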
NDVI and Fractional Vegetation Cover Calculation
The Normalized Difference Vegetation Index (NDVI) was calculated using the atmospherically corrected at-surface reflectance of the near-infrared and red bands of the TM data. Linear Spectral Mixture Analysis (LSMA) was applied to derive fractional vegetation cover. First, Principal Component Analysis (PCA) was applied to all the spectral bands, and the two highest-ranked components were selected and plotted. Second, the endmembers were selected. According to Johnson et al. [20], the potential endmembers lie at the vertices of these PCA band scatter plots. The three selected endmembers were vegetation, bare soil, and impervious surfaces. Details about the selection of endmembers and estimation of the fractions are discussed in Weng et al. [21].
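A compact sketch of the NDVI calculation and a linear unmixing step is given below (Python/NumPy); the endmember spectra are hypothetical placeholders, and a least-squares solution with a sum-to-one constraint is used as a simplified stand-in for the LSMA procedure of Weng et al. [21].

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from at-surface reflectance of the NIR and red bands."""
    return (nir - red) / (nir + red + 1e-10)

def linear_unmixing(pixel_spectrum, endmembers):
    """Solve pixel = endmembers @ fractions in a least-squares sense with a
    sum-to-one constraint appended as an extra equation."""
    n_bands, n_end = endmembers.shape
    A = np.vstack([endmembers, np.ones((1, n_end))])   # append sum-to-one row
    b = np.append(pixel_spectrum, 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(fractions, 0.0, 1.0)                # crude non-negativity clip

# Hypothetical endmember reflectances for six TM reflective bands
# (columns: vegetation, bare soil, impervious surface)
E = np.array([[0.03, 0.12, 0.10],
              [0.05, 0.15, 0.12],
              [0.04, 0.20, 0.14],
              [0.45, 0.28, 0.18],
              [0.22, 0.32, 0.20],
              [0.10, 0.28, 0.18]])
pixel = np.array([0.08, 0.10, 0.09, 0.35, 0.25, 0.15])
veg_fraction, soil_fraction, imp_fraction = linear_unmixing(pixel, E)
```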
LST Computation
After converting the DN of the thermal infrared band with Equation (1), a scene-specific atmospheric correction for the thermal band was applied using the following equation from Coll et al. [22]:

LT = [Lsat − Lu − τ (1 − ε) Ld] / (τ ε) (5)

where Lu and Ld are the upwelling and downwelling radiance, respectively, Lsat is the radiance received by the satellite sensor, τ is the atmospheric transmittance, and ε is the surface emissivity. The transmittance, upwelling radiance, and downwelling radiance were calculated using the NASA atmospheric correction calculator [23], with a series of atmospheric profiles interpolated to the particular dates as inputs for the radiative transfer model. The atmospherically corrected radiance was then converted to LST with Equation (6):

T = K2 / ln(K1 / LT + 1) (6)

where T is LST in Kelvin, LT is the atmospherically corrected radiance, and K1 and K2 are the TM thermal band calibration constants, valued 607.76 W/(m² sr μm) and 1260.56 K, respectively. The LST was then converted to Celsius. The computation of emissivity values followed the procedures of [24,25].
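A minimal Python sketch of the thermal correction and the brightness-temperature inversion follows; the transmittance, up/downwelling radiances, and input radiance are placeholder values, whereas K1 and K2 are the published Landsat 5 TM band 6 calibration constants.

```python
import numpy as np

K1 = 607.76    # W/(m^2 sr um), Landsat 5 TM thermal band calibration constant
K2 = 1260.56   # Kelvin

def corrected_radiance(l_sat, tau, eps, l_up, l_down):
    """Scene-specific atmospheric correction of at-sensor thermal radiance, Eq. (5)."""
    return (l_sat - l_up - tau * (1.0 - eps) * l_down) / (tau * eps)

def lst_celsius(l_t):
    """Invert the Planck-type calibration equation for LST, Eq. (6)."""
    t_kelvin = K2 / np.log(K1 / l_t + 1.0)
    return t_kelvin - 273.15

# Placeholder atmospheric parameters (in practice obtained from the NASA
# atmospheric correction calculator) and a uniform emissivity map
l_sat = np.full((100, 100), 9.5)     # at-sensor radiance, W/(m^2 sr um)
eps = np.full((100, 100), 0.97)      # surface emissivity
l_t = corrected_radiance(l_sat, tau=0.85, eps=eps, l_up=1.2, l_down=2.0)
lst = lst_celsius(l_t)
```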
Soil Moisture Computation
Soil moisture was retrieved based on surface energy balance modeling [10,26]. First, the extension of the Priestley-Taylor parameter [10] was estimated in the TVX space; second, based on the results, the evaporative fraction was calculated; finally, the volumetric soil moisture was computed based on its relationship with evaporative fraction [26].
The detailed description of the TVX space can be found in [2]. Here, φ is the extension of the Priestley-Taylor parameter α. In Figure 3, point E represents the pixel with the highest temperature and the lowest φ value, and point D is the lowest temperature and the highest φ value. Point A has the highest φ value because it is on the wet edge. In the similar triangles ABC and ADE, the condition AC/AE = BC/DE holds. Therefore, φ for pixel i can be written as:

φi = φmin + (φmax − φmin) (T*max − T*i) / (T*max − T*min) (7)

where T*max is the maximum scaled LST, T*min the minimum, φmax is the maximum φ value, φmin the minimum, and T*i and φi are the scaled LST and φ values of pixel i. LST was transformed to scaled LST before the TVX space was formed. The equation for transforming LST to scaled LST is:

T* = (T − Tmin) / (Tmax − Tmin) (8)

The evaporative fraction can then be estimated by:

EF = φ Δ / (Δ + γ) (9)

where φ is the extension of the Priestley-Taylor parameter α, and Δ/(Δ + γ) is the air temperature control parameter [10]. In this study, the maximum value of φ (under the wet surface condition) was assumed to be 1.26, and the minimum was 0 (under the dry surface condition). Δ/(Δ + γ) was estimated using a linear function of air temperature [27].
The method to calculate soil moisture from evaporative fraction was adapted from Lee and Pielke [28]:

EF = 0.25 [1 − cos(π θ / θfc)]², for θ ≤ θfc (10)

where θ is volumetric soil moisture, EF is evaporative fraction, and θfc is volumetric soil moisture at field capacity; the relationship was inverted to obtain θ from the estimated EF. Silt loam and clay loam are commonly found in our study area. Based on the soil-water characteristics of different soil types in Lee and Pielke [28], θfc was assumed to be 0.3.
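Putting the TVX steps together, the sketch below (Python/NumPy) interpolates φ, computes the evaporative fraction, and inverts the assumed Lee-Pielke relationship for soil moisture. The φ bounds (0 and 1.26) and θfc = 0.3 come from the text, while the linear Δ/(Δ + γ) coefficients and the LST field are placeholders, not the study's values.

```python
import numpy as np

PHI_MAX, PHI_MIN = 1.26, 0.0     # wet- and dry-surface bounds on phi (from the text)
THETA_FC = 0.3                   # assumed volumetric soil moisture at field capacity

def scaled(x):
    """Scale a field to [0, 1], Eq. (8)."""
    return (x - x.min()) / (x.max() - x.min())

def phi_from_tvx(lst_scaled):
    """Interpolate the extended Priestley-Taylor parameter in the TVX space, Eq. (7)."""
    t_max, t_min = lst_scaled.max(), lst_scaled.min()
    return PHI_MIN + (PHI_MAX - PHI_MIN) * (t_max - lst_scaled) / (t_max - t_min)

def evaporative_fraction(phi, air_temp_c, a=0.45, b=0.01):
    """EF = phi * Delta/(Delta + gamma), Eq. (9); Delta/(Delta + gamma) is approximated
    as a linear function of air temperature with placeholder coefficients a and b."""
    return phi * (a + b * air_temp_c)

def soil_moisture(ef):
    """Invert the assumed Lee-Pielke relation EF = 0.25*(1 - cos(pi*theta/theta_fc))**2."""
    ef = np.clip(ef, 0.0, 1.0)
    return THETA_FC / np.pi * np.arccos(1.0 - 2.0 * np.sqrt(ef))

lst = np.random.uniform(20, 45, size=(100, 100))   # hypothetical LST field (deg C)
phi = phi_from_tvx(scaled(lst))
ef = evaporative_fraction(phi, air_temp_c=24.2)
theta = soil_moisture(ef)
```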
NDWI Calculation
The NDWI was originally developed to delineate open water features in remotely sensed imagery by using the near-infrared and visible green bands to enhance water and suppress the signal of soil and terrestrial vegetation [29]. Instead of focusing on open water features, Gao [15] suggested that NDWI can also be an indicator of vegetation liquid water and a complement to NDVI. Gao's NDWI is defined in Equation (11), where ρ denotes reflectance:

NDWI = [ρ(0.86 μm) − ρ(1.24 μm)] / [ρ(0.86 μm) + ρ(1.24 μm)] (11)

The laboratory results of Gao [15] showed that NDWI increased as the number of leaf layers increased, suggesting that NDWI is sensitive to the total amount of liquid water in stacked leaves. In the NDWI range from 0 to 0.15, NDVI was saturated while NDWI remained sensitive to liquid water in green vegetation. The spatial distributions of NDVI and NDWI were similar.
In this study, NDWI was calculated from the Landsat TM data following Gao [15]. Since the Landsat TM sensor does not have a band at 1.24 μm, that band was replaced by Landsat TM band 5 (1.55-1.75 µm). Thus NDWI was calculated using Equation (12):

NDWI = (ρ4 − ρ5) / (ρ4 + ρ5) (12)

where ρ4 and ρ5 represent the reflectance of band 4 and band 5.
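The band-5 substitution amounts to a simple normalized difference; a short sketch is shown below, where the band arrays are assumed to already contain at-surface reflectance.

```python
import numpy as np

def ndwi_tm(band4, band5):
    """TM adaptation of Gao's NDWI, Eq. (12): band 4 (NIR) and band 5 (SWIR) reflectance."""
    return (band4 - band5) / (band4 + band5 + 1e-10)

band4 = np.random.uniform(0.2, 0.5, (100, 100))   # hypothetical NIR reflectance
band5 = np.random.uniform(0.1, 0.3, (100, 100))   # hypothetical SWIR reflectance
ndwi = ndwi_tm(band4, band5)
```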
Impact of Urbanization-Associated LULC Changes in Three Selected Areas
The biophysical parameters in 2001 and 2006, and the percentage of change from 2001 to 2006, using Area 1 as an example, are shown in Figure 4. As shown in Figure 4a,b, the land cover of Area 1 changed from cultivated crops, deciduous forest, and open water to medium-density and low-density developed land between 2001 and 2006. Figure 4c shows the new housing development on the southeast side of the city in 2001. Some houses and units had been built, and impervious surface was sparse and scattered. In Figure 4d, the impervious surface pixels were connected, which indicates that impervious surfaces had spread out and covered the majority of the area by 2006. The percentage of impervious surface was largely under 80%, with only a few pixels having values higher than 80%. According to a high-resolution aerial photo, those pixels were located either at bends in the streets or at intersections of streets and driveways. The spatial pattern of imperviousness showed the characteristics of a typical suburban residential neighborhood. Compared to an urban residential area, it usually has less densely spaced single-family detached homes with trees and lawns, and curved roads. Generally, the distribution of LST was consistent with the changes in land cover and the impervious surface fractions. In 2001 (Figure 4i), the cultivated land in the middle of Area 1 yielded relatively high temperatures because of the lower vegetation cover (Figure 4f). Forests and highly vegetated cultivated land in the northeast and the lake in the northwest yielded the lowest LST. Developed land in the southwest yielded the highest LST. In 2006 (Figure 4j), the LSTs of forest patches and agricultural land remained low, while the LSTs of the developed area became high.
Soil moisture decreased from 2001 to 2006. In Area 1, it ranged from 18.2% to 22.0% with a mean of 20.1% in 2001, and from 17.6% to 21.5% with a mean of 19.6% in 2006. The areas with higher LST yielded lower soil moisture. Table 1 shows that the standard deviation increased from 0.048 to 0.058 for scaled LST, and from 0.006 to 0.008 for soil moisture. The increase of the standard deviations indicates that urbanization contributed to the variations of LST, suggesting a possible increase in LST heterogeneity.
The distribution of NDWI shared a similar pattern with fractional vegetation cover. Densely vegetated areas yielded high NDWI values, while sparsely vegetated areas had low values. However, the mean NDWI increased from 0.081 to 0.089 from 2001 to 2006, which contradicted the results of vegetation cover and soil moisture. This was because the vegetation cover was not completely removed in the urbanization process: grass and small trees were planted around the houses and along the roads. As a measure of vegetation liquid water, NDWI largely depends on vegetation abundance and type, which determine the capability of vegetation to hold water, especially in leaves. Therefore, the NDWI value of cultivated land was not necessarily higher than that of residential land. The relatively low NDWI value in 2001 was consistent with the low vegetation fraction value of the same year.
Moreover, as Gao [15] pointed out, NDWI cannot remove the background soil reflectance completely. Wet soil with green vegetation yields higher NDWI than dry soil with green vegetation over almost the entire range of vegetation fractions. An aerial photo taken on 26 July 2006 captured a few ponds in Area 1. Since these ponds can increase the soil moisture around them, higher NDWI was detected. Open water was removed from the study by Gao [15]; however, in this study, the existence of open water might be one possible reason for the relatively high NDWI values. Open water such as ponds in residential areas was included because of its potential use for tracking land use and land cover change and the associated impact on surface moisture conditions.
Similar observations were made for Area 2 (forest to commercial land) and Area 3 (open space to commercial land). Table 1 summarizes the mean and standard deviation of each parameter, and the difference between the mean values of 2001 and 2006 for each parameter. Area 2 and Area 3 were part of the commercial land at the intersection of Highway 465 and West 86th Street. A shopping mall, restaurants, a hospital, a preschool, and large parking lots represented the major land use in this area. Area 2 and Area 3 showed sharper changes in all the biophysical parameters compared to Area 1. The mean value of vegetation fraction decreased more than 50%; the mean value of scaled LST increased more than 50%; the mean value of percent impervious surface increased around 700%; the mean value of soil moisture decreased more than 15%; and the mean value of NDWI decreased more than 90%. In 2006, the mean value of NDWI in Area 3 was negative. As a measure of vegetation liquid water and background soil reflectance, the negative NDWI value indicated that almost all of the land surfaces had been converted to impervious surfaces, and that nearly no vegetation, wet soil, or pond remained.
Land Cover Types and Their Surface Characteristics
Each land cover type has a unique signature, a combination of biophysical parameters. Comparing the biophysical parameters of each land cover type shows that the alteration of land cover from green space to impervious surface resulted in a significant increase of LST and a moderate decrease of soil moisture (Figure 5). Soil moisture was not significantly related to urban land cover change, but small changes in soil moisture largely affect evapotranspiration and thus the surface energy budget [11]. This was according to the results of the three selected areas; a test covering the whole study area is also desirable. The TVX space for the whole study area needs to be examined in detail to better understand the impact of urbanization on the selected areas (Figure 6). The most noticeable change in the TVX space was that the shape of the scatterplot appears "shorter" and "wider" in 2006 than in 2001. This change suggests that the densely vegetated area decreased, and the temperature difference between wet edges and dry edges became larger. The pixels in the urbanization process migrated from the upper-left corner to the lower-right corner, which was consistent with our findings in the three selected areas shown in Figure 7. When the panels of Figure 7 are viewed separately, the distance and slope of the trajectories of different land cover changes can be compared for each biophysical parameter; when the three panels are viewed together, the impacts of LULC changes on the three biophysical parameters can be compared. Figure 7a-c shows the pixel trajectories in the TVX space, temperature-soil moisture space, and temperature-NDWI space for the three selected areas, respectively. The starting and ending points of the vectors were located at the average values of the selected parameters. The length of a trajectory shows the degree of alteration, and the slope of the vector shows the change rate. Generally speaking, the pixels moved from densely vegetated, high moisture, and low temperature conditions to sparsely vegetated, low moisture, and high temperature conditions. Forest to commercial land had the largest degree of alteration, and cultivated to residential land yielded the smallest alteration. The change rates in the LST-soil moisture space were almost the same, which indicates that the change rate of soil moisture between different land cover types was constant. On the contrary, in the LST-NDWI space, the change rates of NDWI differed between the three types of land cover change. NDWI decreased faster from forest to commercial land than from open area to commercial land, while it increased slightly from cultivated to residential land.
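The trajectory summaries in Figure 7 reduce to simple vector arithmetic on the 2001 and 2006 area means; a hypothetical illustration in Python follows, with made-up mean values rather than the study's numbers.

```python
import numpy as np

def trajectory(mean_2001, mean_2006):
    """Length (degree of alteration) and slope (change rate) of the vector joining
    the 2001 and 2006 mean values of two parameters, e.g. (scaled LST, NDWI)."""
    dx, dy = np.subtract(mean_2006, mean_2001)
    length = np.hypot(dx, dy)
    slope = dy / dx if dx != 0 else np.inf
    return length, slope

# Hypothetical (scaled LST, NDWI) means for a forest-to-commercial conversion
length, slope = trajectory(mean_2001=(0.35, 0.20), mean_2006=(0.62, 0.02))
print(f"alteration = {length:.3f}, change rate = {slope:.3f}")
```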
Discussion
The major contribution of this study is the examination of the impacts of LULC changes on thermal variations as well as on the patterns of soil moisture by utilizing two Landsat images. In addition, the changes of surface temperature in the TVX space, the LST-soil moisture space, and the LST-NDWI space were explored to assess the impacts of LULC changes. The results showed that the rate and path of temperature and moisture changes differed considerably among land cover types. Soil moisture and NDWI were both tested as measures of surface moisture in the urban areas. NDWI proved to be a useful measure of vegetation liquid water. Compared to soil moisture, NDWI was more sensitive to the land cover changes. The impact of vegetation on LST was due mainly to its transpiration. To some degree, vegetation liquid water reflects the moisture condition of the environmental setting, and the liquid content in leaves may directly affect the amount of transpiration and, consequently, LST variations. NDWI has an advantage in monitoring the moisture in both urban and suburban areas, because it can reflect both the phenological cycles of vegetation and the background soil reflectance. Its relationship with LST needs to be tested in the future. In addition, other methods to measure the surface moisture condition of impervious surfaces are desirable.
Furthermore, soil moisture was found not to change significantly with the LULC changes. This is consistent with [11], which suggested that surface moisture availability is not significantly related to changes in urban land cover. Although the change of surface moisture was limited, its impact on evapotranspiration was substantial. Furthermore, evaporative fraction has been related to urban runoff [2]. Thus, evaporative fraction can be another important parameter for assessing urban microclimate and hydrology. Its relationship with soil moisture and urban runoff can be utilized to further investigate the characteristics of impervious surfaces in urban areas.
The evaporation and soil moisture were successfully captured using their relationship with LST in the TVX space. Based on the experiment with the three selected areas, the soil moisture estimates showed reasonable results along with the other parameters. Although the method still needs to be validated with in situ data, it provides a useful and simple way to derive evaporation and soil moisture directly from satellite images and apply them in urban environment studies. At this time, we do not have ground-measured soil moisture data. By comparing our estimation with a previous study on actual soil moisture [26], we found that our results fell within a reasonable range. The methodology developed in this study can also be applied to thermal images from other sensors. As long as land surface temperature data are available, soil moisture can be estimated in the TVX space. With the Landsat Data Continuity Mission, the thermal data from Landsat 8 are useful for the continuity of surface temperature and moisture assessment. Landsat 8 collects image data in two thermal infrared bands, which allows atmospheric correction of the thermal data using a split-window algorithm [30,31].
It was assumed that air temperature was the same over the whole study area. The records from ground stations at two airports were quite similar: on 16 June 2001, the air temperature difference was 1 K; on 1 July 2006, the air temperatures were the same. Therefore, the temperature records from one of the airports (Indianapolis International Airport) were used, because the airport and the three selected areas were close and had similar land cover types nearby. The temperature record from Eagle Creek Airpark was not used because it is located close to a large water body, Eagle Creek Reservoir; therefore, the record may be affected by evaporation from the lake. A sensitivity analysis was conducted: with a 5 °C difference in air temperature, the difference in evaporative fraction is 0.04 and the difference in soil moisture is 0.01. Since the air temperature difference over the whole study area is less than 1 °C, its impact on soil moisture estimation can be ignored.
Conclusions
In this study, we assessed the impact of urbanization-associated LULC changes on surface temperature and moisture availability in three selected areas within Marion County, Indiana, USA, between 2001 and 2006. The significant achievement is that we applied methods developed for forest and agricultural areas to urban surface moisture estimation. The selected areas exemplified land cover changes from cultivated to residential land, from forest to commercial land, and from open area to commercial land. Fractional vegetation cover, LST, soil moisture, NDWI, and the percentage of impervious surface were chosen as the parameters to monitor the urbanization-associated changes. The results showed that, compared to the area with LULC change from cultivated to residential land, the LULC change from forest to commercial land altered the surface temperature and moisture more intensively. For example, compared to the area changed from cultivated to residential land, the area changed from forest to commercial land altered fractional vegetation cover by 48% more, LST by 71% more, and soil moisture by 15% more, relative to the initial state. According to the two images, the general change pattern of pixels in response to the LULC changes was from low temperature, dense vegetation cover, and high surface moisture to high temperature, sparse vegetation cover, and low surface moisture in the TVX space, LST-soil moisture space, and LST-NDWI space. These change patterns were found in the whole study area as well as in the selected study areas. The study is unique because soil moisture and NDWI were both tested as measures of surface moisture in the urban areas. NDWI proved to be a useful measure of vegetation liquid water. Compared to soil moisture, NDWI was more sensitive to the land cover changes. For example, for a change from forest to commercial land, the mean soil moisture changed by 17%, while the mean NDWI changed by 90%, relative to the initial state. Future work can be directed toward adding urban morphology into the estimation of urban surface moisture conditions.
Figure 1.
Figure 1. Three sample areas for assessing the impact of urbanization in the City of Indianapolis, USA. The vegetation fraction was produced from the Landsat TM image acquired on 16 June 2001. (a) The whole study area; (b) Area 1 is characterized by conversions from cultivated to residential land; (c) Area 2 and Area 3 represent changes from forest to commercial land and from open area to commercial land, respectively.
Figure 2.
Figure 2. The flowchart of the study. The blue polygons with white text are the resultant biophysical parameters that were compared between 2001 and 2006.
Figure 3.
Figure 3. Method to interpolate φ for each pixel. φi is the value for an arbitrary pixel in the TVX space. In the similar triangles ABC and ADE, AC/AE = BC/DE. φi can be calculated from T*max, T*min, φmax, and φmin.
Figure 4.
Figure 4. Land cover, the percentage of impervious surface, scaled vegetation fraction, scaled Land Surface Temperature (LST), soil moisture, and Normalized Difference Water Index (NDWI) [15] in 2001 and 2006, and the percentage of change in each parameter from 2001 to 2006. Land cover change and percent imperviousness were acquired from the National Land Cover Database (NLCD), while the other parameters were generated in this study.
Figure 5.
Figure 5. The characteristics measured by scaled vegetation fraction (Scaled Fr), scaled LST, soil moisture, NDWI, and percentage of imperviousness for different land types in the three sample areas on 17 June 2001 and 1 July 2006.
Figure 6.
Figure 6. Scatterplot of scaled LST (x axis) versus scaled fractional vegetation cover (y axis) for the Landsat TM images acquired on 17 June 2001 and 1 July 2006. Compared to the shape of the scatterplot in 2001 (upper), the 2006 plot (lower) became "shorter" and "wider", which indicates the general trend of the surface condition toward lower vegetation cover, lower moisture availability, and higher temperature. The scaled LST was transformed from LST by Equation (8) using the maximum, minimum, and average LST. The scaled fractional vegetation cover was transformed by the same method.
Figure 7.
Figure 7. (a) Pixel trajectories in the TVX space, (b) temperature-soil moisture space, and (c) temperature-NDWI space from 17 June 2001 to 1 July 2006. Cultivated to residential was represented by Area 1, forest to commercial was represented by Area 2, and open area to commercial was represented by Area 3.
Table 1.
The mean and standard deviation of scaled vegetation fraction (Scaled Fr), scaled LST, soil moisture, NDWI, and percentage of impervious surface in 2001 and 2006.
Area 1: Cultivated to Residential (sample size 3264 pixels); Area 2: Forest to Commercial (sample size 192 pixels); Area 3: Open Area to Commercial (sample size 225 pixels).
Note: The difference column was calculated by subtracting the mean value of 2001 from that of 2006 for each parameter and each land cover conversion type.
"Environmental Science",
"Mathematics",
"Agricultural And Food Sciences"
] |
Integrative analysis of DNA methylation in discordant twins unveils distinct architectures of systemic sclerosis subsets
Background Systemic sclerosis (SSc) is a rare autoimmune fibrosing disease with an incompletely understood genetic and non-genetic etiology. Defining its etiology is important to allow the development of effective predictive, preventative, and therapeutic strategies. We conducted this epigenomic study to investigate the contributions of DNA methylation to the etiology of SSc while minimizing confounding due to genetic heterogeneity. Methods Genomic methylation in whole blood from 27 twin pairs discordant for SSc was assayed over 450 K CpG sites. In silico integration with reported differentially methylated cytosines, differentially expressed genes, and regulatory annotation was conducted to validate and interpret the results. Results A total of 153 unique cytosines in limited cutaneous SSc (lcSSc) and 266 distinct sites in diffuse cutaneous SSc (dcSSc) showed suggestive differential methylation levels in affected twins. Integration with available data revealed 76 CpGs that were also differentially methylated in blood cells from lupus patients, suggesting their role as potential epigenetic blood biomarkers of autoimmunity. It also revealed 27 genes with concomitant differential expression in blood from SSc patients, including IFI44L and RSAD2. Regulatory annotation revealed that dcSSc-associated CpGs (but not lcSSc) are enriched at Encyclopedia of DNA Elements-, Roadmap-, and BLUEPRINT-derived regulatory regions, supporting their potential role in disease presentation. Notably, the predominant enrichment of regulatory regions in monocytes and macrophages is consistent with the role of these cells in fibrosis, suggesting that the observed cellular dysregulation might be, at least partly, due to altered epigenetic mechanisms of these cells in dcSSc. Conclusions These data implicate epigenetic changes in the pathogenesis of SSc and suggest functional mechanisms in SSc etiology. Electronic supplementary material The online version of this article (10.1186/s13148-019-0652-y) contains supplementary material, which is available to authorized users.
Background
Systemic sclerosis (SSc or scleroderma) is a rare multisystem, connective tissue disease characterized by cutaneous and visceral fibrosis, immune dysregulation, and vasculopathy. Patients are commonly classified into two main clinical subsets on the basis of the extent of skin thickening: limited or restricted cutaneous SSc (lcSSc) and diffuse or widespread cutaneous SSc (dcSSc). The etiology of SSc remains elusive. The low concordance rate in monozygotic twins and relatively modest genetic burden suggest a substantial role for epigenetic or environmental factors in SSc susceptibility [1,2]. Environmental factors (e.g., nutrition, behavior, stress) can influence methylation and other epigenetic marks that result in phenotypic change and disease [3]. Thus, epigenetic variation may play an important role in SSc risk.
DNA methylation is a chemical modification of cytosine bases generally associated with transcriptional repression when at regulatory elements such as promoters and enhancers [4,5]. Nevertheless, the precise relationships between DNA methylation and gene expression are complex and poorly understood [5][6][7][8]. The correlation between DNA methylation and gene expression can be positive or negative and is tissue-specific and context-specific, in that the local DNA sequence and genomic features largely account for local patterns of methylation [4,[9][10][11]. In addition to its potential to affect an individual's susceptibility to SSc, changes in the methylation of DNA may occur secondarily to SSc and may consequently influence disease progression. There is compelling evidence that DNA methylation plays a role in the pathogenesis of autoimmune diseases, and multiple epigenome-wide association studies revealed the existence of differentially methylated regions associated with, for example, systemic lupus erythematosus (SLE) [12][13][14][15][16][17], rheumatoid arthritis [18][19][20][21][22][23][24][25][26], or psoriasis [27][28][29][30][31][32]. In SSc, differentially methylated genes were reported in an X chromosome analysis of peripheral blood mononuclear cells [33] and in one genome-wide DNA methylation analysis in dermal fibroblasts [34]. Disease-discordant monozygotic twins offer the ideal study design to investigate the association of DNA methylation with a disease, as it minimizes confounding due to genetic heterogeneity, sex-, age-and early-life environmental effects [35,36].
To our knowledge, no genome-wide investigation of DNA methylation in whole blood from discordant twins has been reported in SSc. We first conducted epigenomic profiling to investigate the association between DNA methylation variation and SSc. Next, we conducted tissue-specific regulatory annotation and integration with available data from DNA methylation and gene expression profiling studies, with the goal of gaining insights into the potential molecular mechanisms underlying SSc development and/or progression.
Subjects
A total of 27 twin pairs discordant for SSc were used for this study (Table 1). All subjects have been previously described in detail [2]. As reported [2], patients were classified based on published criteria [37]. Both twins had to be living to participate in the study. Only samples of self-reported European ancestry were used for this study. The majority of twin pairs were female (n = 26, 96%), and approximately two thirds (n = 19, 70%) were monozygotic. The mean age of diagnosis was 43 years, and average disease duration from disease onset (first symptom attributable to SSc) was 8.8 years. We had 17 twin pairs with complete organ involvement data. Among these, the most frequent organ system involvement consisted of Raynaud's phenomenon in 17 twin pairs (100%), joint or tendon in 15 (88%), gastrointestinal in 11 (65%), digital ulcers in 7 (41%), lung in 4 (24%), and renal involvement in 2 (12%). Among the 27 patients, the most common SSc-associated serum autoantibodies were anticentromere (n = 7, 26%), anti-RNA polymerases (n = 7, 26%), anti-topoisomerase I (n = 4, 15%), anti-U1 RNP (n = 3, 11%), and anti-U3 RNP (n = 3, 11%). For the disease subset analyses, 15 pairs with lcSSc and 9 pairs with dcSSc were used. Genomic DNA was extracted from whole blood from all 27 pairs of twins as previously described [2].
Zygosity testing
Twin zygosity was initially assayed using DNA fingerprint analysis as we described [2]. In addition, zygosity was confirmed by the analysis of 11 short tandem repeat (STR) autosomal markers using the GenomeLab Human STR Primer Set kit on a CEQ8000 Genetic Analysis System (Beckman Coulter, Fullerton, CA) or 15 autosomal STR markers using the AmpFLSTR Identifiler PCR Amplification Kit on a 3500 Genetic Analyzer (Applied Biosystems, Foster City, CA). The manufacturer's protocols were followed for both systems with one exception: separations on the 3500 Genetic Analyzer were performed with POP7 on a 50-cm array.
DNA methylation assay and data analysis
Genomic DNA (1 μg) from each individual was treated with sodium bisulfite using the EZ 96-DNA methylation kit (Zymo Research, USA), following the manufacturer's standard protocol. Genome-wide DNA methylation was assessed in the Genomics Research Core at the University of Pittsburgh using the Illumina Infinium HumanMethylation450 BeadChip (Illumina, USA), which interrogates over 485,500 CpG sites that cover 99% of RefSeq genes (including the promoter, 5′ UTR, first exon, gene body, and 3′UTR), as well as 96% of CpG islands and island shores. Arrays were processed using the manufacturer's standard protocol. Location of individuals on arrays was randomized to minimize potential confounding (e.g., batch effects). Sample files and expression IDAT files were imported into GenomeStudio Software v.1.9 (Illumina, USA) for primary evaluation of the data. This included initial quality control checks and calculating the relative methylation level of each interrogated cytosine, which is reported as a β-value given by the ratio of the normalized signal from the methylated probe to the sum of the normalized signals of the methylated and unmethylated probes. Throughout, differential methylation is reported as the within-pair difference in β-values: a negative value indicates hypomethylation (i.e., decreased methylation) in the affected SSc twin relative to the unaffected twin, while a positive value indicates hypermethylation (i.e., increased methylation) in the affected twin relative to the unaffected twin. The data were observed for quality, and a cluster analysis was conducted, using the SNP content, to ensure twins were pairing correctly. Using GenomeStudio, it was noted that the data contained no large batch effects.
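For concreteness, the β-value and the within-pair difference used throughout can be expressed as below; this is a generic Python sketch of the standard Illumina formula (with its usual stabilizing offset of 100), not the GenomeStudio implementation itself, and the probe intensities shown are hypothetical.

```python
import numpy as np

def beta_value(meth, unmeth, offset=100.0):
    """Relative methylation level of a CpG: methylated signal over total signal.
    The offset stabilizes the ratio when both intensities are low."""
    meth = np.maximum(meth, 0.0)
    unmeth = np.maximum(unmeth, 0.0)
    return meth / (meth + unmeth + offset)

def paired_delta_beta(beta_affected, beta_unaffected):
    """Within-pair difference: negative values mean lower methylation in the
    affected twin, positive values mean higher methylation."""
    return beta_affected - beta_unaffected

# Hypothetical intensities for three probes in one twin pair
beta_a = beta_value(np.array([4000.0, 300.0, 2500.0]), np.array([1000.0, 5000.0, 2500.0]))
beta_u = beta_value(np.array([4500.0, 280.0, 1200.0]), np.array([600.0, 5200.0, 3600.0]))
delta = paired_delta_beta(beta_a, beta_u)
```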
After initially inspecting the data with GenomeStudio, the data were loaded with the R package ChAMP [38]. When loading the data, probes were dropped if they had a bead count less than 3, if the probed CpG was also an SNP, or if they did not meet a detection p value of 1 × 10 −5 (the detection p value reflects the confidence that a probe's signal is detected above the background defined by negative control probes). A total of 447,254 CpGs were used for analysis. The data were then normalized with the same ChAMP package using a BMIQ normalization method. MDS plots based on the 1000 most variable methylation sites were created as a result of the normalization process. These were examined for clustering, and it appeared that samples from individuals of differing ethnicities were clustering together, so some samples were removed to make a more homogeneous group that clustered closely together. Singular value decomposition (SVD) was then applied to the matrix to obtain the most significant components of variation. These components were observed in a heat map showing the association between the principal components and the biological factors. To adjust for these batch effects, the ChAMP package employs "ComBat", which uses empirical Bayes methods to correct for technical variation. With the data normalized and batch effects adjusted for, the β-values were output to a table for analysis.
A paired t test was computed for each CpG site to test the null hypothesis that the mean difference of β-values for each set of twins is zero (μ = 0). Completing a matched analysis with the paired t test allows us to remove the confounding effects of chronological age, genetic background, ethnicity and admixture, sex, and similarity of the epigenome at birth. All data for monozygotic twins (n = 19) were analyzed first followed by a replication of that analysis with the data for dizygotic twins (n = 8). A meta-analysis of the two separate analyses was then performed using METAL [39] to get a single p value for each CpG site. False discovery rate (FDR) p values were then calculated for each site, and top results were evaluated. Since no differentially methylated cytosine was identified with FDR-corrected p < 0.05, unadjusted p values are reported. Only cytosines showing suggestive differential methylation (p < 10 −04 ) in the meta-analysis between the affected and unaffected twin pairs are reported.
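A schematic of the per-CpG testing scheme (paired t test within zygosity groups, meta-analysis across groups, then FDR adjustment) could look like the following; this is a hedged Python/SciPy sketch that uses a sample-size-weighted Stouffer combination as a stand-in for METAL, not the study's exact pipeline, and all input data are simulated.

```python
import numpy as np
from scipy import stats

def paired_test(beta_affected, beta_unaffected):
    """Paired t test per CpG: rows are CpGs, columns are twin pairs. Tests the null
    hypothesis that the mean within-pair beta-value difference is zero."""
    t, p = stats.ttest_rel(beta_affected, beta_unaffected, axis=1)
    return t, p

def stouffer_meta(p_values, effects, weights):
    """Combine per-group p values into one p value per CpG, signing each z score by
    its effect direction and weighting groups (a stand-in for METAL)."""
    z = stats.norm.isf(np.asarray(p_values) / 2) * np.sign(effects)
    z_meta = np.tensordot(weights, z, axes=1) / np.sqrt(np.sum(np.square(weights)))
    return 2 * stats.norm.sf(np.abs(z_meta))

def bh_fdr(p):
    """Benjamini-Hochberg false discovery rate adjustment."""
    p = np.asarray(p)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out

# Simulated data: 1000 CpGs, 19 MZ pairs (discovery) and 8 DZ pairs (replication)
rng = np.random.default_rng(0)
mz_a, mz_u = rng.beta(2, 5, (1000, 19)), rng.beta(2, 5, (1000, 19))
dz_a, dz_u = rng.beta(2, 5, (1000, 8)), rng.beta(2, 5, (1000, 8))
t_mz, p_mz = paired_test(mz_a, mz_u)
t_dz, p_dz = paired_test(dz_a, dz_u)
p_meta = stouffer_meta([p_mz, p_dz], effects=[t_mz, t_dz], weights=np.sqrt([19, 8]))
fdr = bh_fdr(p_meta)
```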
Monozygotic twins exhibit increased DNA methylation differences with age [40]. In order to address the effect of age on DNA methylation variation in this study, we cross-referenced our results (all cytosines with suggestive differential methylation (p < 10 −04 )) against the 490 and the 353 differentially methylated CpG sites associated with age reported by Bell et al. [41] and Horvath [42], respectively.
Despite the limited statistical power, an exploratory analysis was computed comparing all twins positive for each of the following clinical features: (1) lung involvement, (2) anticentromere autoantibodies (ACA), and (3) anti-RNA polymerases autoantibodies (anti-RNP), to the twins negative for these criteria. No CpGs met the threshold for suggestive differential methylation (p < 10 −04 ) for any of these clinical features.
Pathway analysis
Ingenuity Pathway Analysis (IPA) software (https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis/) was used (release date 16 March 2016) to investigate the pathways and functions enriched with the molecules corresponding to the top differentially methylated genes. IPA uses an extensive database of functional interactions that are drawn from peer-reviewed publications and manually maintained. Core Analyses were performed using default settings to identify the top canonical pathways, diseases and biological functions, physiological systems, networks, and upstream regulators. For each comparison of the DNA methylation analyses, a total of 200 molecules (i.e., gene products) corresponding to the top differentially methylated cytosines were used as input for IPA Core Analyses. Specifically, the top 200 genes corresponding to the top differentially methylated cytosines were used as input into IPA's Core Analysis; the products of these genes were used by IPA as molecules to predict, for example, downstream biological processes or diseases affected by the data, or upstream molecules which may be causing the observed changes in the data.
Regulatory annotation
eFORGE v1.2 (http://eforge.cs.ucl.ac.uk/) [43] was used to identify whether the associated CpGs were enriched in cell-specific regulatory elements, namely DNase I hypersensitive sites (DHSs) (markers of active regulatory regions) and loci with overlapping histone modifications (H3K4me1, H3K4me3, H3K9me3, H3K27me3, and H3K36me3) across available cell lines and tissues from the Roadmap Epigenomics Project, BLUEPRINT Epigenome, and ENCODE (Encyclopedia of DNA Elements) consortia data. In addition to predicting disease-relevant cell types, eFORGE can also assess cell-composition effects of heterogeneous tissues by detecting tissue-specific DHS and histone modification enrichment based on genomic location.
The differentially methylated cytosines (p < 10 −04 ) in each disease subset (153 in lcSSc, 266 in dcSSc, and 155 in all twins) were entered as input for the eFORGE analysis (Additional file 2). Each set of CpGs was tested for enrichment of overlap with putative functional elements compared to matched background CpGs. The matched background is a set of the same number of CpGs as the test set, matched for gene relationship and CpG island relationship annotation. One thousand matched background sets were applied. The enrichment analysis was completed for different tissues, since functional elements may differ across tissues. Enrichment outside the 99.9th percentile (−log10 binomial p-value ≥ 3.38) was considered statistically significant (red in Additional file 2: Figure S2).
Differentially methylated sites in whole blood from twins discordant for SSc
We performed genome-wide DNA methylation analysis in whole blood from 27 twin pairs discordant for SSc (Table 1). Monozygotic twins (n = 19) were analyzed first, followed by a replication with the data for dizygotic twins (n = 8). This manuscript reports the results of the meta-analysis of these discovery and replication sets. A total of 155 cytosines showed suggestive differential methylation (p < 10 −04 ) between the affected and unaffected twin pairs, most of which mapped to gene bodies (113, 73%) of 111 unique genes (Additional file 2: Table S1). We note that while a negative β-value difference (− 1 < β < 0) indicates hypomethylation (i.e., decreased methylation) in the affected SSc twins relative to the unaffected, a positive difference (0 < β < 1) indicates hypermethylation (i.e., increased methylation) in the SSc relative to the unaffected twins. Results in monozygotic and dizygotic twin pairs were largely consistent (Additional file 2: Table S1). The levels of differential methylation between affected and unaffected twins were overall modest, with the largest difference observed in the IFI44L gene (β-value = − 0.12) (Additional file 1: Figure S1). Pathway analysis revealed a significant enrichment of molecules (i.e., gene products) involved in cancer, gastrointestinal disease, and organismal injury and abnormalities (Additional file 2: Table S2).
We also performed DNA methylation analyses in each disease subset. In the meta-analysis of 15 twin pairs discordant for lcSSc, 153 cytosines showed suggestive differential methylation (p < 10 −04 ) between the affected and unaffected twin pairs, most of which mapped to gene bodies (117, 77%) of 115 distinct genes (Additional file 2: Table S3). The differences of methylation levels were modest (β < 0.10). Pathway analysis showed a significant enrichment of cancer, endocrine system disorders, gastrointestinal disease, and organismal injury and abnormalities (Additional file 2: Table S4). A total of 266 cytosines showed suggestive (meta-analysis p < 10 −04 ) differential methylation levels in whole blood from the 9 pairs of twins discordant for dcSSc. The majority of these cytosines mapped to gene bodies (201, 76%) of 196 distinct genes (Additional file 2: Table S5). The largest differences in methylation levels were observed in the hypomethylated IFI44L (β = − 0.17) and DHODH (β = − 0.17) genes. The top molecules were enriched for cancer, gastrointestinal disease, and organismal injury and abnormalities (Additional file 2: Table S6). While there was virtually no overlap of molecules between subsets, with only 1% of common molecules (3/397), there was similar overall enrichment for genes in "cancer" and "gastrointestinal disease." Despite the limited statistical power, an exploratory case-case analysis was computed on the following clinical features: (1) lung involvement, (2) anticentromere autoantibodies (ACA), and (3) anti-RNA polymerases autoantibodies (anti-RNP). No CpGs met the threshold for suggestive differential methylation (p < 10 −04 ) for any of these clinical features.
Overlap of DNA methylation patterns with reported genetic association and DNA methylation studies We assessed the overlap between the regions our study unveiled (meta-analysis p < 10 −04 ) and over 40 regions with compelling evidence for genetic association with SSc [1]. The few regions of overlap include the HLA and IRF5 (Additional file 2: Table S7). Since aging can influence DNA methylation variation, we also assessed the overlap between our results and the CpG sites whose methylation levels are strongly correlated with chronological age [42,44]. Only one age-associated CpG (cg22432269) in the first exon of the CYFIP1 gene showed concomitant evidence of hypomethylation in lcSSc (p = 3.06 × 10 −06 ).
One genome-wide DNA methylation study has been reported in cultured dermal fibroblasts from SSc patients and controls [34]. As expected, given the different tissues profiled, the genes identified in each study are largely different. Of the 30 genes reported by Altorok et al. [34] as common between dcSSc and lcSSc fibroblasts, only CACNA1C was also found among our top results (Additional file 2: Table S2). Additional file 2: Table S8 shows the six CpG sites common to both studies.
Since SSc and SLE are often considered related diseases, we also compared our DNA methylation findings in blood from SSc patients to cytosines reported as differentially methylated in blood (and blood cells) from SLE patients. These included 86 cytosines in naïve CD4 + T cells [45], 1082 CpGs in T cells, 264 CpGs in B cells, 168 CpGs in monocytes [15], 293 sites in neutrophils [46], 26,298 CpGs in PBMCs [47], and 44 cytosines differentially methylated in white blood cells from SLE patients [48]. A total of 76 CpGs differentially methylated in blood cells from both SSc and SLE patients are shown in Additional file 2: Table S9. Several cytosines were reported in multiple studies, notably those differentially methylated in the dcSSc subset, as well as all twins. CpG sites in the IFI44L and RSAD2 genes were consistently hypomethylated in several blood cell types [15,[45][46][47][48] or hypermethylated in the case of FNBP1 [15,47]. In the dcSSc subset, CpG sites in IRF5, INTS6, SULT1A1, and RPTOR were also hypomethylated in multiple blood subsets [15,46,47]. These differentially methylated sites shared in blood cells across related autoimmune diseases suggest their role as potential susceptibility or epigenetic blood biomarkers of autoimmunity.
Comparison of differential methylation to differential gene expression patterns
To explore the downstream effects of the differentially methylated CpG sites (meta-analysis p < 10^−4), the genes corresponding to these CpGs were compared to available data from published global gene expression profiling studies conducted in blood and its cellular subsets from SSc patients and healthy controls. A total of 1907 unique differentially expressed genes were compiled from 8 studies with publicly available results [49][50][51][52][53][54][55][56]. As shown in Table 2, 27 genes with differentially methylated cytosines (in Additional file 2: Tables S1, S3, and S5) have also been reported as differentially expressed in SSc patients. Consistent with the known complex relationships between DNA methylation and gene expression [4][5][6][7][8][9][10][11], for some genes the relationship between DNA methylation and gene expression was inverse or negative (i.e., increased methylation with decreased gene expression), while for others it was direct or positive (i.e., increased methylation with increased gene expression).
Eight noteworthy candidates include IFI44L, where cg03607951 in the transcription start site was hypomethylated in all twins and showed the largest difference in methylation levels in dcSSc. This gene is overexpressed in blood in SSc patients [50,51,53] and hypomethylated in multiple SLE blood subsets [15,45,46,48]. The TLE3 gene showed two CpGs in the gene body (cg01666796, cg12349571) with consistent hypermethylation in all affected twins and is also overexpressed in PBMCs in SSc-PAH patients [49]. A CpG (cg22432269) in the first exon of the CYFIP1 gene showed the most significant hypomethylation in lcSSc concomitant with underexpression in PBMCs from lcSSc patients [51]. A CpG site (cg06580770) in the body of TNXB is hypermethylated in blood from SSc, and the gene is concomitantly underexpressed in SSc patients [54]. Hypomethylated cg15346781 lies in the transcription start site of the RSAD2 gene, which is overexpressed in SSc [50,53] and hypomethylated in T and B cells from SLE patients [15]. cg24312520 in the gene body of STAT3 was hypermethylated in dcSSc, and the gene is overexpressed in PBMCs from lcSSc and SSc-PAH patients [49,51]; a first exon cytosine (cg25330422) was also reported as hypermethylated in blood cell subsets from SLE patients [15]. The transcription start site of TNFRSF1A was hypermethylated in dcSSc (cg26254667), and the gene overexpressed in SSc and lcSSc [51,54]; other gene body CpG sites (cg08418872, cg23752651) have also been reported as hypermethylated in SLE patients [15]. Lastly, cg17925829 in the transcription start site of the TYROBP gene was hypomethylated, and the gene overexpressed, in SSc [53,54].

(Table 2 legend) Up- and down- refer to hypo- or hypermethylation, or over- or underexpression, respectively. Italics denote methylation of different CpG sites in the gene. TSS1500 (TSS200), within 1500 bp (200 bp) of the transcription start site; 5′UTR (3′UTR), 5′ (3′) untranslated region. WBC, white blood cells; Skin fibrob, skin fibroblasts; Refs, references. *Age-associated CpG [42].
Regulatory annotation
To provide a broader biological interpretation of the DNA methylation results and better understand the functional role underlying the disease-associated CpG sites, we assessed whether these SSc-associated CpGs reside within regulatory regions across the genome in diverse tissues and cell types assayed in the ENCODE, Roadmap Epigenomics, and BLUEPRINT Epigenome Project datasets. The CpGs associated with SSc in all twins showed only a modest enrichment of H3K27me3, a mark of inactive genes, in primary B cells (Additional file 1: Figure S2 and Additional file 2: Table S10). The CpGs associated with lcSSc did not show enrichment of either DNase I hypersensitive sites (DHSs) or any histone mark in any tissue.
In contrast, the dcSSc-associated CpGs showed robust enrichment in DHSs across multiple tissues and cell types (Fig. 1; Additional file 2: Table S11). In the ENCODE data, the strongest enrichment was in multiple blood cell lines, predominantly myeloid cells, but also in the epithelium, heart, muscle, blood vessel, and connective tissue (Fig. 1, top panel; Additional file 2: Table S11).
In the Roadmap data, the greatest enrichment of DHSs was in the blood, fetal tissues, and psoas muscle (Fig. 1, middle panel; Additional file 2: Table S11). In the hematopoietic primary cells of the BLUEPRINT project, inflammatory macrophages showed strong enrichment of DHSs (Fig. 1, bottom panel; Additional file 2: Table S11). These data provide evidence that the CpGs identified in blood are situated in known active regulatory regions not only in blood but also in other tissues and cell types. Overlap with H3 histone methylation data from the Roadmap Project revealed that the dcSSc-associated CpGs are strongly enriched for H3K4me1 marks, which are indicative of poised enhancers, across numerous tissues and cell types, most strongly in the blood, fetal tissues, psoas muscle, and skin (Fig. 2, Additional file 2: Table S11). An enrichment of H3K27me3, a mark associated with inactive gene promoters, was also detected in primary hematopoietic stem cells (Fig. 2, Additional file 2: Table S11). Collectively, this regulatory annotation shows that, unlike the lcSSc-associated CpGs, many of the dcSSc-associated CpGs reside within DHSs and multiple histone marks. This enrichment in regulatory regions supports their potential role in causal downstream effects on disease presentation.
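As a rough illustration of the enrichment logic (the figure legends cite a binomial p value with a 99.9th-percentile cutoff of −log10 p ≥ 3.38), the sketch below asks whether more of the disease-associated CpGs fall inside a set of regulatory intervals (e.g., DHSs of one cell type) than expected from the background overlap rate of all tested CpGs. The counts and background rate are invented for illustration; the annotation tool and background model actually used by the study are not reproduced here.

import numpy as np
from scipy import stats

# Hypothetical numbers: background overlap rate of all tested CpGs with DHSs of one cell type,
# the number of disease-associated CpGs, and how many of those land inside DHSs.
background_in_dhs = 0.12
n_assoc = 266            # e.g., the number of dcSSc-associated CpGs reported above
assoc_in_dhs = 58        # hypothetical overlap count

# One-sided binomial test: is the observed overlap larger than expected under the background rate?
res = stats.binomtest(assoc_in_dhs, n_assoc, background_in_dhs, alternative="greater")
neg_log10_p = -np.log10(res.pvalue)
print(f"-log10 p = {neg_log10_p:.2f}; exceeds 99.9th-percentile cutoff: {neg_log10_p >= 3.38}")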
Discussion
This study used a genome-wide integrative approach to identify differential DNA methylation in whole blood from twin pairs discordant for SSc. In addition to being the largest epigenomic study conducted in SSc to date, the unique study design minimizes confounding due to genetic heterogeneity and age- and early-life environmental effects by using disease-discordant twins [35,36]. As expected given the sample size, we did not detect genome-wide significant differences in mean DNA methylation associated with SSc, which is largely consistent with other complex disease epigenomic twin studies [12,57,58]. The results revealed distinct DNA methylation patterns in SSc and its clinical disease subsets. The negligible overlap of molecules shared between the lcSSc and dcSSc subsets supports distinct epigenetic architectures in each disease subset. Despite clearly distinct blood methylation profiles, an enrichment of genes in "cancer" and "gastrointestinal disease" was observed in both dcSSc and lcSSc, although driven by different molecules. These results are consistent with the previously reported minimal overlap in differentially methylated cytosines between lcSSc and dcSSc subsets in skin fibroblasts [34]. In addition, our analyses revealed negligible overlap between the methylation patterns in whole blood and those previously reported in skin fibroblasts [34]. Thus, although SSc is commonly considered a single disease, these results confirm others suggesting that SSc is a family of diseases with distinctly different subtypes.
The precise relationships between DNA methylation and gene expression are complex and poorly understood [4][5][6][7][8][9][10][11]. While DNA methylation at regulatory elements shows a negative correlation with transcription, the opposite has been observed at intragenic regions [5], illustrating that complex, tissue- and genomic-architecture-dependent regulatory mechanisms underlie the correlation between DNA methylation and gene expression. It is also possible that the low correlation between DNA methylation and gene expression levels may reflect the high fluctuation of RNA levels, which can change from one hour to the next [59]. In order to provide insights into the potential functional consequences of the methylation patterns observed, we compared our results to those of global gene expression profiling assays conducted in blood and its cellular subsets from SSc patients and healthy controls. This study unveiled several novel epigenetically dysregulated genes with reported changes in gene expression in blood from SSc patients. Most of these genes are involved in immune processes.
IFI44L, an interferon gene involved in defense response to viruses, is overexpressed in SSc blood tissues [50,51,53]. The CpG site unveiled in our study shows consistent hypomethylation in multiple blood cell subsets from SLE [15,[45][46][47][48]60] and Sjögren's syndrome patients [61,62]. Since SSc and SLE are often considered as sister diseases, reported DNA methylation similarities are not unexpected [63]. The consistent hypomethylation of IFI44L in blood from patients with several autoimmune diseases, together with its overexpression, corroborates the validity of our finding and suggests that differential methylation of IFI44L may serve as shared biomarker across these diseases.
Both RSAD2 and TYROBP showed hypomethylation and overexpression in SSc blood [50,53,54]. Both play roles in immune response, including the type I IFN signaling pathway (RSAD2) and innate immunity (TYROBP). RSAD2 is consistently hypomethylated in blood cells [15,47]. Demethylation of the TYROBP gene is associated with a subset of T cells that accumulates and is associated with aging [64]. An age-associated CpG [42] in CYFIP1, a regulator of translation and cytoskeletal dynamics, showed hypomethylation with underexpression in SSc blood [51]. It is interesting to note the variation in methylation levels at sites associated with aging, as premature activation of aging-associated molecular mechanisms is emerging as an important contributor to the autoimmune, vascular, and fibrotic pathogenesis of SSc [65]. Our findings, in conjunction with these reports, further lend support for the role of the innate immune response in the pathogenesis and/or progression of diseases such as SSc and a parallel between SSc and premature aging.

Fig. 1 Enrichment of dcSSc differentially methylated CpGs in DNase I hypersensitive sites among various cell and tissue types using ENCODE, Roadmap Epigenomics, and BLUEPRINT Epigenome projects data. Statistically significant enrichment outside the 99.9th percentile (−log10 binomial p value ≥ 3.38) is colored red on the vertical axis. The upper panel shows a marked myeloid cell enrichment in the ENCODE data, with strong epithelium, heart, muscle, blood vessel, and connective tissue signals. The middle panel shows a more general pattern of enrichment in the Roadmap Epigenomics data, strongest in blood, fetal tissues, and psoas muscle. The lower panel shows enrichment for inflammatory macrophages in the BLUEPRINT Epigenome data.
Differential methylation of several genes has been reported as associated with cancer [66][67][68][69]. These include TNFRSF1A, which plays a role in cell survival, apoptosis, and inflammation and was both hypermethylated and overexpressed in SSc blood [51,54]. TLE3 was also hypermethylated and overexpressed in SSc blood [49]. This gene product functions in the Notch signaling pathway to regulate the determination of cell fate during development. STAT3, a transcription activator with roles in many cellular processes such as cell growth, apoptosis, and response to cytokines and growth factors, showed hypermethylation and overexpression in SSc blood [49,51]. TNXB, which was hypomethylated in skin fibroblasts from dcSSc [34], was hypermethylated in our study and concomitantly underexpressed in blood from SSc patients [54]. This gene localizes to the MHC class III region and encodes a member of the tenascin family of extracellular matrix glycoproteins. It is involved in actin cytoskeleton organization, cell adhesion, and collagen fibril organization.
To aid in result interpretation, regulatory annotation of the top differentially methylated cytosines was conducted to predict disease-relevant cell types. Differential DNA methylation in regulatory regions such as DHSs and histone marks has been associated with functional consequences [4,70]. We observed an enrichment of regulatory regions in the dcSSc subset that pointed to blood myeloid cells as the most highly enriched cell types, indicating a tendency for cell-composition-corrected dcSSc-associated DNA methylation changes to co-locate with myeloid cell DHSs and H3K4me1 marks (representative of enhancers). This contrasts with the enrichment in T cell-specific DHSs reported for cytosines differentially methylated in CD4+ T cell studies of SLE and Sjögren's syndrome [43]. This enrichment of methylated cytosines in myeloid regulatory regions might underlie a dysregulation of these cells in dcSSc. Indeed, both monocytes and macrophages (the cell types with the strongest enrichment) play a critical role in fibrosis [71]. The number of circulating monocytes is increased in SSc [72] and correlates with disease progression and severity [73,74]. The changes in methylation detected in dcSSc thus impact the function of regulatory elements in cell types with critical functions in fibrosis. Since these inflammatory cells are dysregulated in SSc, and DNA methylation changes can affect regulatory mechanisms, our findings suggest that DNA methylation might be a potential avenue to reverse their altered phenotype.
This study has a number of limitations. Despite the value of the twin-pair study design for epigenomic studies, our unique samples of middle-aged, European ancestry, largely female twin pairs are not representative of the general population. Thus, our results might not be generalizable to all patients. Further replication studies are warranted for the validation, justification, and generalization of our results. Another limitation is the lack of available RNA from the same samples to assess the functional effects of the variation in DNA methylation. In an attempt to circumvent this limitation, we performed in silico integration with reported differentially expressed genes for functional validation of our results. Documenting that differentially methylated sites in our twin data also correspond to differences in gene expression in independent SSc samples forms corroborating evidence across genomic processes and cohorts. A further limitation is the lack of tissue specificity. We explored this issue by performing regulatory annotation of our results, but future work is needed to dissect the tissue specificity of epigenetic modifications in SSc. We cannot exclude the possibility that the differences between the disease subsets and the enrichment of myeloid-related cells in dcSSc are driven by confounding cell-composition effects instead of true cell type-specific effects. However, whole blood lymphocytes are proportionally more abundant than monocytes, suggesting that the strong bias towards monocytes and macrophages is a cell type-specific effect. Multiple differentially methylated cytosines in our study were also found to be differentially methylated in a single blood cell type in SLE, suggesting that the associations we detected are not likely to be due to confounding by blood cell heterogeneity. These include, among others, loci in the IFI44L, RSAD2, IRF5, and RPTOR genes [46]. In spite of these limitations, these findings identify novel genomic regions in SSc in a unique cohort of discordant twins and highlight candidate genes for further research.

Fig. 2 Enrichment of dcSSc differentially methylated CpGs in regions overlapping histone modifications in the Roadmap Epigenomics Project data. Statistically significant enrichment outside the 99.9th percentile (−log10 binomial p value ≥ 3.38) is colored red on the vertical axis. The panel shows marked enrichment for a histone modification representative of enhancers (H3K4me1) in blood cells (monocytes, hematopoietic stem cells, natural killer cells), fetal tissues (lung fibroblasts, large intestine, small intestine, adrenal gland, muscle leg, thymus), psoas muscle, and skin fibroblasts. Enrichment for a histone modification representative of polycomb-repressed regions (H3K27me3) was seen in hematopoietic stem cells.
Conclusions
We identified multiple DNA methylation loci associated with SSc, including sites with concomitant evidence of altered methylation in blood cells of lupus patients and genes with concomitant evidence of differential expression in blood cells from SSc patients. Although this cross-sectional study cannot separate causality from response to disease, it identifies epigenetically modified genes and pathways that are important in SSc.
Our study hence provides support for using blood cells as a useful accessible tissue for epigenetic biomarker discovery. Our results show that DNA methylation sites in dcSSc patients are enriched for regulatory regions in cell types with key roles in fibrosis, implicating DNA methylation as a modulator of cell functionality. Coupled with the observation that dcSSc and lcSSc are epigenetically distinct disease subtypes, this suggests that the cellular dysfunction observed in dcSSc is, at least partially, due to an epigenetic dysregulation of myeloid cell types. Further, this suggests the possibility of using epigenetic regulation of cell functionality to prevent dysfunction or restore their balance in SSc. Regardless of causality, blood-based biomarkers have the potential to improve risk prediction and help guide treatment decisions. Our findings provide a foundation for further research to determine if the differentially methylated functional loci represent attractive targets for the treatment or prevention of autoimmune-and/or fibrotic-related diseases.
Additional files
Additional file 1: Figure S1. Visualization of absolute weighted β values in whole blood from twin pairs discordant for SSc in the UCSC genome browser showing differential methylation of the IFI44L gene.
Additional file 2: Table S1. Most significant differentially methylated cytosines in whole blood from twins discordant for SSc. Table S2. Most significant canonical pathways, upstream regulators, and diseases and biological functions in differentially methylated genes in whole blood from all twin pairs discordant for SSc. Table S3. Most significant differentially methylated cytosines in whole blood from twins discordant for lcSSc. Table S4. Most significant canonical pathways, upstream regulators, and diseases and biological functions in differentially methylated genes in whole blood from twin pairs discordant for lcSSc. Table S5. Most significant differentially methylated cytosines in whole blood from twins discordant for dcSSc. Table S6. Most significant canonical pathways, upstream regulators, and diseases and biological functions in differentially methylated genes in whole blood from twin pairs discordant for dcSSc. Table S7. Reported SSc-associated gene regions with differentially methylated CpGs in SSc subsets. Table S8. Differentially methylated CpG sites common to this study and to the report by Altorok et al. (2015). Table S9. Cytosines differentially methylated in this study that are also reported as differentially methylated in blood from SLE patients. Table S10. Most significant enrichment of top SSc CpGs overlapping cell type-specific regulatory elements.

Availability of data and materials
Normalized or raw intensity data of the HM450K BeadChips used during the current study is not yet publicly available because it was generated using private funds, but all data is available upon request from the authors on a collaborative basis.
Authors' contributions
CFB designed and coordinated the study. CFB and TAM recruited the patients. TAM obtained the history and performed the physical examinations and disease subtyping of patients. CFB processed the blood samples and extracted the DNA. SH confirmed the zygosity of the twins. PSR, KDZ, and CDL analyzed the data. PSR and CFB wrote the manuscript. All authors were involved in critical review, editing, revision, and approval of the final manuscript.
Strength and Deformation Characteristics of Fiber and Cement-Modified Waste Slurry
Modifying waste slurry with fiber and cement and applying it to roads is an effective way to recycle waste slurry. A new type of road material, fiber–cement-modified waste slurry (FRCS), was prepared in this study, and its static and dynamic characteristics were studied using unconfined compressive strength tests and dynamic triaxial tests. The results show that the optimum fiber content of FRCS is 0.75%. In the unconfined compressive strength test, at this fiber content the unconfined compressive strength (UCS) of the FRCS is the largest while the elastic modulus and modulus-strength ratio are both the smallest, indicating that the tensile properties of the cement soil have been enhanced. In the dynamic triaxial test, the hysteretic curve of the FRCS tends to stabilize as the number of cycles increases; the dynamic elastic modulus of the FRCS first decreases and then increases with increasing fiber dosage, while the damping ratio stabilizes after a rapid decline; and fiber incorporation increases the cumulative strain of the soil–cement under low-stress cycles, indicating improved ductility of the FRCS. In addition, a cumulative strain prediction model of the FRCS is established in this paper, which can provide a reference for the resource application of waste slurry in road engineering.
Introduction
With the rapid development of urban construction, a large amount of construction waste such as abandoned slurry is often generated during tunnel shield construction and bored pile construction [1,2]. If not properly disposed of, it will have adverse effects on the environment and natural resources. Using waste slurry as roadbed filler has good prospects and is considered a potential substitute for sand or limestone filler in concrete or mortar [3]. However, without curing measures, the strength of waste slurry after drying is low, and its mechanical properties often do not meet construction requirements. Direct use can lead to defects such as roadbed settlement [4]. There are currently various methods for strengthening and utilizing waste slurry, and the commonly used method is chemical solidification treatment. Cement is added as a chemical stabilizer to the waste slurry to produce a cementitious hardening effect, in order to effectively improve its mechanical properties. Cement, as a commonly used inorganic material, has been widely used both domestically and internationally due to its simple manufacturing process, economic applicability, and low environmental requirements for the site [5][6][7]. Using cement as a curing agent is beneficial for the resource utilization of waste slurry [8][9][10][11]. However, waste slurry mixed with cement alone may have defects such as a low tensile strength, poor crack resistance, and significant cumulative deformation after bearing long-term cyclic loads [12,13]. Moreover, the compounds in the waste slurry lack chemical activity, and the hydration effect is poor after adding cement [14], which limits its promotion and use. Therefore, it is necessary to further reduce the defects of cement slurry to avoid damage to the road structure [15,16].
Adding polymer materials is an effective way to reduce such defects in building materials. Fibers, as common polymers, can be used as raw materials for fiber-cement soil, a rapidly developing new building material in recent years, which has also become a research hotspot. Numerous scholars have analyzed the strength and mechanical properties of fiber-reinforced cementitious soil through experiments such as unconfined compressive strength (UCS) testing [17][18][19], and the results show that fibers can effectively improve the strength, durability, and wear resistance of cementitious soil. At the same time, the unconfined compressive strength and residual strength of cementitious soil samples with fibers added have also been improved to a certain extent [20]. Polypropylene fibers are commonly used as reinforcing materials for cement-based materials [21], which can improve their mechanical and damage-resistance properties, delay the formation of cracks, and reduce crack width. Jiang et al. conducted a series of experiments to explore the effects of fiber content and fiber length on the strength of fiber-reinforced soil [22]. The research results of Sukontasukkul et al. indicate that as the volume fraction of polypropylene fibers in cementitious soil increases, its toughness also increases [23]. Ruan et al. showed through UCS and flexural strength tests that with an increase in fiber content, the UCS, residual strength, and flexural strength of fiber-reinforced cement mortar soil were significantly improved [24]. Zaimoglu et al. studied the effect of randomly distributed polypropylene fibers (PP) and some additive materials such as borogypsum (BG), fly ash (FA), and cement (C) on the unconfined compressive strength of soil. The results showed that PP can effectively improve the unconfined compressive strength by combining with additives [25]. Yang et al. explored the reinforcement and stability of loess using fibers as a reinforcement material and cement as a stabilizing material. The results showed that the addition of fibers gradually changed the fracture mode, from brittleness to ductility and then to plasticity [26]. Zhang and others explored the feasibility of the application of polypropylene-fiber-reinforced cement stabilized soil. Through scanning electron microscope (SEM) analysis, the strengthening effect was attributed to the formation of hydration products such as ettringite, the bridging effect, and the increase in particle friction [27]. Chen et al.'s experiment showed that the UCS of fiber-cement admixtures is related to the fiber content and length [28]. Fiber-reinforced cement clay reaches its peak strength at a fiber content of 0.5%; if the fiber content continues to increase, the UCS will slowly decrease. Wang et al. used cement soil mixed with different amounts of polypropylene fibers and basalt fibers to investigate the changes in dynamic stress and dynamic elastic modulus with dynamic strain through dynamic triaxial tests. The results showed that the dynamic characteristics of cement soil were related to the experimental confining pressure, fiber type, and content. With an increase in the fiber content, the dynamic strength and dynamic elastic modulus of cementitious soil increase, while the dynamic deformation decreases [29]. Wang et al.
conducted triaxial unconsolidated undrained (UU) tests on polypropylene-fiber-cement-treated roadbed soil (PCS), and the results showed that for the same fiber mass content, the peak stress, residual stress, and peak stress-strain of PCS specimens gradually increased with increasing confining pressure, while the brittleness index gradually decreased [30]. Di et al. conducted dynamic triaxial tests to analyze the effects of superabsorbent polymer (SAP) content, cyclic stress ratio, and loading frequency on SAP-cement-modified soil [31]. Wang et al. conducted triaxial tests to determine the correlation between cumulative plastic strain (CPS) and the number of loading cycles, as well as the evolution of dynamic strength and critical dynamic stress (CDS) with different freeze-thaw cycles, in order to study the dynamic stability of fiber-adhesive-reinforced subgrade fill (RSF) under cyclic loading after freeze-thaw cycles [32].
Meanwhile, the addition of fiber polymers can to some extent improve the problem of road settlement. Excessive settlement and differential settlement of soft foundations are common in roads, and differential settlement can occur at the junction of bridge abutments and roads, as well as issues such as lateral pressure and displacement of bridge abutments [31]. The road is repeatedly subjected to low-stress cyclic loading from traffic loads for a long time, during which the road will not undergo significant damage, but will experience a certain cumulative strain [33]. Factors such as dynamic stress amplitude [34], confining pressure [35], and number of cycles [36] can all affect the generation and development of accumulated strain in the soil, and the bumps caused by deformation can affect the smoothness and safety of vehicle driving [37]. At present, the research on the mechanical properties of FRCS mostly focuses on its static mechanical properties, but its application scenarios are mostly in areas that need to be subject to long-term cyclic low loads, such as roads. Under long-term cyclic loads, reinforced cement soil can limit the lateral displacement of the subgrade and reduce the settlement of the subgrade [38].
Therefore, research on the deformation characteristics of fiber-reinforced cement soil can be of great help for road engineering construction, but there is relatively little research on its cumulative plastic strain characteristics under cyclic loading. This research explores the static and dynamic performance and deformation characteristics of FRCS with different fiber contents and amplitudes through UCS tests and dynamic triaxial (DT) tests, in order to provide a theoretical basis for the application of waste slurry in road engineering.
Materials
The materials used in this experiment were waste slurry, cement, and polypropylene fibers. The waste slurry used in the experiment was taken from a construction site in Shaoxing City. The waste slurry after drying is shown in Figure 1; the porosity of the waste slurry was 73%. Its physical and mechanical indicators are shown in Table 1. The main chemical components of the waste slurry, obtained through XRF testing, are shown in Table 2, and the equipment model used was Axiosmax. The main chemical composition of the waste slurry was similar to clay; it did not have chemical activity. The cement used in the experiment was P.O 42.5 ordinary Portland cement, with a density of 3.0–3.15 g/cm³ and a specific surface area of 340–370 m²/kg. Its porosity was 50%, and its main chemical composition is shown in Table 3. The polypropylene fibers used in the experiment were produced by Shaoxing Fiber High-tech Co., Ltd. They have good dispersibility, low cost, and a white color, as shown in Figure 1. Their technical indicators are shown in Table 4.
Specimen Preparation
Each sample was cylindrical, with a diameter of 39.1 mm and a height of 80 mm. According to the specification for the mix proportion design of cement soil (JGJ/T 233-2011) [39], the moisture content was set at 50%, and based on preliminary testing, the mass ratio of waste slurry, cement, and water was set at 1:0.1:0.5. The fiber content was proportioned according to the experimental plan.
The specimen preparation process was as follows: (1) Add waste slurry, cement, and water in sequence according to the mix ratio, and add corresponding amounts of polypropylene fibers. (2) Mix thoroughly with a mixer and, after mixing evenly, place the mixture into the mold in three layers and vibrate manually. (3) After vibration compaction and standing for 24 h, remove the mold and wrap the specimen with cling film to prevent moisture evaporation. (4) Place the specimen in a constant-temperature and constant-humidity curing box, with a curing temperature of 20 °C, a humidity of ≥90%, and a curing period of 7 or 28 days. The specific specimen preparation process is shown in Figure 1.
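As a worked example of the proportions above (waste slurry : cement : water = 1 : 0.1 : 0.5 by mass, with fibers dosed as a percentage of the dry waste slurry mass), the short sketch below computes the batch masses; the 2 kg batch size is arbitrary and chosen only for illustration.

def batch_masses(dry_slurry_kg, fiber_fraction):
    """Mix proportions from the text: slurry : cement : water = 1 : 0.1 : 0.5 by mass;
    fibers dosed as a fraction of the dry waste slurry mass (e.g., 0.0075 for 0.75%)."""
    return {
        "waste slurry": dry_slurry_kg,
        "cement": 0.1 * dry_slurry_kg,
        "water": 0.5 * dry_slurry_kg,
        "fiber": fiber_fraction * dry_slurry_kg,
    }

print(batch_masses(2.0, 0.0075))
# {'waste slurry': 2.0, 'cement': 0.2, 'water': 1.0, 'fiber': 0.015}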
Test Scheme
The unconfined compressive performance of the FRCS was analyzed using different fiber contents, and the cumulative strain characteristics of the FRCS were explored by applying cyclic loads of different amplitudes. UCS tests were performed on FRCS samples with fiber contents of 0%, 0.25%, 0.5%, 0.75%, and 1% of the dry waste slurry mass. Using strain-controlled loading, according to the Chinese Code Standard for Geotechnical Testing Method (GB/T 50123-2019) [40], the loading rate for the UCS testing was set at 1 mm/min. In the DT test, a sine wave was selected as the loading waveform, and the specimens were loaded using stress as the control method. Twenty points were collected for each cycle, and the specific loading method is shown in Figure 2. By varying the amplitude among 0.1 UCS, 0.2 UCS, and 0.3 UCS, cyclic loading was performed on the specimens with the optimal fiber content to investigate the effect of the amplitude on the dynamic performance of the FRCS. The specific test plan is shown in Table 5. The UCS test uses the TKA-WCY-1F fully automatic multifunctional unconfined compressive strength tester, and the DT test uses the dynamic triaxial tester produced by GDS Instruments in the UK.
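The stress-controlled loading described above can be visualized with a short sketch: a sinusoidal deviator stress whose amplitude is a fixed fraction of the UCS, sampled 20 times per cycle. The UCS value and the static offset that keeps the load non-negative are placeholders and are not taken from the test program.

import numpy as np

ucs_kpa = 800.0            # placeholder UCS of the cured specimen (assumed value)
amplitude_ratio = 0.2      # 0.1, 0.2, or 0.3 UCS in the test scheme
points_per_cycle = 20      # sampling density used in the DT test
n_cycles = 1000

phase = np.linspace(0.0, 2.0 * np.pi * n_cycles, points_per_cycle * n_cycles, endpoint=False)
# Sinusoidal target stress; the offset keeping the load non-negative is an assumption, not stated in the text.
sigma_d = 0.5 * amplitude_ratio * ucs_kpa * (1.0 + np.sin(phase))
print(sigma_d[:5])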
Stress-Strain Curves
UCS tests were conducted on the FRCS with different PP contents, and stress-strain curves were obtained as shown in Figures 3 and 4. The stress-strain curves of the FRCS show almost the same trend under different PP contents and can be divided into three stages. (1) In the linear elastic stage, the stress increases approximately linearly with strain. (2) As the axial strain continues to increase, it enters the plastic stage, and the stress-strain relationship shows a nonlinear relationship with small fluctuations in the curve. (3) Entering the stress attenuation stage, the stress continuously decreases, and the specimen produces cracks or even failure.
UCS
Figure 5 shows the comparison of the UCS of the FRCS samples with different PP fiber contents. It can be seen from the figure that the UCS of FRCS gradually increases with an increase in the fiber content and reaches its maximum at a 0.75% fiber content, which is 6.1% higher than that of the specimen without fibers; this is similar to the conclusion of Chen et al. [41]. At the same curing age, the UCS of the FRCS samples with different fiber contents increased in the early stage but decreased to varying degrees in the later stage. This is because as the fiber content increases, its dispersion uniformity in the cement decreases, and the aggregation of fibers in the FRCS fails to form an effective spatial network structure. Additionally, certain voids are formed in the FRCS, and the stress transfer ability of the fibers in the soil under load decreases, ultimately leading to a decrease in the UCS of the FRCS. From the overall trend, the UCS of the FRCS containing fibers is higher than that of the FRCS without fibers. PP fibers are helpful in improving the strength of FRCS, and their effect on improving the strength increases with an increase in fiber content. This may be because the waste slurry, cement, and fibers in the specimen produced a bonding force as they hardened, allowing the fibers to transmit residual stress and delaying the crack expansion of the FRCS under load, relatively improving the strength of the specimen [42].
From Figure 5, it can also be seen that the UCS of the 28-day curing age specimen is significantly increased compared to the 7-day curing age specimen, being more than double the value. As the curing age increases, the UCS of the specimen increases significantly, which is due to the shorter hydration time of the FRCS cement during the 7-day curing period, resulting in fewer hydration products. However, as the age increases and the hydration reaction continues, a large amount of new minerals, such as calcium salts and aluminum hydrates, in the soil produce crystallization [43,44]. The cement gradually dehydrates over time, resulting in more internal cementation, significantly improving the UCS of the FRCS.
Elastic Modulus
In the UCS test, the stress and strain of the FRCS specimens are directly proportional during the elastic deformation stage, and the proportionality coefficient is called the elastic modulus E, as shown in Figure 6. The smaller the E of a material, the greater its elastic deformation, and the easier it is to recover the strain caused by stress. Therefore, materials with a lower elastic modulus are more conducive to improving the deformation characteristics of road foundations. From Figure 6, it can be seen that the E of FRCS significantly increases from the 7 d to 28 d curing age samples, indicating that its stiffness is significantly increased through sufficient hydration reactions. At both 7 d and 28 d curing ages, the samples' elastic moduli first decrease and then increase with an increase in the fiber content, and the E of the sample with 0.75% PP fibers reaches the minimum value.
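Because E is defined here as the slope of the initial, linear elastic portion of the stress-strain curve, it can be estimated with a simple linear fit over that portion. The strain cutoff and the synthetic data below are illustrative only and do not correspond to the measured curves in Figure 6.

import numpy as np

# Synthetic stress-strain record (strain in %, stress in kPa); illustrative only.
strain = np.linspace(0.0, 3.0, 61)
stress = np.where(strain < 1.0, 600.0 * strain, 600.0 + 50.0 * (strain - 1.0))

elastic = strain < 0.8                      # assumed extent of the linear elastic stage
E, intercept = np.polyfit(strain[elastic], stress[elastic], 1)
print(f"E is approximately {E:.0f} kPa per % strain")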
Ratio of Modulus to Strength
The ratio of modulus to strength, η, is the ratio of E to the UCS at peak strength, which can describe the tensile performance of a material. The larger the η, the poorer the tensile performance of the material and the lower its ability to resist deformation, as shown in Equation (1). The calculation results are shown in Figure 7.
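Equation (1) itself is not reproduced in this extract; from the definition just given, it presumably takes the form

η = E / q_u,     (1)

where q_u denotes the UCS at peak strength, so that a smaller η corresponds to better tensile performance and a greater ability to resist deformation.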
It can be seen from Figure 7 that after adding fibers, the η of FRCS decreases, and with the increase in fiber content, the η first decreases and then increases. The PP-0.75 specimen reaches the minimum value, and η then increases for PP-1. As the curing age increases, the η of the FRCS gradually decreases. Taking PP-0.75 as an example, its η after 28 days of curing is 58% of that after 7 days, a significant decrease. Therefore, an appropriate fiber content and a longer curing age can both reduce the η of FRCS, thereby improving its tensile performance.
Dynamic Characteristics
In the DT test, the specimen generates a dynamic stress-strain hysteresis loop under the action of cyclic stress during each cycle, and the hysteresis loop curves formed in successive cycles generally do not overlap, as shown in Figure 8. The smaller the area of the hysteresis loop of the specimen under the same stress cycle, the smaller the energy loss of the FRCS during this cycle. The dynamic elastic modulus and damping ratio can also be calculated through hysteresis loops.
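The dynamic elastic modulus and damping ratio are computed from the hysteresis loops following reference [20], which is not reproduced here. The sketch below applies the conventional soil-dynamics definitions (Ed as the slope of the line joining the loop's extreme points, and the damping ratio as the loop area divided by 4π times the equivalent elastic strain energy) to a synthetic loop; it is an assumed stand-in for, not a reproduction of, the authors' procedure.

import numpy as np

# Synthetic hysteresis loop for one cycle (strain in %, stress in kPa); illustrative only.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
strain = 0.5 * np.sin(theta)
stress = 150.0 * np.sin(theta + 0.15)      # a small phase lag opens the loop

# Dynamic elastic modulus: slope of the line joining the extreme points of the loop.
Ed = (stress.max() - stress.min()) / (strain.max() - strain.min())

# Loop area (energy dissipated per cycle) via the shoelace formula.
area = 0.5 * abs(np.dot(strain, np.roll(stress, -1)) - np.dot(stress, np.roll(strain, -1)))

# Equivalent elastic strain energy: area of the triangle defined by the strain and stress amplitudes.
W = 0.5 * strain.max() * stress.max()

damping_ratio = area / (4.0 * np.pi * W)
print(f"Ed = {Ed:.1f} kPa per % strain, damping ratio = {damping_ratio:.3f}")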
Hysteretic Curves
By performing cyclic loading on the FRCS samples with different fiber contents at different amplitudes, hysteresis loops under the 50th, 250th, 500th, 750th, and 1000th loads were selected, as shown in Figures 9-15. As the number of cycles increases, the hysteresis loop moves in the direction of increasing strain, indicating that the FRCS generates cumulative strain under the action of cyclic stress. It can be observed that the distance between the hysteresis loops decreases as the number of cycles increases, indicating that the cumulative strain rate gradually decreases to a stable state. In addition, as the number of cycles increases, the area of the hysteresis loop gradually decreases. The hysteresis loop of unmodified FRCS can be approximated as a closed crescent shape, and the development of hysteresis loops with different fiber dosages and amplitudes has a certain regularity. The development of hysteresis loops with changes in fiber content is shown in Figures 9-13. As the fiber content in the soil increases, the length of the hysteresis loops decreases and moves from the direction of the stress axis to the direction of the strain axis. The hysteresis loop of the specimen PP-0.75 reaches its extreme value, while the hysteresis loop of specimen PP-1 is the opposite. The variation in hysteresis loop development with amplitude is shown in Figures 13-15. It can be seen that as the amplitude increases, the hysteresis loop becomes narrower and longer. Under the same stress, the larger the amplitude, the smaller the strain. Its shape gradually becomes slender, indicating that the energy loss of the specimen gradually decreases after the start of cycling, and finally tends to a stable state.
Dynamic Elastic Modulus
The dynamic elastic modulus Ed of the FRCS can be calculated based on its hysteresis loop [20], and the calculation results are shown in Figures 16 and 17. Figure 16 depicts the relationship between the Ed and the number of cycles for each fiber content. It can be seen that as the number of cycles increases, the Ed increases rapidly in the first 100 cycles, and then, although it still increases, it tends to flatten out. From the graph, it is found that the Ed of the FRCS sample with PP-0.75 changes relatively smoothly, indicating a slower rate of increase in Ed. This is because the specimen without fibers is a brittle material which is prone to internal damage when subjected to external forces. The added fibers participate in the stress process of the soil and transmit internal stress, which can reduce the internal damage of the specimen. Therefore, the Ed of FRCS with fibers added increases slowly. Due to the use of a low cyclic stress ratio, the applied dynamic load is relatively small. When the number of cycles is constant, it is found that with an increase in fiber content, the Ed first increases and then decreases, and reaches its maximum for the sample with a PP content of 0.75%. Fibers can have a certain restraining effect on the development of cracks, and the dynamic performance of FRCS can be improved [45].
Figure 17 shows the variation in the Ed of the FRCS with different amplitudes during cyclic loading at a fiber content of 0.75%. It can be seen that the Ed of the specimens under different amplitudes significantly increased during the first 100 cycles of cyclic loading. This indicates that during the initial cycles, the Ed of the specimens increases rapidly due to compaction, and later tends to stabilize because the specimens have already been compacted and a more stable structure has formed between the waste slurry, cement, and fibers inside the specimens. When the number of cycles is constant, the Ed of the FRCS continues to increase as the amplitude increases, while the rate of increase slows down. The Ed also shows an upward trend with the number of cycles, but the amplitude of the change is small, indicating that under low-load cyclic action, the larger the amplitude, the stronger the ability of the FRCS to resist deformation, but the proportion of plastic strain generated will also increase, which is not conducive to the recovery of the internal structure of the soil after compression.
Damping Ratio
The damping ratio λ of FRCS can be calculated based on the hysteresis loop [20], and the calculation results are shown in Figures 18 and 19. Figure 18 depicts the variation in the λ of the FRCS samples with different fiber contents over the loading cycles. It can be seen that the amount of fibers added has a certain impact on the λ of the specimen, and as the amount of fibers added increases, the impact of dynamic load on the λ of FRCS continues to increase. The λ reaches its lowest value when the amount of fibers added is 0.75%, indicating that adding an appropriate amount of polypropylene fibers can enhance the compactness of FRCS, improve the overall bearing capacity, and facilitate the internal transmission of stress. Therefore, the energy loss caused during the bearing process is reduced, and λ decreases.
In Figure 19, it can be observed that the λ of the PP-0.75 specimen increases with the increase in amplitude, leading to more relative sliding and a wider rearrangement of particles inside the FRCS [46]. Under cyclic loading, the internal pores of the specimen are compacted, resulting in a slower energy dissipation rate and a faster rate of reduction in λ.
Deformation Characteristics
Accumulated plastic strain refers to the relative slip of soil particles under cyclic loading, particle rearrangement, and dissipation of the accumulated energy of plastic strain, leading to deformation of the soil. As the number of cycles increases, the dissipation rate of the accumulated energy of viscosity gradually exceeds that of the accumulated energy of plastic strain. At this point, the soil is not damaged and the deformation tends to stabilize. In the daily use of roads, the pore pressure of soft soil in the foundation will first gather and then dissipate under the action of dynamic deviatoric stress under long-term low-stress cycling, during which the soil will generate a certain strain. The cumulative deformation of soil can be divided into three stages: a rapid increase stage, a gradually stabilizing stage, and a smooth stage [47]; see Figure 20.
Cumulative Plastic Strain
As shown in Figure 21, the cumulative plastic strain of FRCS with different fiber contents varies with the number of cycles. From the graph, it can be seen that the cumulative plastic strain of FRCS shows a growth pattern of first a rapid increase and then a slow increase with the increase in the number of load applications. With the increase in fiber content, the cumulative plastic strain first increases, then decreases, and then increases, with the turning point at a fiber content of 0.75%. Figure 22 shows that the cumulative plastic deformation of FRCS under the same fiber content increases with the increase in amplitude, and the rate of growth also increases with the increase in amplitude. For foundations with the appropriate fiber content in the FRCS, the settlement of the roadbed can be alleviated.
Cumulative Deformation Prediction
The preceding test analysis shows that the cumulative plastic strain of FRCS increases with the number of cyclic loading cycles, and that the cumulative plastic strain and the number of loading cycles satisfy a power-function relationship (i.e., their logarithms are linearly related), as shown in Equation (2).
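Equation (2) is not reproduced above; a commonly used accumulation model consistent with this description (and with the parameters a and b defined below) is $\varepsilon_d = a N^{b}$, i.e., $\log \varepsilon_d = \log a + b \log N$, and this assumed form is also used in the fitting sketch below.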
where ε_d is the cumulative plastic strain in %, N is the number of loading cycles, and a and b are fitting parameters related to the stress amplitude and fiber content. Fitting the cumulative plastic strain curves of FRCS with Equation (2) yields the parameters listed in Table 6, and the fitting curves are shown in Figures 23 and 24. The error between the fitted and experimental values is small, so this function describes well the relationship between the cumulative plastic strain of FRCS and the number of cyclic loading cycles.
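As a worked illustration of the fitting procedure, the following sketch fits the assumed power-law model to placeholder cycle-count/strain data with SciPy; the numerical values are illustrative, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(N, a, b):
    """Assumed accumulation model, Equation (2): eps_d = a * N**b."""
    return a * np.power(N, b)

# Placeholder data (number of cycles, cumulative plastic strain in %);
# replace with the measured dynamic triaxial results.
N_cycles = np.array([10, 50, 100, 250, 500, 1000, 2000, 5000], dtype=float)
eps_d = np.array([0.21, 0.35, 0.42, 0.52, 0.58, 0.65, 0.71, 0.80])

(a_fit, b_fit), _ = curve_fit(power_law, N_cycles, eps_d, p0=(0.1, 0.3))
print(f"a = {a_fit:.3f}, b = {b_fit:.3f}")

# Goodness of fit against the measurements.
residuals = eps_d - power_law(N_cycles, a_fit, b_fit)
r2 = 1.0 - np.sum(residuals**2) / np.sum((eps_d - eps_d.mean())**2)
print(f"R^2 = {r2:.4f}")
```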
Conclusions
The strength and deformation characteristics of FRCS were studied through unconfined compressive strength tests and dynamic triaxial tests, and the following conclusions were drawn: (1) The UCS and static elastic modulus of FRCS can be improved by adding fibers, and the optimal fiber content is 0.75%. With increasing fiber content, both the UCS and static elastic modulus of the specimens first decrease and then increase. Moreover, an appropriate amount of fibers can effectively reduce the modulus-strength ratio of FRCS and improve its tensile properties.
(2) The hysteresis curve of FRCS stabilizes as the number of cycles increases, and its shape and size are affected by the fiber content and the amplitude. The addition of fibers can reduce the Ed of the specimens; Ed increases rapidly with the number of cycles and then tends to flatten out. The Ed of FRCS first decreases and then increases with increasing fiber content, and it continues to increase with increasing amplitude. With the increase in the number of cycles, the damping ratio of FRCS first decreases rapidly and approximately linearly, then slows down and stabilizes within a small range. The Ed and the damping ratio of FRCS are optimal when the fiber content is 0.75%.
(3) With an increase in fiber content, the cumulative plastic strain of FRCS at a given number of cycles first increases and then decreases, reaching a maximum at a fiber content of 0.75%. At the same fiber content, the cumulative plastic deformation and its growth rate increase with increasing amplitude. The cumulative strain of FRCS increases rapidly during the initial cycles and becomes stable after more than 250 cycles. A cumulative plastic strain prediction model of FRCS is established in which the cumulative plastic strain and the number of loading cycles satisfy a power-function relationship.
(1) In the linear elastic stage, the stress increases approximately linearly with strain.
Figure 3. Stress-strain curves of FRCS with 7 d curing age.
Figure 4. Stress-strain curves of FRCS with 28 d curing age.
Figure 5. UCS of FRCS at different curing ages.
Figure 16. Changes in Ed with number of cycles under different fiber contents.
Figure 17. Changes in Ed with number of cycles under different amplitudes.
Figure 18. Change in λ with number of cycles under different fiber contents.
Figure 19. Changes in λ with number of cycles under different amplitudes.
Figure 20. Schematic diagram of the curve of the variation in axial strain with the number of cyclic loading cycles.
Figure 21. The cumulative plastic strain changes with the number of cycles under different fiber contents.
Figure 22. The variation in the cumulative plastic strain with the number of cycles under progressive loading.
Figure 23. Cumulative plastic strain fitting results of FRCS under different fiber dosages.
Figure 24. Cumulative plastic strain fitting results of FRCS under different stress amplitudes.
Table 2. Chemical composition of waste slurry.
Table 3. Chemical composition of cement.
Table 4. Main performance indicators of polypropylene fibers.
Table 6. Cumulative plastic strain prediction parameter values. | 10,484.6 | 2023-08-01T00:00:00.000 | [ "Environmental Science", "Engineering", "Materials Science" ] |
Instantons and Entanglement Entropy
We would like to put the area law -- believed to be obeyed by entanglement entropies in the ground state of a local field theory -- to scrutiny in the presence of non-perturbative effects. We study instanton corrections to entanglement entropy in various models whose instanton effects are well understood, including $U(1)$ gauge theory in 2+1 dimensions and false vacuum decay in $\phi^4$ theory, and we demonstrate that the area law is indeed obeyed in these models. We also perform numerical computations for toy wavefunctions mimicking the theta vacuum of the (1+1)-dimensional Schwinger model. Our results indicate that such superpositions exhibit no more violation of the area law than the logarithmic behavior of a single Fermi surface.
Introduction
Recent breakthroughs in the study of many-body entanglement have led to a deeper understanding of phases of matter and their classification. This viewpoint was crucial in moving beyond the Landau-Ginzburg paradigm, meeting with tremendous success in classifying symmetry-protected topological (SPT) phases and topological orders. It has also been used to help uncover entirely new phases, loosely known as quantum solids, and for the classification of quantum systems in [1], where the notion of "s-source renormalization" was introduced. S-source renormalization is a renormalization group (RG) scheme that focuses on entanglement.
The key idea is to determine the excess entanglement that has to be incorporated into a many-body system to prepare the same phase with the number of sites doubled. The parameter s is the number of copies of the system at size L needed in order to produce a system of size 2L. Without going into details, suffice it to say that under this classification, matter belonging to different classes carries different amounts of entanglement. Matter whose entanglement entropy satisfies an area law is characterized by the value s = 1. It is only one of the many possible classes of matter.
The high-energy community, on the other hand, is predominantly concerned with continuum field theory. This has long been the primary framework used to study fundamental particles and their interactions, and continuum field theory is also one of the most successful tools in the study of condensed matter systems. This naturally leads to the question: what phases of matter can continuum field theory describe? Certainly, there are limits to what can be realized in field theory: for example, as is familiar to condensed matter physicists, not every lattice model has a continuum limit. An example is given by the Haah code [2], one of the "quantum solids" mentioned above, in which the jump in the ground state degeneracy between an even and an odd number of lattice sites is expected to preclude a continuum description. It is therefore natural to assume that the phases of matter describable by local field theory are highly restricted. In fact, it is argued in [1] that all local quantum field theories belong to the class s = 1, meaning that they must satisfy an area law. The argument purportedly works independently of whether the theory has a perturbative limit, and so it is important to subject this claim to scrutiny, particularly in regimes which have not received much attention from this perspective.
It is expected that, at any finite order in perturbation theory, the ground state entanglement receives only short-range corrections. Motivated by these considerations, we initiate here a study of contributions to entanglement entropy coming from non-perturbative effects in field theory, especially within the instanton formalism. As is well known, instantons can be used to capture tunneling effects, and therefore provide crucial information characterizing the ground state of a system, as well as the non-perturbative decay of a system from a false vacuum to its true ground state. Instanton contributions are accompanied by an integral over the instanton moduli space, leading to volume contributions to the partition function. Within the replica trick, the cancellation of the volume dependence between the partition function on the replica space, $Z_n$, and the $n$-th power of the flat-space partition function, $(Z_1)^n$, is required to recover the area law, making instantons a natural place to look for area law violations. The aim of this note is to investigate the effect of instantons on entanglement entropy in several contexts.
We begin in section 2 by outlining the computation of instanton contributions to entanglement using the replica trick. Probably the most popular tool for computing entanglement entropy in field theory, the replica trick introduces n copies of the ground state wavefunction, sews them together in a non-trivial way, analytically continues in n, and finally takes the n → 1 limit [3,4]. In the path integral formalism, this process boils down to computing the path-integral of the theory in a background with a conical singularity.
Our first application is to U(1) gauge theory in 2+1 dimensions, which has a dual description as the XY model. In the XY model, which is gapped, the entanglement across entangling surfaces much larger than the gap scale saturates to a simple area law. In the gauge theory description, the gap arises due to non-perturbative effects, and so if we want the qualitative behavior of entanglement at large scales to match between the two descriptions, we must include contributions to the entanglement entropy arising from instanton configurations. This system demonstrates the importance of instanton contributions for entanglement entropy, and provides a test of their computation via the replica trick. In section 4 we turn to entanglement entropy of the false vacuum in φ 4 theory in 1 + 1 dimensions. Upon integrating over classical solutions that break the replica rotation symmetry when the number of sheets n is an integer, one recovers an area law correction to the entanglement entropy. Section 5 examines a toy model for the vacuum of the Schwinger model in terms of a sum over Fermi surfaces. The results of both these sections are consistent with conventional wisdom.
We are thus led to the conclusion that non-perturbative corrections in field theory conform to the area scaling of ground states of local field theory. We further conclude that the time dependence of the entanglement entropy during non-perturbative vacuum decay obeys the Lieb-Robinson bound, i.e., the rate of change of entropy is bounded by an area law.
Let us note that instantons have featured in the literature in the context of supersymmetric Rényi entropy, where supersymmetric localization can be applied. (See, for example, [5,6].) In those studies, their contributions either happen to cancel out, or are sufficiently localized near the conical singularity that no modification of the area law is observed. Because supersymmetric Rényi entropy carries rather different physics, and instabilities of the type considered here do not arise in supersymmetric theories, our study offers a useful complement to these works.
Instantons, Entanglement, and the Replica Trick
Instantons are saddle points of the Euclidean path integral. Their contribution is typically suppressed by a factor of the form $e^{-c/g}$, with $g$ some coupling constant of the theory. For computations in quantum field theory, which are usually only under control at small coupling, these contributions are therefore much smaller than the perturbative corrections, if present. As a result, for most computable quantities, instanton contributions are not quantitatively important.
There are a couple of important exceptions to this rule. One is cases where some symmetry (usually supersymmetry) allows exact computations even at strong coupling. Another important exception is when instantons lead to qualitative changes in the behavior of physical quantities: thus, instantons play a crucial role in processes such as non-perturbative vacuum decay, where the perturbative answer is zero, and in the dynamics of 3d O(3) gauge theory, where instanton condensation induces a gap. Our primary interest, therefore, is in instanton corrections that lead to qualitative changes in the entanglement entropy. The three-dimensional O(3) Higgs model (or lattice U(1) gauge theory) is a prime example of this, as instanton effects take a theory that is perturbatively gapless, and hence has entanglement entropy that depends non-trivially on system size, and give it a gap, causing the entanglement entropy to saturate at the scale of the gap.
Our goal in this note is to compute the entanglement entropy when the entanglement region is the half-space $x^1 > 0$, using the replica trick. We proceed by replacing the $d$-dimensional Euclidean spacetime by the product of a cone in the $(\tau, x^1)$ plane and $\mathbb{R}^{d-2}$. The conical background with angular surplus $2\pi(n-1)$ has metric $ds^2 = dr^2 + r^2 d\theta^2 + d\vec{x}_\perp^{\,2}$, with $\theta \sim \theta + 2\pi n$. Denote by $Z^{(n)}$ the path integral on the cone geometry. When $n$ is an integer, the Rényi entropy is given by the standard formula $S_n = \frac{1}{1-n}\ln\big[Z^{(n)}/(Z^{(1)})^n\big]$. The entanglement entropy $S_{EE}$ can be computed by analytically continuing in $n$ and taking the limit $n \to 1$; expanding in a power series in $(n-1)$ gives $S_{EE} = -\partial_n\big[\ln Z^{(n)} - n \ln Z^{(1)}\big]_{n=1}$. In the semi-classical limit, the path integral should include, in addition to 1-loop perturbative effects, a sum over classical saddles. These saddles are the instanton solutions. The contribution from such instantons schematically takes the form $\sum_I \int d\mu_I(s)\, e^{-S_I}\, (\det{}' D)^{-1/2}$ (2.5). Here $I$ labels instanton sectors, $s$ and $d\mu_I$ are the coordinates and measure on the instanton moduli space in sector $I$, and $\det{}'$ is the determinant with zero modes omitted. For example, for a single instanton solution whose only moduli are the coordinates of its center $x$, the measure takes the form $d\mu = d^d x\, (S_1/2\pi)^{d/2}$, with $S_1$ the instanton action.
The moduli space arises when there are flat directions in the action, the simplest of which are due to translation invariance, and the measure arises from the integration over these flat directions -the zero modes of D. On the replica spacetime, translation invariance is broken, lifting the zero modes -i.e., there are no saddles localized about generic points in Euclidean spacetime. This does not however mean that such configurations should be dropped from the path integral: as instantons are separated from the conical singularity, they "forget" they are not on flat space and become arbitrarily close to saddle points. The same situation arises in the presence of instanton-anti-instanton configurations: these only become saddles asymptotically at large separations.
In theories that are infrared-free and gapless (at least before instanton effects are included), instanton effects are essentially captured by long-range free field configurations. The measure for a single instanton in this case is captured explicitly by the long-distance interactions with other instantons and with itself, which will play a rôle in section 3.
The second sort of situation we will discuss involves well-localized instantons in theories with a gap. In this case, the appropriate approach is to perform a sum over constrained saddles. In systems at weak coupling, for generic instanton configurations, the distance scale over which falloff occurs is of the order of coupling. As a result, when expanding the constrained saddles in the perturbation expansion, the integration region in which the effect of interactions (either with the singularity or with other instantons) is important, is small. 1 Moreover, the instanton action is large, which means that for generic configurations, the contributions to the measure coming from this source are much larger than that from such interactions.
The upshot is that at leading order in the perturbation theory expansion around instanton backgrounds, one can replace the small non-zero eigenvalue of D by zero, and for the purposes of the instanton measure treat the integration over instanton centers as being flat.
The U(1) gauge theory in (2+1)-d as a pedagogical example
We begin our study of instanton effects with gauge theories. Here we consider two important roles played by instantons in gauge theory. The first is in the structure of the ground state: while the field strength vanishes in vacuo, there can be pure gauge configurations that nonetheless are classified by a discrete number, the homotopy class of the gauge potential at infinity. To go from this infinite class of naïve vacua to the true vacuum requires a linear combination of such topological classes. The result is called the theta vacuum. The transition amplitude between different topological classes is captured by gauge instantons, which therefore play an important role in the structure of the true ground state of a gauge theory.
The second is confinement in 2+1 dimensions. Here, a theory whose classical behavior in the infrared is that of free U(1) theory can be gapped by the inclusion of instanton configurations carrying a non-trivial first Chern class with respect to the infrared U(1). As shown by Polyakov [7,8], while instantons are unlikely on small scales, at sufficiently large distances virtual instanton pairs proliferate and confinement results.
In both of these cases, it is reasonable to expect that instantons will play a role in the behavior of the entanglement entropy. In this section we will focus primarily on the second, because it must have an important qualitative effect on the entanglement entropy: when a theory has finite correlation length, the entanglement entropy is insensitive to the size of a sufficiently large system. In this case therefore, instanton contributions play a decisive role in obtaining the correct behavior, even at weak coupling. It is also a particularly nice case because we can compare directly to the dual description as an XY model. We end the section with some comments on the theta vacuum.
Free U(1) gauge theory in d = 3 and the XY model
Our first stop is instanton contributions in U(1) gauge theory in 2+1 dimensions. Since our interest is in instanton effects, we will work in Euclidean signature. The Euclidean action is the standard Maxwell action, with equations of motion $\partial_\mu F^{\mu\nu} = 0$. It is useful to Poincaré dualize by writing $F_\mu = \tfrac{1}{2}\epsilon_{\mu\nu\sigma}F^{\nu\sigma}$, so that the equations of motion become the statement that $F_\mu$ is (locally) a gradient. In flat space, instantons are localized, monopole-like solutions labeled by an integer $q \in \mathbb{Z}$. They have non-vanishing first Chern class, $\int_{S^2} c_1(F) = q$, on $\mathbb{R}^3 \setminus \{0\}$. In pure U(1) gauge theory such configurations have infinite action, but in lattice gauge theory, or in the infrared limit of a non-Abelian gauge theory with maximally broken gauge group, such configurations describe the IR behavior of configurations with finite action. Their action is of the order $\Lambda/e^2$, where $\Lambda$ is the scale at which the ultraviolet behavior becomes important. The details of the UV completion are not important here, beyond the particular value of the instanton action.
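For concreteness, a standard set of expressions consistent with the description above reads as follows; the overall normalization of the action is a conventional choice and is not taken from the source:

$S = \frac{1}{4e^2}\int d^3x\, F_{\mu\nu}F_{\mu\nu}, \qquad F_\mu = \tfrac{1}{2}\epsilon_{\mu\nu\sigma}F_{\nu\sigma}, \qquad \epsilon_{\mu\nu\sigma}\partial_\nu F_\sigma = 0,$

$F_\mu(x) = \frac{q}{2}\,\frac{(x-x_0)_\mu}{|x-x_0|^3}, \qquad \oint_{S^2} F_\mu\, dA_\mu = 2\pi q,$

so that the flux of $c_1(F) = F/2\pi$ through a sphere surrounding the instanton is the integer $q$.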
These solutions have a simple interpretation in terms of the dual scalar field $\chi$ defined by $F_\mu = \partial_\mu \chi$. Compactness of the gauge group implies that $\chi$ is $2\pi$-periodic. In this picture, the instanton background centered at $x_0$ is the response of $\chi$ to a delta-function source at $x_0$, i.e., it is proportional to the Green's function $G$ (note the normalization of $G$). These configurations arise as a natural component of the duality between U(1) gauge theory and the XY-model. In the path integral, we must sum over general configurations with $N$ instantons and all allowed values of the charge $q$; however, the behavior of the theory is dominated by the instantons with the smallest charge $q$ and their anti-instantons.
Instantons in the replica spacetime
Instanton solutions on the replica geometry are found the same way as in Euclidean space: requiring $\chi$ to be a classical saddle of non-trivial Chern class fixes it to take the form of the Green's function on the replica spacetime, $G^{(n)}$. To evaluate the path integral, we should sum over multi-instanton configurations, with $q_a \in \mathbb{Z}$ the charge of the $a$-th instanton and $x_a$ its position. The action of such a solution is a charge-weighted sum of Green's functions $G^{(n)}(x_a, x_b)$ over pairs of instantons, where we have dropped a boundary term. The terms in this expression with $a = b$ are UV divergent, corresponding to the action of a single instanton. In systems where the U(1) gauge field description only holds up to some UV scale (e.g., the Polyakov model or lattice gauge theory) this quantity is large but finite. We continue to denote these terms formally by $G^{(n)}(x_a, x_a)$. While formally infinite, they have an important (and finite) $n$-dependence that we return to below. In the special case $n = 1$, (3.6) reduces to the standard flat-space result. The path integral follows from summing over all instanton solutions. Contributions from instantons with composite charge are more highly suppressed, so the leading correction involves only instantons of the smallest possible charge $q$. In the resulting expression we have absorbed the single-instanton action, formally $G(x, x)$, into a measure term $\xi^{(n)}_a$. The measure term, unlike in the flat case, depends on the separation of the instanton from the replica singularity.
Comparison to the dual theory
The dual description is in terms of a $2\pi$-periodic scalar field $\chi$, with an action containing a mass parameter $M^2$. Expression (3.8) at $n = 1$ is known to be recovered by expanding this action in powers of $M^2$ [8].
In our case, $d = 3$ and $M^2 = 8\pi^2 \xi^{(1)} e^2$. This is the famous duality between the compact U(1) gauge theory and the XY model. From expression (3.8), we can see that the correspondence is unchanged for $n > 1$: the Green's function factor arises from the instanton interaction on one side of the duality, and from the two-point function $\langle e^{iq_a\chi}\, e^{iq_b\chi}\rangle$ on the other. This implies that the partition functions for these two effective field theories coincide on the replica spacetime, and hence that their entanglement entropies match.
We will discuss in the next section how these contributions can be computed on both sides of the duality, and make an explicit computation of $\partial_n Z^{(n)}|_{n=1}$.
Correction to the entanglement entropy
Instanton contributions to the entanglement entropy arise from the terms $\partial_n G^{(n)}|_{n=1}$. Let us derive an expression for this quantity. Consider the replica metric written in coordinates in which $\theta \sim \theta + 2\pi$; at $n = 1$, these are related to the Euclidean coordinates $\tau = x^0$ and $x^1$ in the usual way. (The motivation for these coordinates is that neither the infrared cutoff nor the periodicity of $\theta$ depends on $n$.) The Green's function at general $n$ satisfies a Laplace equation with a delta-function source, with $\delta_n(x, x')$ the covariant delta function and $\Delta^{(n)} = \nabla^2_{(n)}$. We then expand the metric, the Laplacian, and the Green's function in powers of $(n-1)$, where $\partial^2$ denotes the flat-space Laplacian and $\bar{g}$ the flat-space metric in a general coordinate system; in particular, $G^{[1]}$ denotes the first-order correction to the Green's function. Multiplying both sides of (3.11) by $\sqrt{g^{(n)}}$ and expanding in $n$, we obtain a relation determining $G^{[1]}$. This expression is true for any mass, but when the classical field theory is a free CFT we can apply CFT methods. This corresponds to expanding the dual theory (3.9) around the conformal point $M^2 = 0$. In this case [10], the $O(n-1)$ correction to the Green's function is given by an integral over the region $\mathcal{R} = \{\, y^\mu \mid y^1 > 0 \text{ and } y^0 = 0 \,\}$ of a connected CFT correlation function (the subscript $c$ denotes the connected correlator); the singularity is located at $y^1 = y^0 = 0$. The three-point function appearing there can be obtained using the methods of [11], and for $d = 3$ takes the form (3.16); the details are relegated to the appendix. As a simple and explicit illustration of this method, we calculate in the next section the contribution for the XY model in the simpler case of 1+1 dimensions, in both duality frames.
Explicit computation in 1+1 dimensions
The low-energy effective action of the XY model in (1+1)d is simply that of a free real compact scalar, with periodicity $\phi \sim \phi + 2\pi$. This model has vortex solutions, which are the (1+1)-d analogues of instanton solutions. As in (2+1)-d, vortex configurations have infinite action in free continuum field theory, but an appropriate UV completion can render their action finite; we assume this is the case. Since the classical theory is free, a general $N$-vortex solution is given by the sum of vortices centered at $N$ different locations. As before, the multi-vortex configuration on flat space is written in terms of the two-dimensional Green's function of a massless scalar field, and the vortices of the replica spacetime are obtained via the replacement $G \to G^{(n)}$. Computing as before, the $N$-vortex action on the replica geometry takes the analogous Green's-function form, provided we require $\sum_a^N q_a = 0$. (The action of a single vortex also has an IR divergence, but configurations with total vortex charge zero are well-behaved.) As before, the coincidence terms $\xi^{(n)} = G^{(n)}(x_a; x_a)$ of this expression are merely shorthand for the UV-regulated values, but there are non-trivial IR contributions due to interaction with the conical singularity that are taken into account in what follows. The contribution from the $N$-instanton sector then follows, and the entanglement entropy can be derived straightforwardly using the perturbative result (3.14); the relevant derivative involves $G^{[1]} = \partial_n G^{(n)}|_{n=1}$, as before. Using complex coordinates $z = re^{i\theta}$, we can write (3.14) in terms of $G$ and $G'$, shorthand for $G^{[0]}(z; x)$ and $G^{[0]}(z; x')$. (For $z = x + iy$, we take the measure $d^2z$ to mean $dx\, dy$.) Integrating by parts and evaluating the integral for $|z| \le L$ (with $L \gg |x|, |x'|$)
gives a logarithmically divergent constant, together with a non-trivial finite contribution (in this expression, $x$ and $x'$ denote complex numbers); the divergent contribution drops out in the zero-charge sector.
In terms of the flat-space instanton partition function, with $\xi = \xi^{(1)}$, the instanton contribution to the entanglement entropy (2.4) may be written in a form which can in principle be evaluated using (3.27). In this formula it is important that one interpret the coincidence terms $G^{[1]}(x_a; x_a)$ as representing the contribution of the conical surplus to the measure.
We can check this perturbative computation by comparing with the dual theory, whose expansion in $M^2$ can be computed using CFT methods. The dual field $\chi$ of periodicity $2\pi$ has, at $M^2 = 0$, a free-scalar action, allowing us to apply standard CFT techniques. The stress tensor is $T_{zz} = -\frac{1}{4\pi^2 K}\, {:}\partial\chi\, \partial\chi{:}$ (this is the standard field theory stress tensor, $T_{\mu\nu} = \frac{2}{\sqrt{g}}\frac{\delta S}{\delta g_{\mu\nu}}$). Plugging this in and evaluating the integral (3.33), we find that it matches the perturbative result (3.27). Therefore we see that, once the instantons for the replicated geometry are known, expanding their contribution in $(n-1)$ recovers the correct contributions to the entanglement entropy. Needless to say, here they obey the area law.
Comments on SU(2) instantons in 3+1 dimensions
Having discussed the simplest cases, let us offer some comments on non-abelian gauge instantons. The simplest instanton contributions to SU(2) theories are the BPST instantons [14], which satisfy the self-duality condition $G = \tilde{G}$ (3.34), where $G = dA + A \wedge A$ is the SU(2) field strength and $\tilde{G}$ its Hodge dual. 't Hooft [13] gave a simple ansatz for a family of flat-space multi-instanton solutions in a singular gauge, where $\rho_i$, which sets the instanton size, and the instanton centers $x_i$ are moduli to be integrated over in the path integral, and $\bar{\eta}_{a\mu\nu}$ is the 't Hooft symbol. The function $W$ appearing in the ansatz obeys two important properties: the multi-instanton solution is additive, and each term $\phi(x; x_i)$ in the sum is a Green's function for the 4-dimensional Laplacian. Being harmonic away from the instanton centers guarantees that the resulting field strength is self-dual, while the particular property of the singularities allows them to be eliminated by a singular gauge transformation. This suggests a simple way to generalize such solutions to an $n$-sheeted background: replace $\phi(x; x_i)$ by the Green's function of the replicated space. General considerations imply that the action of these particular solutions is unchanged from flat space, as the action of a self-dual configuration must be quantized in units of $8\pi^2/g^2$; the moduli space is modified, but only in such a way as to compensate for the increased volume of the replica spacetime. Any non-trivial contribution from these instantons must therefore come from the one-loop determinant, the computation of which is beyond the scope of the present paper. It is also worth noting that non-self-dual configurations play a role in the path integral. Since the action of such configurations is not quantized, it is natural to expect that in the replica spacetime, such configurations will give non-perturbative contributions to the entanglement entropy.
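For reference, the 't Hooft singular-gauge ansatz described here is conventionally written as below; the overall sign and index conventions vary between references and are not taken from the source:

$A^a_\mu(x) = -\,\bar{\eta}_{a\mu\nu}\,\partial_\nu \ln W(x), \qquad W(x) = 1 + \sum_{i=1}^{N} \phi(x; x_i), \qquad \phi(x; x_i) = \frac{\rho_i^2}{(x-x_i)^2},$

with each $\phi(x; x_i)$ harmonic away from $x_i$, as stated above.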
Finally, we note that gauge theory instantons in 4d are particularly interesting as regards entanglement entropy, since the existence of a size modulus leads to contributions from instantons of arbitrarily large spatial extent. It is conceivable that the effects of the conical singularity could therefore be felt far away from the singularity, leading to a violation of the area law. 8 We leave a detailed analysis of 4d gauge instantons to future work.
Entanglement entropy and false vacuum decay
One of the quintessential applications of instanton methods is non-perturbative vacuum decay. The simplest example is given by $\phi^4$ theory. Consider a scalar theory with a Euclidean double-well action [15][16][17], where $\epsilon$ is a small positive number breaking the degeneracy between the two minima located at $\phi = \pm a$. We choose the above $U(\phi)$ for concreteness, but any $U$ with two nearly-degenerate minima will do. The dominant instanton configuration is radially symmetric around the instanton center and satisfies the corresponding equation of motion. This equation is difficult to solve in general, but there is a useful approximate solution if the single-derivative term can be neglected. This is justified if the thickness of the interpolation region is much smaller than the size of the instanton, which is known as the thin-wall approximation. In this approximation, the result for the particular potential above is a wall profile whose thickness is set by $\mu^{-1}$, where $\mu = a\sqrt{\lambda}$ is the effective mass around the minima $\phi = \pm a$. Here the integration constant $R$ gives the instanton radius, found by extremizing the Euclidean action, where $T_{d-1} = \int_{-a}^{a} d\phi\, \sqrt{2U_0(\phi)}$ is the domain wall tension, $T_d = \epsilon$ is the difference in energy density between the two vacua, and $A$ and $V$ are the instanton surface area and volume. Extremizing with respect to $R$ fixes the radius and thus also the action of a single instanton. Note that (4.5) implies the radius is arbitrarily large at small $\epsilon$, guaranteeing self-consistency of the approximation.
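As a small concrete check of the extremization just described, the sketch below carries out the thin-wall computation symbolically in d = 2 Euclidean dimensions (the case used in the next subsection). It follows the standard Coleman treatment rather than the paper's equations (4.4)-(4.6), and the variable names are illustrative.

```python
import sympy as sp

# Thin-wall bounce in d = 2 Euclidean dimensions:
#   S_E(R) = T * (wall length) - epsilon * (enclosed area),
# with T the domain-wall tension and epsilon the vacuum energy splitting.
R, T, eps = sp.symbols('R T epsilon', positive=True)

S = T * 2 * sp.pi * R - eps * sp.pi * R**2           # action of a bubble of radius R
R_star = sp.solve(sp.Eq(sp.diff(S, R), 0), R)[0]     # critical bubble radius
S0 = sp.simplify(S.subs(R, R_star))                  # single-instanton action

print(R_star)   # T/epsilon: the radius grows without bound as epsilon -> 0,
print(S0)       # pi*T**2/epsilon: consistent with the self-consistency remark above
```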
Instantons in the dilute gas approximation
The instanton solution of the φ 4 theory describing tunnelling between a false vacuum and a true vacuum reviewed above is a typical example in which the single instanton solution is a localized field configuration falling off exponentially at large distances. At weak coupling, the dominant contribution to the path integral comes from configurations whose instanton density is of order e −S 0 . In this case, the contributing multi-instanton configurations are approximated well by the superposition of single instantons whose separation is much larger than their size. This is the dilute gas approximation.
Leading contributions to the path integral come from three sources: the measure, the instanton action, and the 1-loop determinant. The measure is determined by the instanton action through the presence of zero modes. While exact zero modes are absent in multi-instanton solutions (as well as in single instanton solutions in the replica spacetime), for small coupling these corrections can be ignored. The final expression takes the dilute-gas form, where $K = \frac{1}{2}\left(S_0/2\pi\right)^{d/2}\left(\det D^{(1)}/\det D^{(0)}\right)^{-1/2}$ and $V$ is the volume of Euclidean spacetime. The factor of $\tfrac{1}{2}$ comes as usual from carrying out the correct analytic continuation of the path integral when $D^{(1)}$ has negative eigenvalues [16].
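For concreteness, the standard dilute-gas resummation in which the quantities $K$ and $V$ defined above appear is (this is the textbook form and may differ from the paper's own normalization):

$Z \;\simeq\; Z^{(0)} \sum_{k=0}^{\infty} \frac{1}{k!}\left(K\,V\,e^{-S_0}\right)^{k} \;=\; Z^{(0)}\exp\!\left(K\,V\,e^{-S_0}\right),$

with $Z^{(0)}$ the zero-instanton (false-vacuum) partition function; the replica-geometry analogue of this expression appears in (4.19) below.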
Single instanton contributions in d = 2
We now turn to the computation of single instanton contributions to the entanglement entropy. For simplicity we work in 1 + 1 dimensions; although this case is special in certain particulars, we expect that the essential process is not modified in higher dimensions. Remember that the thin wall approximation implies that the instanton size is much larger than the inverse mass in the false vacuum. This allows us to reduce the computation to two cases, shown in figure 1.
When the instanton is well-separated from the replica singularity (left side of figure 1), the solution is the same as in the flat spacetime; since φ is effectively constant outside the bubble, there is no incompatibility with the 2πn periodicity of θ on the replica spacetime. (In a single instanton configuration, of course, the instanton lives on one replica sheet only.) "Well-separated" means that the separation of the wall from the singularity is much larger than the wall thickness, but as the wall is thin compared to its size, this is true for typical instanton configurations. The action of such a configuration takes the same value S 0 as in flat spacetime.
To obtain the partition function we must also compute the relevant determinant and measure factors. According to the discussion of section 2, the complete expression involves $D^{(k)}_n$, the kinetic operator around the $k$-instanton background in the $n$-fold replica spacetime ($\det{}'$ denotes the determinant with zero modes omitted, and the factor of $\tfrac{1}{2}$ comes from the presence of a single negative eigenvalue). Locality guarantees that to leading order the effects of the replica geometry and the instanton on the determinant are independent.
A gross approximation to the determinant would ignore the instanton entirely, which would simply give a factor $(\det D^{(0)}_n)^{-1/2}$. This is of course modified by the presence of the instanton. As the instanton is located far away from the conical singularity, modes near the instanton wall will not see the conical singularity at all. The resulting single-instanton contribution therefore involves $A$, the total area of the flat Euclidean geometry (not the replica geometry), $A(R) = \pi R^2$, the instanton area, and $Z^{(0)}_n$, the 0-instanton partition function on the replica geometry. When the singularity is contained well inside the instanton bubble, as in figure 1(b), the constancy of $\phi$ in the interior once again implies that the instanton solution has the same functional form; the difference is that the cut intersects the instanton wall, and therefore the instanton extends to all $n$ sheets of the replica geometry. The radius of curvature of the instanton wall is fixed by the domain wall tension, so that the classical solution will look like $n$ copies of the same circle of fixed radius, each one glued to the next across the cut. The action of the configuration is the length of the wall times its tension, plus $V(\phi_-)$ times the area of the interior. Both of these quantities are $n$ times the flat-space value, and so the action is simply $S = nS_0$.
In the measure factor, $S_0/2\pi$ is replaced by $nS_0/2\pi$; on the other hand, because the instanton extends over $n$ sheets rather than one, the solution is actually $2\pi$-periodic, and therefore we should only integrate the angular coordinate of the instanton zero-mode over the interval $(0, 2\pi)$. The contribution to $Z$ then involves $D^{<}_n$, the kinetic operator for $r < R$. The factor $2^{1-2n}$ is included because we now have $2n - 1$ negative eigenvalues (which we will confirm momentarily).
We must now evaluate the determinant. It gets contributions from two widely separated scales. On the one hand there are the "fast" modes associated to wave-like fluctuations of the scalar field; the eigenvalues of the kinetic operator on these modes are of order $\mu^2$, and they are the only ones present in the zero-instanton sector. They are responsible for the saturation of the entanglement entropy at lengths $L \gg m^{-1}$. On the other hand, for the instanton background there is a new class of "slow" fluctuations associated to deformations of the domain wall. Heuristically, we split the determinant into a product of contributions from each class of modes, $\det D = \det_s D \cdot \det_f D$. While strictly speaking the high-lying slow modes and low-lying fast modes overlap, the high-lying slow modes are insensitive to the details of the shape of the instanton, so that the factorization is accurate provided the answer is written in terms of ratios of slow- and fast-mode determinants, such that the relevant operators have the same density of eigenvalues at scales much larger than $m^{-1}$ but much smaller than $R$.
The slow modes for the instanton come from the fluctuations of the domain wall. If we allow the domain wall to fluctuate around a constant radius, $r = R + \delta r(\theta)$, we obtain a corresponding fluctuation action. For the instanton background, $\lambda = -1$, but we leave it as a free parameter for future convenience. The integration limit $\beta$ depends on the background: if the instanton encircles the origin then $\beta = 2\pi n$, otherwise $\beta = 2\pi$. The slow contribution to the path integral, $[\det_s D]^{-1/2}$, is simply a harmonic oscillator partition function, $Z(\beta, \lambda)$. We are interested in the behavior of $Z(2\pi n, \lambda)$ as $\lambda \to -1$. Note that $Z$ has a singularity at this point, due to the two zero modes associated to translations. Since their contribution is already captured in the instanton measure, the corresponding eigenvalues should be omitted; this is done via multiplication by a factor involving $2\pi n$, and because the zero mode is doubly degenerate we are left with (4.14). We wish to compare with the slow modes in the single instanton sector, in such a way that the eigenvalue density matches at high energies (Appendix A of [18] gives a computation that exhibits some similar features). This is accomplished by comparing to $[Z_1(2\pi, \lambda)]^n$, with $Z_1$ the corresponding single-sheet partition function, which yields the final result. The ratio of the fast determinants can be argued to take the following form. At distances from the domain wall well exceeding the correlation length, the path integral reduces to $n$ massive scalars, giving a vacuum determinant contribution $[\det_f D^{(0)}_1]^{-n/2}$. For a generic instanton contribution the domain wall is well-separated from the singularity, so that the contribution from modes localized near the domain wall can be taken into account by including a correction factor $[\det_f D_1]^{-n/2}$. Finally, we must take into account the behavior near the singularity by including a factor $[(\det_f \tilde{D}_n)/(\det_f \tilde{D}_1)^n]^{-1/2}$; here $\tilde{D}$ is the kinetic operator expanded around the true vacuum, whose mass generically differs from that of the false vacuum.
Putting everything together gives the full determinant factor (the bare replica geometry has no slow modes), and therefore the contribution of a single instanton with $r < R$ (recall that the fluctuation operator around a single flat-space instanton has a single negative eigenvalue). There is one further subtlety that arises in the case $d = 2$, for here at most one instanton can wrap the conical singularity. The full partition function is therefore given by $Z_n = Z^{(0)}_n\, e^{\,n[A-A(R)]\,K e^{-S_0}}\big[1 + A(R)\,K e^{-nS_0}\, P_n\, (-i\pi\kappa)^{n-1}\big]$ (4.19), and so the entanglement entropy becomes (4.20). We note that the contribution of $P_n$ only depends on the relative entanglement entropy in the two vacua, and that in the special case where both vacua have the same mass, $P_n = 1$.
There are several comments in order here. Since the leading-order instanton corrections to the entanglement entropy come entirely from instantons wrapping the singularity, in the limit of large intervals ($L \gg R$) the entanglement entropy will be insensitive to size, and hence in this limit the entanglement entropy satisfies the area law. (We will come back to this point momentarily.) More surprisingly, we should note that, as $K$ is imaginary [15][16][17], the non-perturbative correction to the entanglement is imaginary. This raises the question of how one should interpret an imaginary entanglement entropy. Let us compare this computation with the original calculation in which the decay rate is extracted. There, one could understand the decay of the false vacuum wavefunction in time by Wick rotating the instanton result. Now, the area term $A(R)$ is a Euclidean volume term evaluated in the $x$-$T$ Euclidean plane, and so under Wick rotation $T \to it$ its measure should acquire an overall factor of $i$. Therefore, including the contribution of $K$ recovers a real value.
What is the meaning of this value? Our methods implicitly assume that the cutoff on Euclidean time is large compared to the instanton scale, and our result is therefore most naturally interpreted as the contribution after a time has passed that is much larger than the instanton size $R$ (in units where $c = 1$), but small enough that the vacuum remains mostly undecayed. Since our formulae really reflect the decay from the free false vacuum to the real vacuum, this suggests that there is an increase in entanglement entropy on time scales $t \lesssim R$ that saturates for $t \sim R$, and whose final value is captured by our formula; while for $R \ll t \ll \Gamma^{-1/2}$ (where $\Gamma$ is the vacuum decay rate per unit length), there is no time dependence. On the other hand, we expect a non-trivial time dependence at subleading order in the perturbation theory expansion, which will pick up contributions from exponential tails in the domain wall shape, and from multi-instanton configurations. The late-time dynamics of the entanglement entropy should presumably be picked up by these subleading contributions. This suggestion seems consistent with the observations of [19] regarding tunneling in a two-particle quantum mechanics model, where the leading time dependence of the entanglement entropy appears to arise at two-instanton order.
Of course, this is by no means a proof of how the formula should be interpreted, but only a guess suggested by analytic continuation. We leave a more rigorous analysis to future work.
Entanglement entropy of a finite interval
Above we evaluated the leading contribution to the entanglement entropy when the system is divided into two half-spaces. Let us now turn to the entropy of entanglement between a finite interval of length L and its complement.
The behavior of this case depends qualitatively on the relative size of $L$ and $R$. When $R < L$, the computation is essentially as above, except that the correction to the entropy (at leading order in $e^{-S_0}$) is twice as large, because we now have contributions from instantons centered at both endpoints of the interval. On the other hand, when $L < R$, there is an additional type of contribution, coming from those instantons which encircle the entire interval. We divide the (approximate) moduli space of single instantons into three regions, $\mathcal{R}_{0,1,2}$, labeled by the number of singularities lying inside the true vacuum region.
In the new configuration, the instanton wall does not cross the branch cut, and therefore each connected component of the wall has length $2\pi R$. However, since the branch cut lies in the true vacuum region, each sheet must have its own instanton wall, and the instanton action is therefore $S = nS_0$. Instantons on separate sheets, however, are free to move independently. We must therefore integrate over $n$ instanton center-of-mass variables, each with the appropriate measure. Let us consider the contribution from integrating over a single one of these variables, $x$; the integration region is restricted accordingly. We also require the functional determinant $\det D$. For this instanton, the slow modes factorize into $n$ copies of the slow modes for a single instanton in the flat ($n = 1$) geometry, so that $\det_s D = (\det_s D_1)^n$. The fast modes are essentially as before, except that now we have two singularities in the interior region. Hence we obtain (4.27), allowing us to write (4.29). (Note that in this case there is one negative eigenvalue per sheet, giving $2^{-n}$.) The other contributions follow similarly. As before, for $d = 2$ only the instantons in region $\mathcal{R}_0$ should be summed over in the dilute gas approximation, giving the total partition function, and hence the final contribution to the entanglement at leading order. The $L$-dependence of this expression is of some interest. For $m^{-1} \ll L < 2R$, the entanglement entropy experiences steep growth in $L$ (due to the $\tan^{-1}$ appearing in $A_{\mathcal{R}_2}$), while as $L \to 2R$ the result reduces to the one in (4.20). Therefore, instanton effects give rise to a functionally strong (though small in absolute terms) dependence on interval size up to the instanton scale, even when this scale is much larger than the correlation length.
Decay, thermalization, and volume contributions
A system undergoing vacuum decay exhibits an energy gap between the false and true vacua. After long times, the resulting energy should be contained in a roughly thermal soup of particles in a thermalized state. The entanglement entropy of thermal states is well known to have contributions scaling with volume. Yet our computations see only area law contributions; why should this be?
Our results are not in conflict with this situation due to the qualifier "at late times". During the growth of a bubble, essentially all excess energy goes into accelerating the bubble wall [17]. Particles are produced only after bubbles are sufficiently large and common that they begin to collide. As the methods used here describe vacuum evolution only at small bubble density, we should not expect our computation to yield thermal behavior. Our results should therefore be interpreted as the contribution to entanglement due to well-separated bubbles at short times.
The theta vacuum and coherent sums over Fermi-surfaces
We would finally like to turn to a somewhat more detailed study of theta vacua. In the examples considered above, there is no evidence that the true (theta) vacua of a gauge theory should exhibit exotic scaling of entanglement entropy. It was also argued in [1] that field theories belong to class s = 1 in the s-source renormalization scheme, and in these models, the entanglement entropy of the ground state necessarily satisfies an area law. The ground state degeneracy of models in this class should not scale with system size. The perturbative ground states of gauge theories, on the other hand, are infinite in number, being labeled by a winding number; imposing locality requires physical states to transform by a phase under shift in winding number, so that a physical vacuum is an infinite superposition of perturbative vacua. As we shall see, upon UV regularization this degeneracy can be understood as scaling with system size. In this case, the dilute gas approximation is no longer valid. Therefore it is worth inspecting theta vacua in greater detail.
The Schwinger model
The vacuum structure of a generic gauge theory would be quite complicated to analyze directly. We therefore wish to focus on the simplest manifestation of θ vacua possible, and the (1+1)-dimensional Schwinger model [21] provides an exactly solvable gem.
The Schwinger model is nothing other than a (1+1)-d Dirac fermion minimally coupled to a gauge field, with the standard minimal-coupling action. This theory is exactly solvable, since the Green's function of the fermion can be computed exactly for arbitrary $A$ [22], allowing $\psi$ to be integrated out directly. The theory can also be solved in the canonical formalism, and the ground state wavefunction has been explicitly written down in [23,24]. Put the theory on a spatial circle of length $L$. The theta vacuum then takes the form of a phase-weighted sum over states $|N\rangle$ (in temporal gauge $A_0 = 0$), where $c$ is the Wilson line around the spatial circle. The state $|N\rangle$ is essentially the Dirac sea, with the $N$ lowest-lying particles added to (or removed from) both the positive- and negative-chirality sectors. (These numbers must coincide in a physical state, where $Q_{tot} = 0$.)
The factor of $U_n$ arises from the Bogoliubov transformation diagonalizing the Hamiltonian. Since this Bogoliubov transformation defines the vacuum state of a massive bosonic mode, it can only lead to an area-law scaling of the entanglement entropy. The explicit form of the function $f_0$ will not be needed in what follows. What is noteworthy here is that the ground state wavefunction is an infinite sum of Fermi surfaces. In a discretized setting, this sum would be bounded by the total number of fermion sites. This means that the number of states that the instanton hops between actually scales with system size. This is a rather peculiar feature, and it is tempting to suggest that it gives the wavefunction a chance of violating the area law more severely than a simple Fermi surface does. In the following, therefore, rather than working with the exact ground state wavefunction of the Schwinger model, we study a toy version in which we dispose of the oscillatory exponential factor and the gauge field, and focus on a coherent sum of Fermi surfaces. We would like to inspect how much extra entanglement such a sum can lead to.
Sum over Fermi surfaces as a toy model
Setup. We work with a toy analog of the Schwinger model theta vacuum: a coherent superposition of Fermi surfaces. Our system is a (1+1)-dimensional lattice model with $L$ lattice sites. We make use of the Jordan-Wigner transformation mapping a chain of distinguishable spins to a system of spinless fermionic degrees of freedom, $c_j = \prod_{i<j}\sigma^z_i\, \sigma^+_j$ with $\sigma^+_j = (\sigma^x_j + i\sigma^y_j)/2$. A Fermi surface is obtained by acting on the vacuum state $|0\rangle$ with the fermionic creation operators for the momentum modes $k_n = 2\pi n/L$; here $E_n = \sqrt{(2\pi n/L)^2 + e^2/\pi}$ is the corresponding single-particle energy. In this state, all momenta up to the Fermi level ($k_F = k_{p-1}$) are filled. To make contact with the theta vacuum, we now consider a linear superposition of Fermi surfaces weighted by some function $f(p)$. We will explore the entanglement entropy for functional forms of $f(p)$ inspired by the Schwinger model.
Computing the entanglement entropy. We first pick a contiguous subsystem of size $L_A \le L/2$, and compute the reduced density matrix $\rho_A = \mathrm{Tr}_B |\Psi\rangle\langle\Psi|$. $S_{EE}$ is then given in terms of the eigenvalues of $\rho_A$. We do this for several choices of $L$ and $L_A$, and compare the result with the case of a single Fermi surface. We find that the coherent sum weighted by $f(p)$ is characterized by more or less the same amount of entanglement as a single Fermi surface. It is possible to tune $f(p)$ to acquire extra entanglement, but the dependence on $L$ is nowhere near as strong as a volume law.
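A minimal numerical sketch of this procedure is given below. It is not the authors' code: the lattice size, the convention of filling the momentum modes n = 0, ..., p-1, and the weight f(p) ~ exp(-beta*p) are illustrative assumptions; only the overall strategy (Jordan-Wigner fermions, a coherent sum of Fermi seas, exact diagonalization of the reduced density matrix) follows the description above.

```python
import numpy as np
from functools import reduce

L, L_A = 8, 4                              # chain length and subsystem size (illustrative)
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # sigma^+ in the basis (|up>, |down>)
id2 = np.eye(2)

# Jordan-Wigner creation operators: c_j^dag = (prod_{i<j} sigma^z_i) * sigma^+_j
c_dag = [reduce(np.kron, [sz] * j + [sp] + [id2] * (L - j - 1)) for j in range(L)]

def c_dag_k(n):
    """Momentum-space creation operator for k_n = 2*pi*n/L."""
    k = 2.0 * np.pi * n / L
    return sum(np.exp(1j * k * j) * c_dag[j] for j in range(L)) / np.sqrt(L)

vacuum = np.zeros(2 ** L, dtype=complex)
vacuum[-1] = 1.0                           # all spins down = no fermions

def fermi_surface(p):
    """Fill the p lowest momentum modes n = 0, ..., p-1."""
    psi = vacuum.copy()
    for n in range(p):
        psi = c_dag_k(n) @ psi
    return psi / np.linalg.norm(psi)

# Coherent superposition of Fermi surfaces, weight f(p) ~ exp(-beta * p) (assumed form).
beta = 0.7
psi = sum(np.exp(-beta * p) * fermi_surface(p) for p in range(1, L + 1))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the first L_A sites and the entanglement entropy.
M = psi.reshape(2 ** L_A, 2 ** (L - L_A))
rho_A = M @ M.conj().T
evals = np.linalg.eigvalsh(rho_A)
S_EE = -sum(x * np.log(x) for x in evals if x > 1e-12)
print(f"S_EE = {S_EE:.4f}")
```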
Numerical results. We first compute $S_{EE}$ of a single Fermi surface for several lattice sizes. We chose the values $L = 4, 6, 7, 8$; calculations up to $L = 8$ suffice to acquire a qualitative picture. For $L = 8$ and $L_A = 4$, if we consider a Fermi surface obtained by filling all modes up to $k = 6\pi/L$, we find from our numerical analysis that only 8 of the 16 eigenvalues of the reduced density matrix $\rho_A$ contribute to the entropy, as shown in the left-most plot of Figure 4. One can easily check that the number of contributing eigenvalues increases with system size. In 1d, where an "area" is a point, this constitutes a violation of the area law; this is expected in the presence of a Fermi surface. For each of these cases, $S_{EE}$ of a single Fermi surface (obtained by filling all modes up to $k = 6\pi/L$) is shown in black. Note that $S_{EE}$ is greater than that of a single Fermi surface only for a specific range of $\beta$.
Next, we consider the superposition of Fermi surfaces with a weight function of the form $f(p) = \alpha\, e^{-p\beta}$ (5.10), in analogy to the theta vacuum of the Schwinger model. Figure 3 gives plots of $S_{EE}(\beta)$ for four choices of (sub-)system size. The figures show that there is a mild enhancement in entanglement entropy for some range of $\beta$. The $\beta$ dependence of the eigenvalues contributing to $S_{EE}$ is plotted in Figure 4 for the same four systems. These plots show that the entanglement entropy rises above that of a single Fermi surface precisely when more eigenvalues are contributing. As we can see from the entanglement spectra, however, the number of contributing eigenvalues is not very sensitive to $\beta$. At most about 8 of the 16 eigenvalues contribute to the entropy, more than for a single Fermi surface, but not by much. In all of the cases considered above, it is clear that the entanglement entropy does not exhibit volume law behavior, as this would require the number of participating eigenvalues to be of the order of $2^{L_A}$.
Conclusions
In this note, we initiated the study of non-perturbative corrections to the entanglement entropy by means of instanton calculus. Our computations were made by applying the replica trick directly to instanton solutions in the replica geometry. Using the duality between the U(1) model and the XY-model, we showed that in this case our prescription preserves the duality map between path integral contributions, and therefore produces the correct entanglement entropy. Such insights allowed us to find explicit instanton solutions in the replica geometry, both in U(1) theories and in more general non-abelian Yang-Mills theories. Applying this prescription to several cases, we find that the non-perturbative contributions obey the area law. Moreover, for non-perturbative vacuum decay in $\phi^4$ theories in the dilute gas limit, area law behavior of the entanglement entropy can be demonstrated explicitly.
We compared these results with numerical computations in discrete analogs of these models. As an analog to the theta vacuum of the Schwinger model, we considered entanglement in a coherent superposition of Fermi surfaces. We also studied the time dependence of entanglement entropy in the perturbed transverse Ising model as an analog of non-perturbative vacuum decay in φ 4 theory. In agreement with the instanton calculations, the area law is preserved in both cases. | 11,782.4 | 2017-03-05T00:00:00.000 | [
"Physics"
] |
Pseudoboehmite or graphene oxide, what is the best additive for natural polymer pla—poly (l-lactic acid)?
In cases of severe injuries or burns, skin grafts (scaffolds) are often necessary as skin substitutes. To avoid harming the patient or a donor, research is needed into heterografts formed by biomaterials that are biodegradable and bioabsorbable in the human body, as is the case with poly(L-lactic acid) (PLA). However, natural polymers placed on the skin undergo considerable degradation in media with large amounts of carbon and water and have little durability due to their low ductility. For this proposal, graphene oxide (GO) and pseudoboehmite (PB) nanocharges were obtained. It is believed that nanofillers dispersed in the polymer matrix can improve mechanical properties related to ductility and tenacity without loss of thermal properties. Subsequently, dispersion methods were employed to incorporate the hybrid nanocharge into the poly(L-lactic acid) (PLA) matrix, forming the material for the desired scaffold. For this research, injection-molded specimens of pure PLA, PLA structured with GO nanoparticles, and PLA structured with PB nanoparticles were manufactured. Microstructural and mechanical characterizations were performed on the specimens to compare the effect of the nanocharges on the bulk material. The results showed that increasing the concentrations of the PB and GO nanofillers increased tenacity and ductility compared to pure PLA, properties that are desired in the scaffold structure.
Introduction
About 1 million people suffer burns every year in Brazil (Vasconcelos 2017), and at least 10 to 15% seek medical or hospital assistance due to the severity of the lesion. The solutions to this type of problem can be debridement, washing, and application of dressings, and for most cases that arrive at the hospital, surgeries to insert grafts.
Skin grafts are skin substitutes that can be applied to people who suffer burns and severe skin injuries. A scaffold is a tissue or bone graft structure ideal for hosting biological components, ensuring their permanence at the site of the lesion and assisting their medicinal action (Santana 2016).
Pseudoboehmite (PB, AlOOH·xH2O) is an aluminum oxide-hydroxide and a direct precursor of alumina (Al2O3). It is produced by the sol-gel method, in which a solution obtained by hydrolysis undergoes a transition to a gel system; this method is very effective for metal oxide precursors such as aluminum chloride or nitrate (Munhoz et al 2007).
Using aged PB provides materials with greater homogeneity in pore dimensions, high reactivity, and high specific surface area (Munhoz et al 2006). Furthermore, PB can be recrystallized in this ceramic form, that is, it can be aged, which changes its specific surface area (Munhoz JR et al 2012). PB has also been used for controlled drug release (Souza 2013).
Biocompatible material is defined as any material or combination of materials, synthetic or natural, that can be applied as part of a human system and that simultaneously treats or replaces any tissue or organ in the body. Bioabsorbable material does not need to be removed after grafting, because the body itself can absorb and dissociate it in the reaction region, avoiding future surgical procedures (Brito et al 2009).
Among non-toxic bioactive materials, biodegradable polymers constitute a family of thermoplastics that present some instability in environments where carbon dioxide, water, and biomass (such as in human tissue) are found, suffering degradation, and showing low tenacity.
Lactic acid, the monomer of the natural polymer poly(L-lactic acid) (PLA), can be obtained by fermentation with a range of microorganisms and is widely used at scale to synthesize a series of polymers (Ohara, Yakata 1996).
Lactic acid is a chiral molecule with two stereoisomers, L- and D-lactic acid (Lasprilla 2011). Due to its carbon chain, it should be a good binder for mixtures with graphene and its derivatives. In addition, PLA is classified as non-toxic, bioactive, and biodegradable; therefore, it can be considered an excellent material for scaffolds (Lasprilla 2011).
Biocompatible scaffolds, also called artificial skin, artificial bone, or material deposition receptacles, which will be in contact with skin or bone, can be formed from ceramic materials such as hydroxyapatite, from metals such as titanium bone grafts (Gonçalves 2012), or from biodegradable polymers such as chitosan or fibrin (Bhardwaj; Kundu 2011).
In this work, scaffold precursor materials were obtained from composites formed by biodegradable polymers structured with both GO nanoparticles and PB particles.
The main intention is to improve the current scaffolds in terms of mechanical properties, preserving the thermal properties so as not to affect degradation.
The main problems encountered are tissue acceptance by the patient, film breakage due to low ductility, weakness in the placement and extraction technique, among others.
As the evolution of medicine is directly linked to the evolution of engineering, especially materials engineering, it is credible to conceive, develop, process, and characterize technological biomaterials that can serve as scaffolds, as an alternative for some cases of injury.
Resources and methods
Independent syntheses of the two precursors, PB and GO, were carried out. In the following items, the specific processes of their syntheses are described, as well as the characterizations used to identify and measure their properties. After the description of the precursors, the methodology to obtain and characterize the hybrid nanocharge of PB and GO is presented.
Subsequently, dispersion methods were used to incorporate the hybrid nanocharge into the PLA matrix, forming the material for the desired scaffold. Microstructural and mechanical characterizations complete the methodology used.
A. Materials and process to obtain pseudoboehmite
PB was obtained by the sol-gel process according to Munhoz et al (2006). The first stage of PB synthesis consists of obtaining two precursors, the first being an aqueous solution of aluminum nitrate, Al(NO3)3·9H2O, and the second an aqueous solution of polyvinyl alcohol (PVAL).
Al(NO3)3·9H2O was solubilized in deionized water in the proportion of 412.5 g of solute to 500 g of solvent. PVAL was also solubilized in deionized water, at a ratio of 21.7 g of solute to 250 g of solvent, forming a viscous gel before reaching its full homogenization.
As the solubilization of both components is not immediate, the solutions were obtained with the aid of a magnetic stirrer. Each flask, containing its respective reagents and a magnetic bar, was suspended 0.5 cm above a magnetic stirrer and kept rotating at a constant angular velocity of 240 rpm, mixing the components for 30 min. When both processes were completed, the aqueous PVAL solution was added by pouring it over the aqueous solution of hydrated aluminum nitrate.
The mixture obtained was then added dropwise (each drop of 0.05 ml) into a beaker previously containing an aqueous solution of ammonium hydroxide (NH4OH) with a concentration of 25-30 m%, prepared with 450 g of ammonium hydroxide in deionized water and kept in a thermostatic bath between −10.0 °C and −7.0 °C. At the end of the addition, with the aid of a pH meter, NH4OH must be added until pH = 9 is reached.
For the second stage, which consists of aging the PB, the solution obtained is placed in a volumetric flask wrapped in a heating mantle, keeping the temperature at 80 °C for approximately 7 days. After aging, recrystallization and fibril formation occur in the PB, making the structure more oriented. Graphite oxide, in turn, is obtained by chemical oxidation of graphite, which modifies the hybridization of the carbon through oxygenated functional groups. At this stage, there is a reduction in the Van der Waals forces between the graphite layers, due to the intercalation of molecules and ions between these layers (sp2 carbon planes), causing them to expand and making them more susceptible to breakup. After the graphite oxide is obtained, the material is treated with ultrasound, since vibration at powers equal to or greater than 580 W is an efficient sonic agent for separating the layers already weakened by exfoliation.
B. GO materials and procurement process
In the first route (Route 1), the process of synthesizing GrO-I was based on the modified Hummers method (Hummers, Offeman 1958), which is frequently found in the literature, as it is highly successful. Figure 1 illustrates the synthesis of graphite oxide, a process prior to obtaining graphene oxide. Figure 2 shows the synthesis of graphene oxide from graphite oxide. The process in the second route (Route 2) follows the same procedures as Route 1, except for the synthesis time, which was increased to 48 h before the addition of hydrogen peroxide (H2O2) at a rate of 1 ml every 5 s.
Through Route 2, the material exhibited less roughness after drying, more exfoliation with a smoother appearance, and more adherence to the surface of the container.
After obtaining the different synthesized GrO, they were placed in a glass container (watch glass) and dried in an oven at 50°C for 24 h.
After drying, the materials obtained (GrO-I and GrO-II) were broken up into small flakes with a maximum size of 2 mm and thickness less than or equal to 0.01 mm (flakes with two-dimensional aspects), with the aid of a blender.
Subsequently, dispersions of graphene oxide in water were prepared from the graphite oxides obtained (GrO-I and GrO-II) at a concentration of 1 mg ml−1, with the aid of an ultrasound bath at approximately 100 kHz and a nominal power of 580 W, for 120 min. The dispersions obtained were brown in color, completely homogeneous, and dispersed in deionized water. For further characterization, they were then deposited in drops of 10 μl each on silicon substrates. As a result of these processes, the GOs dispersed in deionized water: GO-I after 24 h of dispersion and GO-II, in a smooth and fine layer, after 48 h of dispersion.
C. Obtaining PLA nanocomposites structured with GO or PB
Initially, the PLA in the form of granules was cooled in liquid nitrogen and then ground in a SEIBT® knife mill to make a powder with a maximum diameter of 1.19 mm (granulometry 70 mesh). The powder was kept in sealed polypropylene packages until use.
The nanocomposites were made by pre-mixing the powdered PLA with the respective nanocharge (GO or PB) by employing the rotovaporization process. The process aims to ensure complete homogenization of the PLA polymer matrix and the desired nanocharge.
Subsequently, the mixture obtained was processed by injection molding in a Demag Ergotech injector (screw diameter of 25 mm and L/D 20) with heating profile T1=160 °C, T2=165 °C, T3=170 °C, and T4 (injection nozzle)=180 °C. The other injection parameters are provided in table 1; they were used to obtain the different specimens and to test them in accordance with the respective standards.
The specimens used were ASTM type IV. Table 1 indicates the parameters of the injection molding process for the tensile, bending, and impact resistance specimens.
All compositions were previously dehumidified for 24 h at 40°C in a vacuum oven, before forming by injection. To perform the tests, the specimens were stored for 48 h at an ambient temperature of (23±2°C) in a desiccator.
D. Injection process to obtain proof test bodies
Mechanical tests are important to analyze properties and compare between pure and structured material with dopants or additives. However, mechanical testing requires a specific specimen standard from the sample that conforms to the specifications of the testing machines.
Each of the compositions (PLA, PLA/GO, PLA/PB) went through the injection process to form the tensile and impact test specimens.
Results and discussion
A. XRD
X-ray diffraction tests were performed for all samples. Figure 3 shows the diffractogram of the graphite sample.
The diffractogram of graphite has a characteristic peak at 2θ=26°, which is in accordance with the literature (Miranda et al 2019; Camargos; Semmer; Silva). Figure 4 shows a comparison between the three diffractograms, highlighting the secondary and the GO-II peaks, and indicates the characteristics closest to commercial GO and to the GO most used in academic research (Gascho et al 2019).
As seen in figure 4, GO-I, which was obtained by oxidation for 24 h, presented identifiable crystalline phases: (1 1 0) at 2θ=20° (in addition to the expected phase at 2θ=11°) and (0 0 1) at 2θ=39°, nearly as prevalent as the (0 0 2) phase at 2θ=11°, with relative emphasis on the lower peaks. For GO-II, obtained by oxidation for 48 h, although these phases are present, they are not as accentuated as the (0 0 2) phase at 2θ=11°, which characterizes GO. Comparing the two XRD patterns, an impurity is observed at 2θ=20°, with appreciable relative intensity in GO-I but not in GO-II, which is why the GO-I sample was discarded, as no impurities are desired in the scaffold polymer matrix.
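As a quick check on what these peak positions imply, the interlayer spacings can be estimated from Bragg's law; the sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å), which the text does not state, so the absolute values are only indicative:

import math

# Bragg's law: n*lambda = 2*d*sin(theta). For first-order reflections (n = 1),
# the interlayer spacing follows from the peak position 2-theta.
WAVELENGTH_ANGSTROM = 1.5406  # assumed Cu K-alpha radiation; not stated in the text

def interlayer_spacing(two_theta_deg, wavelength=WAVELENGTH_ANGSTROM):
    """Return the d-spacing (in angstrom) for a first-order Bragg peak at 2-theta."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta_rad))

for label, two_theta in [("graphite (002), 2-theta = 26 deg", 26.0),
                         ("GO (002), 2-theta = 11 deg", 11.0)]:
    print(f"{label}: d = {interlayer_spacing(two_theta):.2f} A")
# Approximate output: graphite d ~ 3.4 A, GO d ~ 8.0 A, consistent with
# oxidation expanding the spacing between the carbon layers.

The shift of the (0 0 2) reflection from about 26° to about 11° therefore corresponds to an expansion of the interlayer spacing from roughly 3.4 Å to 8 Å, consistent with the intercalation of oxygenated groups between the carbon planes described above.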
GO-II was the chosen route, as it presented the most reliable diffractogram for assembling the final scaffold. Each GO-II sample used for assembling the final nanocomposite was characterized by XRD, and the average of the samples is presented in figure 5. Figure 5 shows the average graphical fit of all GO-II samples, normalizing the other peaks to show the XRD behavior with a sampling reliability of γ=95%.
In figure 6, the graphite diffraction curve (red) has its characteristic peak around 2θ=26°, while GO shows a highlighted peak at 2θ=11° (black curve), together with reduced graphene oxide. The different peaks of PB give the material a structure with a more ceramic character, which allows it to adhere to the PLA polymer in different ways. This ceramic character must not affect the thermal properties of the polymer.
B. SEM
Scanning electron microscopy was performed on both the GO nanocharge and the PB compound.
All samples visualized by SEM were previously metallized with a thin gold film on their surfaces to aid electrical conduction of the electron beam on the sample. Figure 8 presents an SEM image of PB at 2500× magnification. Note that the average grain contour is approximately 3 μm in size. However, the ceramic character and the abundance of PB phases indicate good reactivity, a fundamental characteristic for binding to the PLA polymer. Figure 9 presents an SEM image of GO at 10000× magnification.
In figure 9, the structure of GO is completely taken up by hexagons. Almost everywhere in the image, it is possible to observe hexagons formed by carbon, with an average size of 350 nm.
Although images with greater magnification would be needed, the images at 10000× magnification were the sharpest and, therefore, the most adequate for measuring the hexagons (which are carbon structures, i.e. GO).
With such small structures, it is assumed that graphene oxide will coat the PLA polymer structure. Considering that the measurement error is high, on the order of 10^-1, the measurement from edge to opposite edge in some places reaches 100 nm and may be even smaller.
C. Nanocomposites of PLA/PB and PLA/GO
C1. Impact tests
Table 3 and figure 10 provide the results for the Izod impact test (notched) of the samples in mass percentages obtained with PLA/PB (100.0% PLA/0.0% PB; 99.7% PLA/0.3% PB; 99.5% PLA/0.5% PB; 99.0% PLA/1.0% PB; and 98.5% PLA/1.5% PB).
The results obtained show that the presence of pseudoboehmite in the PLA matrix at concentrations of 0.5 and 1 m% decreases the impact resistance by 7.7 and 17.1%, respectively. At these concentrations, the pseudoboehmite may act as a discontinuity in the matrix, decreasing its crystallization. To our knowledge, there are no reports of PLA nanostructured with PB particles in the literature.
The presence of pseudoboehmite in the PLA matrix at a concentration of 1.5 m% increases the impact resistance by 8.8%. At this concentration, the pseudoboehmite probably interacts with the polymeric matrix, increasing its crystallization, and these effects outweigh the effects caused by the decrease in molar mass due to shear during the processing of the matrix with the filler, observed in the other compositions. A new homogenization will be carried out using ethanol as a solvent, aiming to minimize the hydrolysis of the matrix. Table 4 and figure 11 present the results of the Izod impact test (unnotched) for 15 specimens of each of the samples in mass percentages obtained with PLA/GO (100.0% PLA/0.0% GO; 99.7% PLA/0.3% GO; 99.5% PLA/0.5% GO; 99.0% PLA/1.0% GO and 98.5% PLA/1.5% GO).
The results obtained suggest that the presence of GO in the PLA matrix increases the impact resistance at all studied concentrations.
C2. Tensile tests
Figure 12 shows the average of 25 specimens of each composition for the tensile test of the PLA/PB nanocomposites. The other compositions did not have enough specimens for statistical comparison. The compositions used in the process were: 100.0% PLA/0.0% PB; 99.5% PLA/0.5% PB; 99.0% PLA/1.0% PB and 98.5% PLA/1.5% PB. The tensile strength and the modulus of elasticity under tension increased with increasing PB concentration in the nanocomposites up to 1.0% PB and then decreased.
The presence of PB in the nanocomposites increases toughness and ductility as the PB concentration increases. Figure 13 provides the results for the maximum modulus under tension of the PLA/PB nanocomposites obtained.
The results for the maximum tensile strength of the obtained PLA/PB nanocomposites are shown in figure 14. Figure 15 shows the elongation at maximum tensile stress of the PLA/PB nanocomposites obtained.
The results provide evidence that the presence of PB in the obtained nanocomposites increases the tensile strength. These increases are 4.8% at 0.5% PB, 9.35% at 1.0% PB, and 5.09% at 1.5% PB. The tensile strength and the modulus of elasticity under tension increase with increasing PB concentration in the nanocomposites up to 1.0% PB and then decrease. The presence of PB in the nanocomposites increases toughness and ductility as the PB concentration increases. Elongation at maximum tension decreases with the presence of PB, that is, the material deforms less up to the yield point. Figure 16 shows the average of 25 specimens of each composition for the tensile testing of the PLA/GO nanocomposites. The other compositions did not have enough specimens for statistical comparison. The compositions used in the process were: 100.0% PLA/0.0% GO; 99.7% PLA/0.3% GO; 99.5% PLA/0.5% GO and 99.0% PLA/1.0% GO. Results for the maximum modulus under tension of the PLA/GO nanocomposites obtained are shown in figure 17. Figure 18 provides the results for the maximum tensile strength of the PLA/GO nanocomposites. Figure 19 illustrates the elongation at maximum tension of the synthesized PLA/GO nanocomposites.
The GO results indicate that the presence of GO in the obtained nanocomposites reduces the modulus of elasticity under tension at concentrations of 0.3 and 0.5% GO. At a concentration of 1.0% GO, the presence of the nanocharge practically does not affect the modulus of elasticity under tension. The tensile strength increases with increasing GO concentration in the nanocomposites. The presence of GO in the nanocomposites reduces the tensile strength at the concentration of 0.3% GO by 9.42% when compared to the tensile strength of pure PLA; at this concentration, the GO may act as a discontinuity in the polymeric matrix. For concentrations of 0.5 and 1.0% GO in the nanocomposites, increases in tensile strength of 0.47% and 5.08% occur, respectively. When comparing the elongation at maximum tension of pure PLA with that of the nanocomposites containing GO, the presence of GO decreases the elongation; however, the higher the GO concentration in the nanocomposites, the greater the elongation at maximum tension. Therefore, the presence of GO increases the ductility of the obtained nanocomposites, as GO causes the greatest elongation (21.6%) before rupture under tension. The tensile strength increases with increasing GO concentration in the synthesized nanocomposites, probably because the presence of GO increases the crystallinity of the PLA.
In view of the relevance of the theme, additional research is necessary for further advances. Both nanocharges significantly improved the mechanical properties when structured into the PLA, increasing its ductility, with GO providing slightly more tenacity. Therefore, graphene oxide is indicated, not only for the result mentioned, but also due to its more accessible cost compared with PB.
Conclusions
Both nanocharges significantly increased ductility and tenacity relative to pure PLA, as shown in the stress-strain diagrams observed in figures 12 and 16; these are important characteristics for making a scaffold.
As for the impact resistance test, the higher concentration of PB (1.5% by mass) gives an increase of approximately 10% in impact resistance, unlike GO, which at much lower concentrations already provides increases in impact strength of 79% for specimens with 0.05% GO by mass and 41% for specimens with 0.1% GO by mass. However, as the GO concentration increases, the impact resistance decreases and tends toward the resistance of pure PLA, as excess GO loses its structuring function in the chiral molecule and weakens the polymer chain.
PB, when dosed at 1% by mass, increases the maximum tensile strength and the maximum modulus under tension by 10%, and maintains the elongation at maximum tension. GO at the same dosage did not show a significant increase or decrease in maximum tensile strength or maximum modulus under tension, and it reduced the elongation at maximum tensile stress, which is of great value for scaffolds.
In comparison, structuring with GO ends up being superior to PB in impact resistance, because in smaller quantities it causes a good increase in strength and reduces the elongation under the applied stresses, in addition to having a lower manufacturing cost than PB. In terms of ductility, tenacity, and tensile strength, both were more efficient than pure PLA, with similar results.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
Novelty statement
A scaffold is a preliminary structure for the composition of a tissue graft, hosting biological components, ensuring its permanence at the site of the lesion, helping its medicinal action, and serving as an alternative for some injury cases. This work sought to obtain scaffolds from nanocomposites of the biodegradable polymer PLA containing graphene oxide nanoparticles or the pseudoboehmite compound, aiming to increase mechanical properties such as ductility and tenacity.
"Materials Science",
"Medicine"
] |
On intergranular interaction and the crystallographic mechanism of plastically deformed polycrystalline metals
The currently prevailing Taylor principle has been proved effective only when grains or grain clusters are deformed in a rigid environment, which disagrees with reality. Grains realize their plastic deformation in an elastoplastic environment, and the equilibrium of intergranular stress and strain needs to be reached simultaneously and naturally. The intergranular reaction stress (RS) during deformation can be calculated according to Hooke's law, and the yield stress should be the upper limit of the RS. In RS theory, the combinations of constant external stress and changing RS induce alternate activation of deformation systems, including slips and twinning with different work hardening rates, while grain orientations and the inhomogeneous distribution of external stress have an important influence. Some additional deformation systems near the boundaries need to be locally activated when the reduced stress upper limits are reached frequently, to balance the intergranular stress and strain incompatibilities in an elastoplastic way. Based on the simpler RS theory, the simulation results for the rolling textures of aluminium, low carbon steel, austenitic stainless steel, and titanium are very close to reality, indicating that the intergranular elastoplastic interactions and the corresponding crystallographic behaviours of plastic deformation have been reasonably and quantitatively described; therefore, the Taylor principle is no longer necessary.
Introduction
Plastic deformation is usually the premise of recrystallization, and the formation of recrystallization texture starts from deformation texture. Therefore, the theory of plastic deformation crystallography is extremely important for understanding the formation of deformation texture and has received extensive attention [1]. At present, the widely popular theories of plastic deformation crystallography and models of deformation texture formation are mostly based on the Taylor principle, e.g. the viscoplastic self-consistent (VPSC) theory [2], the advanced Lamella (ALAMEL) theory [3], the grain interaction (GIA) theory [4], etc., which show some problems both in theory and practice [5]. In all the theories above, several slip or twinning systems in deforming grains are activated simultaneously according to a power-law relationship controlled by a shear-rate sensitivity exponent [1], which unreasonably violates Schmid's law. In practice, the real strain tensors of deforming grains in all kinds of metals are essentially different from those prescribed by the Taylor principle as well [5]. Therefore, we try to trace the real physical process of plastic deformation of metal polycrystals and explore a more reasonable plastic deformation principle in theory and practice. If the grain is in a rigid environment, all plastic strain components in equation 1 would be elastically compressed back by 100%, thus forming a reaction stress (RS) tensor acting on the grain. At this time, the stress tensor [σij] that the deforming grain is bearing consists of both the external stress tensor and the elastic RS tensor that can be calculated by Hooke's law (equation 2; R=1 is the rigidity rate, E is Young's modulus and ν is Poisson's ratio) [1]. However, no matter how the RS accumulates, it must not exceed the yield level related to the yield stress σy of the polycrystal, and the upper limit of the RS tensor in equation 2 is expressed, in the case of rolling deformation, by equation 3 (in which αij=1 are the effective coefficients of the RS) [1,5]. With the progress of slip and the accumulation of RS, the stress tensor (equation 2) borne by the slip system will continue to evolve, so that after deformation to a certain extent, another slip system may obtain the maximum critical resolved shear stress (CRSS), begin to slip instead, and penetrate the grain. The alternation of the individually activated slip systems will occur frequently during deformation, and multi-slip is realized by these alternating slips all penetrating the deforming grain. However, the observed true plastic behavior of grains is more complicated (figure 1b) [5]: it includes not only penetrating slips, but also some non-penetrating slips, mostly near the grain boundary, that coordinate the intergranular incompatibility of stress and strain (figure 1c). The non-penetrating slips often do not show any regularity, which leads to a randomization tendency of the deformation texture.
Problems existing in the Taylor principle both in practice and theory
Figure 2a shows an example of the deformation texture of a pure aluminum sheet after 95% cold rolling from an initial state of roughly random orientation distribution, in which significant copper texture and brass texture are obtained simultaneously. The texture in figure 2a can be directly calculated with the help of the Taylor principle, in which combinations of five independent slip systems are activated and penetrate the grains homogeneously. A small enough simulation step of the rolling strain ε in this paper is Δε = δb3n3 = 0.001 according to equation 1. The simulation result (figure 2b) does not indicate any brass texture but an extremely strong Taylor texture with lower Φ angle, rather than the copper texture in figure 2a. The same calculations based on the ALAMEL and GIA models could obtain a little brass texture, though it is too weak, but still a very strong Taylor texture instead of the copper one [1,5], which indicates a strong background of the Taylor principle in the two models.
If, in a rigid environment, only the slip system with the highest Schmid factor in a grain can start to slip under the combination of external stress and the reaction stress (equation 2, R=1), whereas the maximum reaction stress can always reach the yield level of the metal (equation 3, αij=1) and the simulation step is small enough to let any slip system start to slip once its Schmid factor becomes the highest, the simulated rolling texture of the aluminum sheet is as shown in figure 2c [1,5]. The texture shown in figure 2c is consistent with that of the Taylor model in figure 2b even in details, including the absence of brass texture, the extremely strong Taylor texture rather than the copper one, and so on; only the orientation density increases a little owing to the limited reaction stress (equation 3).
Real metal grains realize their plastic deformation in an elastoplastic environment, and intergranular stress and strain compatibility is reached naturally rather than rigidly. The simulation results based on equation 2 and equation 3 show that the Taylor principle sets the grains absolutely in a rigid environment, which is inconsistent with the actual metal deformation process, so it often fails to accurately predict texture formation during plastic deformation of metals.
The reaction stress (RS) theory
Real metal grains possess elastoplastic characteristics, and their plastic deformation takes place in the same elastoplastic environment. Therefore, for an elastic equilibrium, 50% of the plastic strain produced by the deformed grain in equation 1 needs to be absorbed by the adjacent zone in the form of elastic strain, while the other 50% will be compressed back into the deformed grain in the form of reverse elastic strain, i.e. the rigidity rate R in equation 2 should be 0.5 [1,5].
In the case of rolling deformation, the external stress is usually expressed as a principal stress tensor composed of a tensile stress in the rolling direction (RD, i.e. direction 1 in equation 1) and a compressive stress in the normal direction (ND, i.e. direction 3 in equation 1) of the rolled sheet. When the pass reduction increases, the principal stress tensor near the sheet surface will rotate around the transverse direction by an angle θ, which gradually decreases from the surface inwards and becomes 0 at the center layer. The greater the pass reduction, the higher θ will be. Therefore, the rolling stress tensor in equation 2 including the RS is transformed into equation 4 [1,5], in which R=0.5 is valid for an elastoplastic matrix, b is the length of the Burgers vector of a slip system, μ is its Schmid factor under the instantaneous stress tensor [σij] (equation 4), and the average distance d between dislocations can be calculated at each moment according to the flow stress, which undergoes work hardening during deformation of the individual metals [1,5]. The first term on the right side of equation 4 represents the external stress tensor, and the second term represents the RS. When θ is 0°, the external stress becomes the normal rolling stress tensor, i.e. a tensile stress in RD and a compressive stress in ND.
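To illustrate the selection step that this stress tensor feeds into, the following is a minimal sketch in which a rolling stress state, rotated by θ about the transverse direction, is resolved onto the twelve FCC {111}<110> slip systems and the system with the highest resolved shear stress is picked out. It assumes a cube-oriented grain (crystal axes parallel to RD, TD, ND) and omits the reaction stress term of equation 4, so it is only an illustration of Schmid's law, not the full RS model:

import numpy as np

def rotation_about_td(theta_deg):
    """Rotation matrix about the transverse direction (axis 2 = index 1)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

def fcc_slip_systems():
    """Return (normal, direction) unit-vector pairs for the {111}<110> family."""
    systems = []
    for n in ([1, 1, 1], [-1, 1, 1], [1, -1, 1], [1, 1, -1]):
        n = np.array(n, dtype=float)
        for d in ([1, -1, 0], [1, 0, -1], [0, 1, -1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]):
            d = np.array(d, dtype=float)
            if abs(np.dot(n, d)) < 1e-12:          # slip direction lies in the plane
                systems.append((n / np.linalg.norm(n), d / np.linalg.norm(d)))
    return systems

# Plane-strain rolling stress state: tension along RD (axis 1), compression
# along ND (axis 3); unit magnitudes are used, since only the ratios matter here.
sigma_rolling = np.diag([1.0, 0.0, -1.0])

theta = 15.0                                        # illustrative near-surface rotation
R = rotation_about_td(theta)
sigma = R @ sigma_rolling @ R.T                     # rotated external stress tensor

taus = [abs(d @ sigma @ n) for n, d in fcc_slip_systems()]
active = int(np.argmax(taus))
print("resolved shear stresses:", np.round(taus, 3))
print(f"active slip system index: {active}, tau = {taus[active]:.3f}")

In the RS model proper, the second (reaction stress) term of equation 4 would be added to the stress tensor before the resolved shear stresses are compared, so the winning slip system can change as the RS accumulates during deformation.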
The strain tensors produced by the activated penetrating slips in two adjacent grains under the stress tensor shown in equation 4 are usually different, and they will cause intergranular incompatibility of strain and stress, especially near the grain boundary region, which will induce an additional interaction stress. According to Schmid's law, except for the deformation system with the instantaneously highest Schmid factor under the stress tensor [σij], no slip or twinning system will be activated, but each always bears a certain shear stress lower than its CRSS during deformation. However, if the shear stress borne by the un-activated slip or twinning systems is superimposed with the additional stress of the interaction between grains, the total stress may reach the CRSS and cause some un-activated slip or twinning systems to be activated in a non-penetrating way (figure 1c), thus relieving most of the incompatibility between grains in the form of local plastic deformation, while the remaining incompatibility is coordinated in the form of local elastic strain. In the process of deformation, the frequent coordination of stress and strain that occurs naturally between grains will constantly stop the rise of the RS, so that it can never reach its theoretical maximum value; that is, the values of αij in equation 3 are usually far less than 1, which reduces the effect of the RS on the activation selection of penetrating slip systems. The levels of the effective coefficients αij and their possible evolution with deformation ε are closely related to the characteristics of the deformed metals, and should be determined individually in advance.
It is known that crystalline metals are intrinsically anisotropic, and anisotropy can also be induced by local texture or by different orientations of adjacent grains. In the case of the rolling process, however, the deformed polycrystalline metals, from a statistical point of view, often show the characteristics of approximate isotropy because of the macroscopic symmetry of the samples and the symmetry of all texture components formed [1]. Therefore, the metals are generally regarded as nearly elastically isotropic materials, as shown in equation 2. On the other hand, the levels of the coefficients αij can also be adjusted according to the experimental characteristics of different metal rolling processes, in order to reflect the effects of the local anisotropy to a certain extent.
Equations 1 to 4 constitute the basic equations of the RS theory. In order to implement an RS simulation, it is necessary to first determine which slip or twinning systems will be active. When different types of systems may be alternately active, the relative CRSS between them and its possible evolution with deformation ε should also be confirmed in advance. RS theory only calculates the orientation evolution and corresponding deformation texture caused by twinning and penetrating slip. Because of the randomization effect induced by the non-penetrating slips both in the deformed grain and in its adjacent zone, a certain random texture component vr (volume fraction of random orientations) needs to be added to the simulated texture, whereas the vr level is also closely related to the characteristics of the individual deformed metals [1,5].
Simulations of rolling textures of FCC, BCC and HCP metals based on the RS model
Rolling textures of aluminum, low carbon steel, austenitic stainless steel, and titanium are simulated based on the RS theory, where the active slip or twinning systems, relative CRSS, effective coefficients αij, rotation angle θ, and random texture component vr that were determined in advance are listed in table 1 [1,5]. 936 and 1716 homogeneously distributed initial orientations are used for the texture simulations of cubic and hexagonal metals, respectively [1]. Figures 3 to 7 give the simulated rolling textures in comparison with the experimental observations [1,5].
The RS model can reproduce the rolling texture formation of FCC aluminum sheet, including all texture types and their density levels (figure 3), as well as the surface shear texture induced by the obvious rotation θ (table 1) of the principal stress tensor under large pass reductions (figure 4). Two types of slip systems with different work hardening effects are activated during rolling of BCC low carbon steel, in which {112}<111> slips become more and more active with the rolling strain ε (table 1), resulting in the stability of the {111} fiber texture (figure 5). The rolling deformation conducted by joint activation of slips and twinning in FCC austenitic stainless steel is rather complicated, because not only are the slip systems work hardened more rapidly, but the RS levels (α12 and α23) also gradually decrease instead of staying constant (table 1). Nevertheless, the phenomenon of copper texture turning into Goss texture by means of twinning [1,5] can still be simulated (figure 6). In general, all the simulation results agree very well with the experimental observations (figures 3 to 6). Many different types of slip systems and twinning systems have been observed in HCP titanium. However, those slip and twinning systems that are rarely activated or are not independent should be eliminated to simplify the simulation process [1]. Nevertheless, there are still many activated slip or twinning systems (table 1), which tends to randomize or weaken the deformation texture, so that vr is no longer necessary. On the other hand, frequent twinning leads to too many twin orientation evolutions, which need to be simulated separately from the orientation evolutions induced by slips. In order to avoid an overly cumbersome simulation, two sets of the 1716 simulation orientations are adopted [1]. One set only tracks the orientation evolutions of the continuously twinned part, and the other set tracks the orientation evolutions of the parent part of the twinning, so as to obtain the rolling texture of the titanium sheet in a simple and approximate way (table 1). The rolling texture evolution of the Ti sheet has been simulated based on the RS model, and the results agree well with the experimental observations (figure 7), including the characteristics of very low density levels and low Φ angle locations of the texture [1].
Summary
It is clear that the RS theory of plastic deformation crystallography is extremely simple, without complicated mathematical treatment, is very intuitive and reasonable in its physical background, and the corresponding simulations agree basically with the experimental observations on different metals. However, the non-penetrating slip or twinning may cause some uncertainty, so that the simulation parameters need to be adjusted for different metals with individual characteristics as well as for different processes in the actual calculations. The RS theory gives good consideration to the compatibility of both intergranular stress and strain that forms naturally during elastoplastic deformation of polycrystals, while Schmid's law prevails at all times, and it also takes into account many factors that have important impacts on deformation, including: coactivation of multiple deformation systems, changing work hardening rate, homogeneity of the deformation, an arbitrary external stress tensor, the effective extent of the RS, the randomizing effect of non-penetrating deformation, and even the elastic anisotropy of the single crystal or that induced by the matrix texture. Of course, the RS theory still needs to be greatly improved and perfected. However, the Taylor principle can therefore be abandoned rather than being modified somehow.
Figure 1 .
Figure 1. Deformation behaviours of a grain in a polycrystal: a. initial behaviour, b. experimental observation in steel, c. multiple penetrating and non-penetrating slips.
Table 1 .
Simulation parameters of rolling textures based on the RS theory | 3,547.6 | 2023-11-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
The implementation of freedom of speech principles in Indonesian press regulation
The 1999 Press Law is a regulation that forms the basis of press practice in Indonesia and makes the Journalistic Code of Ethics a reference for journalists in Indonesia. The researcher is interested in analyzing the 1999 Press Law and its implementation in the mass media, since the researcher found several cases of alleged defamation claimed by persons who were the object of reporting by journalists, for instance, Bambang Harimurti and Tomy Winata in the case involving Tomy Winata over the report "Ada Tomy in Tenabang" or "There is Tomy in Tenabang" in the Tempo print media [1].
Introduction
The 1999 Press Law is a regulation that forms the basis of press practice in Indonesia and makes the Journalistic Code of Ethics a reference for journalists in Indonesia. The researcher is interested in analyzing the 1999 Press Law and its implementation in the mass media, since the researcher found several cases of alleged defamation claimed by persons who were the object of reporting by journalists, for instance, Bambang Harimurti and Tomy Winata in the case involving Tomy Winata over the report "Ada Tomy in Tenabang" or "There is Tomy in Tenabang" in the Tempo print media [1].
Based on Tempo online [2], it is known that this case stemmed from a news report in the mass media Tempo about the alleged connection of Tomy Winata to the burning of Tanah Abang Market. Tomy Winata was the owner of the Artha Graha Group, while Bambang Harimurti was the Editor in Chief of Tempo Magazine at the time. Bambang Harimurti was responsible for the article written by a Tempo journalist that discussed the possibility of Tomy Winata being involved in the burning of Tanah Abang Market. The Tempo reporter was considered to have written the news without prior confirmation from Artha Graha and to have used only anonymous sources. It was stated that the anonymous source was an architectural consultant engaged by Tomy Winata to prepare development plans for Tanah Abang Market, to be submitted to the Jakarta Regional Government. According to Tomy, the news in the mass media harmed him individually and the company as a whole, because he felt that his reputation and that of the company were tarnished and his employees received threats from people claiming to be residents of Tanah Abang.
The case of Tomy Winata and Bambang Harimurti could not be resolved through civil or criminal proceedings in the District Court, because problems involving the press can only be resolved under the 1999 Press Law through the Press Council. This was felt to be unfair to Tomy Winata, since the problem was settled through the right of reply accommodated in Tempo's print media. The material and immaterial losses experienced due to the circulation of the news cannot be compensated through the right of reply.
The case of Tomy Winata and Bambang Harimurti is compelling for the researcher because it shows the impact and the inequality that can occur with the implementation of the 1999 Press Law. On one side, 'freedom' is granted through the 1999 Press Law to Indonesian press personnel; on the other side, this can be considered detrimental to other parts of the community.
Actually, the Tanah Abang case is not the only problem that has occurred because of the blurry boundary of freedom of speech in the 1999 Press Law, and such problems do not happen only in print media. In this era, in which the internet is our first reference when searching for information, similar cases can also be found. As an example, the Merdeka online site [3] published a news report about an unrest tragedy at Makassar Public University (Universitas Negeri Makassar or UNM) in which two people died. The fight happened between groups from the Language and Art Faculty because two of them accidentally bumped their motorcycles. Because of this news, Adinuansa started a petition to the Press Council, as he found that some information in the report and the picture used were not correct and violated the Journalistic Code of Ethics [4].
The problem does not stop there; even in the present era, it can still happen. These days we hear a lot of information about the Corona Virus or Covid-19 issue: when the President finally announced that two cases of Corona Virus had been found, the press dug for deeper information until they found the patients' personal data and published it. This made the patients feel uncomfortable because of public stigma [5]. Besides that, journalists sometimes post irrelevant and inappropriate pictures to describe a situation, for example in the Kuningan bomb tragedy, in which some journalists posted uncensored pictures of the victims and used exaggerated words [6]; this could hurt the victims' families and make people who read or watch the news feel uncomfortable too.
By looking at these cases, the researcher is interested in analyzing the press regulation and the basis of the regulation, and in examining the implementation of freedom of speech in other countries as an evaluation and a suggestion for reviewing and revising the existing press regulations for the better.
Press Freedom Regulation in the 1999 Press Law
Historically, press regulation began from Persbreidel-ordonantie, the 1966 Press Law, the 1967 Press Law, 1982 Press Law, and the last was the 1999 Press Law and the Press Code of Ethics. This 1999 Press Law repealed the previous Press Law, following Article 20 of the Final Provisions.
According to the Press Council [7], the 1999 Press Law was made during the fall of the New Order and replaced existing laws that gave the president authority to control the press system on the grounds of maintaining national stability. Press Law No. 40 of 1999 gives more control authority to the public, especially the Indonesian press, for example in Article 15 paragraph (1) [8], which states that, "In an effort to develop the freedom of the press and expand the existence of the national press, a Board of the Press is established." The Board of the Press has the following functions: (1) protect the freedom of the press from any intervention; (2) conduct studies to develop the existence of the press; (3) decide and control compliance with the Code of Ethics of Journalistic; (4) give consideration and find solutions to any complaint lodged by the public towards cases concerning press reportage; (5) develop communication between the press, the public and the government; (6) facilitate press organizations in forming regulations in the press as well as increasing the quality of journalistic professionalism; (7) register press companies [9].
In addition to the aforementioned article, the researcher also finds several other articles that support the values of press freedom. In Article 1 of [8], it is explained that any party is prohibited from carrying out forced removal or censorship of information material to be broadcast or published. This includes the prohibition of issuing warnings or intimidation. Activities to stop publication and distribution or broadcasting by force constitute unlawful banning. This shows that the press has freedom and discretion in carrying out journalistic activities, because banning and censorship, which are considered to restrict journalists' freedom in presenting ideas and information, are not permitted.
Based on the aforementioned law, the Press Council can be identified as an independent body that is not influenced by any power and has the main function in establishing the Journalistic Code of Ethics as a reference for journalists' work, as well as conducting supervision in its implementation.
This Journalistic Code of Ethics is produced by, from, and for journalists, so that it is personal or means that it depends entirely on the journalist's conscience. This Journalistic Code of Ethics is also discussed in Article 7 paragraph (2) as in [1], namely "Journalist owns and adheres to The Ethic Codes of Journalistic." There are eleven articles in the Journalistic Code of Ethics , namely: (1) The Indonesian journalist is independent and produces news stories that are accurate, balanced and without malice; (2) The Indonesian journalist adheres to professional methods in the execution of a journalistic duties; (3) The Indonesian journalist always verifies information, conducts balanced reporting, does not mix facts with biased opinion, and upholds the presumption of innocence principle; (4) The Indonesian journalist refrains from producing false, slanderous, sadistic and obscene news stories; (5) The Indonesian journalist does not disclose and broadcast the identity of victims of a sexually-exploitative crime and refrains from identifying a minor who committed a criminal act; (6) The Indonesian journalist does not misuse his/her profession and accepts no bribe; (7) The Indonesian journalist has the right of refusal to protect the identity of a news source who does not wish his/her identity and whereabouts known, and abides by the conditions for an embargo, background information and off the record as mutually agreed; (8) The Indonesian journalist does not write or report news based on prejudice or discrimination against anyone on the basis of differences in ethnicity, race, color, religion, gender, and language and does not degrade the dignity of the weak, the poor, the sick, the mentally or physically handicapped; (9) The Indonesian journalist respects the right of the news source's private life except in the public interest; (10) The Indonesian journalist immediately retracts, rectifies, and corrects errors and inaccuracies in a news story accompanied with an apology to readers, listeners or viewers; (11) The Indonesian journalist accedes to the right of reply and the right of correction in a proportional manner [10].
In the implementation of this code of conduct, it is not uncommon for violations of the Press Law and the Journalistic Code of Ethics to occur that cannot be resolved through litigation; this is because the law is lex specialis, or extraordinary, and has its own procedures. Therefore, problems related to press violations must first be brought to the Press Council to obtain mediation and agreement. According to Hikmat [10], this extraordinary problem-solving is a consequence of the Journalistic Code of Ethics being made by journalists and supervised by journalists who are members of the Press Council. However, when press problems cannot be resolved through the Press Council, the cases can, by agreement, be resolved through the police [11].
This Journalistic Code of Ethics is universal in the sense that it binds journalists in one country as in another. This is because the Journalistic Code of Ethics has the same essence and is applied throughout the world. The essence of the Journalistic Code of Ethics is: balance in the news, neutrality, objectivity, accuracy, factuality, not mixing facts and opinions, not inserting personal matters (privacy), respecting the presumption of innocence, not being defamatory, false or obscene, and having the title reflect the content of the news [10].
Freedom of Speech Principles
Freedom of speech in the world of journalism means freedom to seek, obtain, and disseminate ideas and information. This freedom of speech is guaranteed by the 1999 Press Law in Article 4 [8], as the citizen's human rights guaranteed by the state. Citizens' human rights are regulated in Article 28 F of the 1945 Constitution [12], "Every person shall have the right to communicate and to obtain information for the purpose of the development of his/her self and social environment, and shall have the right to seek, obtain, possess, store, process and convey information by employing all available types of channels." Thus it can be seen that the freedom of the press in conveying information by using all available channels and communicate is journalists' right to fulfill society's needs for information. There is a mutual need relationship between journalists and nonpress communities. Robert G. Pickard and Victor Pickard [13] state that, aside from the law, freedom of speech is also protected in Article 19 of the Declaration of Human Rights.
The freedom of speech principle is one of the basic principles of communication regulation. Napoli [14] explains that a regulation aimed at one principle can relate to other principles: "It is important to recognize that the policies directed at one of these foundation principles can have effects pertaining to the other principles." The principles are: (a) the first amendment; (b) the public interest; (c) the marketplace of media; (d) diversity; (e) competition; (f) universal service; (g) localism. This discussion of freedom of speech is included in the principle of the first amendment. "The first amendment states simply that congress shall make no law abridging freedom of speech or of the press." [14] The first amendment is positioned as a structure because it has a relationship with each principle. There are several functions of the first amendment, one of them being the liberty or self-fulfillment function. Freedom of speech is important for self-fulfillment, and it is important for individuals to feel integrity and to be valued [14].
According to Barker, in Napoli [14], the main value of freedom of speech is the freedom of the self to define, build, and express itself, and it concerns the speaker. For example, consider the case of Vietnam War protesters who promoted the slogan 'Stop this War Now' during a demonstration. A protester could have done this without any hope, for example, of influencing whether the war would continue or whether the leaders in power heard it. It could be that these protesters shout and participate so that they are defined as opponents of the war. Out of protests about this war can emerge dramatic illustrations of the urgency of expressive speech, apart from any effective communication with others [14]. The first amendment principle on freedom of speech is criticized for failing to distinguish speech from other forms of human activity, so that it fails to explain how speech gets a unique portion and protection under the constitution and how the first amendment protects forms of speech expression more than non-speech. This account of freedom of speech cannot explain the scope of action from free speech to self-freedom and self-fulfillment, so the rationalists suggest that specific self-expression showing one's point of view in the form of actions and behavior should get the same protection as speaking [14]. Besides benefiting oneself in conveying ideas, opinions, and debate relating to personal and public matters, freedom of speech, as Garton notes in Robert G. Pickard and Victor Pickard [13], is also seen as beneficial for the social environment. Thus, it is necessary to make media and communication regulations that accommodate freedom of opinion for the public and for private parties.
Regulations relating to public expression in print or broadcast media and on the internet must be improved with respect to giving and receiving information and ideas and to participation in debate. Policymakers must pay more attention to freedom of expression and communication in the public sphere, and must not limit or judge speech solely on the basis of social considerations. As Fisher and Harms once stated, the effective right of individuals to receive ideas, opinions, and information, and to disseminate their views, is as important as the obligation not to interfere with these expressions [13]. Raboy and Shtern reinforce this by stating that the rights to expression and communication are basic rights in a democracy [13].
Besides the concept of freedom of speech, freedom of expression also encompasses freedom of the press, the right of reply, fair access to communication, the right to know, and the right to hold differing opinions. Freedom of expression is a basic right, but it is not unlimited; its limit is regulation. As Robert G. Pickard and Victor Pickard [13] argue, regulations should be made to preserve freedom without neglecting national defense or the protection of the public interest and the interests and dignity of individuals. Policymakers must therefore examine whether the regulations they make are appropriate. "When addressing such issues, policymakers in democratic states need to exercise great care to ensure that any constraint on the right to free expression is specific, justified, reasonable, and would satisfy an independent judicial body." [13] It can thus be concluded that the making of regulations must consider various parties and aspects, and not only the interests of one party, such as the press, whose freedom is also protected by the state. Moreover, this freedom must still take account of national security and stability.
Press Regulation Implementations Related to Freedom of Speech in Other Countries
Regulations in a country are inseparable from the political system that country adopts, and Indonesia is a case in point. Bagir Manan [15] explained that Indonesia's political system, democracy, is closely tied to the press. The press is often regarded as the fourth element alongside the three branches of Montesquieu's Trias Politica: the legislative, the judiciary, and the executive. A free press is one of the characteristics of democracy, together with the rule of law and the availability of alternatives and the freedom to choose among them. Media is also an indicator of political openness, because it stimulates openness by providing current and critical information that makes people aware of how the political system works [16]. While the press carries great weight in a democratic political system, its position is very different in a communist political system. Ratna Dewi [17] illustrated the role of mass media in both systems using the United States (US) and Russia as examples: in the US, mass media has a function similar to that in Indonesia, whereas in Russia, mass media functions as a propaganda instrument. Because of this difference, the researcher examines how freedom of speech is implemented in the press regulations of China and Norway, two countries with very different political systems.
1) Freedom of the Press in China
In 2019, China was ranked 177th by Reporters Without Borders [18], an online site that ranks countries by press freedom in delivering information. China's position is among the lowest of the 180 countries ranked, which shows that the implementation of freedom of speech in China is not going well.
The Chinese Constitution guarantees freedom of speech, but in practice this freedom is not realized in the lives of the Chinese people, because freedom of expression effectively applies only to the political elite. "The only people in China who can publish criticism of, or opinions contrary to those of, the communist party, are senior members of the Communist Party." Elite intellectuals and professionals have better opportunities than ordinary people to question political policies in public and to voice criticism in private and restricted forums [19].
China adheres to a Socialist-Communist political system. According to Tohir Bawazir [20], Socialism-Communism holds that all assets are the property of the state and are used for the overall prosperity of the people. Since the focus of this communist doctrine is state management of all things, the government has enormous authority, extending to intellectual property and to opinions, ideas, and thoughts that the state considers disruptive to stability and to the prosperity of the people. As Ellen R. Eliasoph [21] notes, this is illustrated by the detention of the writer and editor Wei Jingsheng because of his writings expressing dissenting political views. In addition, the Central Committee of the Communist Party moved to revoke the "Four Big Rights", which covered freedom of expression, freedom of speech, the holding of large debates, and the writing of posters.
The Chinese government strictly regulates what its citizens can say and do in expressing themselves, even though it has formally declared a guarantee of freedom of expression. This pseudo-freedom leaves people working in journalism and the arts, such as filmmakers, website operators, and content creators, in fear, because if the message in their writing or work is not acceptable to the government they face punishment, which can include harsh prison sentences, expulsion from the country, and the loss of work and business [22].
Restrictions on the freedom to express opinions or ideas are also enforced by instilling doctrine in journalism students. Students are taught that the government grants freedom to express ideas or opinions, but that they must understand that informing people of something that could bring harm to society is useless, even dangerous. The Chinese government also instills in journalists the understanding that the interests of the government are the same as those of the people: "The party's newspapers are the people's newspaper." People's thinking must therefore be in line with the thinking of the government. As a result, the press becomes a tool of government propaganda, and the government also exercises oversight through the propaganda department of the Party Central Committee [22].
Restrictions on press freedom in China are not applied only to local Chinese journalists. The Chinese government's drive to control stability extends to deporting foreign journalists and barring them from returning to China for certain periods. Even Hong Kong journalists must obtain permission from the Chinese government to report from mainland China [22].
Another way the Chinese government maintains stability through the press is by establishing censorship institutions. These institutions are tasked with reviewing all information that circulates within, enters, or leaves China. The censorship agency also cooperates with the press to ensure that content displayed by the mass media carries government propaganda. The press agency is tasked with providing the mass media with guidelines on the limits of content that may be published, especially on politically sensitive topics [23].
In an era in which telecommunications and information technology have expanded rapidly, the Chinese government employs many civil servants to operate the Great Firewall, which censors content on the internet through bandwidth throttling, keyword filtering, and the blocking of websites judged to be out of line with government propaganda, including Facebook, Twitter, Google, and Instagram. The Chinese government blocks access to these sites because it cannot control their content and fears that the spread of information that does not follow its propaganda, such as ideas about democracy, could damage stability in China [23].
2) Freedom of the Press in Norway
Norway is a democratic country, and according to the press freedom ranking in [18], it has been ranked first for the last three consecutive years. The Norwegian government, as stated in [24], considers that freedom of expression includes the freedom to seek and receive information and to express opinions, which are prerequisites for participation in social and political life, and it therefore regards this freedom as a foundation of democracy.
Restrictions on freedom of opinion must be based on clear laws, serve legitimate purposes, and be necessary in a democracy. Freedom of opinion also has limits that need to be monitored by the government, for instance when it is misused to spread hatred or to harass other people or groups. So, besides granting citizens the freedom to express opinions and ideas, the Norwegian government also attends to the limits needed to prevent interference with other people's human rights [24]. The Norwegian government grants freedom to the press, treats it as an independent institution, and supports pluralism in the media in order to sustain democracy. The media acts as a government watchdog, a fourth power in society that can criticize and correct abuse of power, corruption, and lack of transparency. The state guarantees the confidentiality of sources and does not allow censorship. Norway also promotes press freedom and media independence abroad, especially in conflict-ridden and undemocratic countries [24].
The Norwegian government's seriousness in upholding press freedom is reflected not only in the absence of censorship and in positioning the media as a government watchdog, but also in regulations that protect the work of journalists. Incidents of violence against journalists have led the Norwegian government to establish provisions for investigating such cases and punishing the perpetrators, on the understanding that every journalist killed pushes many more into silence. The Norwegian government also supports UNESCO and media organizations in protecting journalists [24].
Method
This research uses a critical paradigm, which views a gap between existing regulations and their implementation in social reality. As a logical consequence of this paradigm, qualitative research methods were used, so that data and observations were analyzed and interpreted to explain the reality of the implementation of freedom of speech in Indonesia [25].
Results and Discussion
The researcher analyzed the press regulations in the 1999 Press Law and the Journalistic Code of Ethics. From the articles of the 1999 Press Law [8], it can be seen that the Press Law only criminalizes acts related to banning, censorship, or broadcasting prohibition, reporting that does not respect religion and decency, and the failure of mass media agencies to take the form of Indonesian legal entities. This follows from Article 18 of the 1999 Press Law, namely: (1) Everyone who, against the law, deliberately takes action that causes hindrance or prevention of the activities stated in Article 4 item (2) and item (3) will be sentenced to jail for 2 (two) years at the maximum or charged with a fine of Rp. 500,000,000 (five hundred million rupiahs) at the maximum; (2) A press company that violates the criteria stated in Article 5 item (1) and item (2), and Article 13, will be charged with a fine of Rp. 500,000,000 (five hundred million rupiahs) at the maximum; (3) A press company that violates the criteria stated in Article 9 item (2) and Article 12 will be charged with a fine of Rp. 100,000,000 (one hundred million rupiahs) [8].
Furthermore, the Journalistic Code of Ethics is the reference for the conduct of journalistic activities, as regulated in Article 7 paragraph (2) of [8]: "Journalist owns and adheres to The Ethic Codes of Journalistic." This is reinforced in Article 15 of [8] concerning the function of the Press Council, namely, to "Decide and control the compliance of the Code of Ethics of Journalistic." However, the Journalistic Code of Ethics cannot be used as a source of law, because ethics is merely a set of prescriptions about good and proper conduct in performing a job and carries no legal consequences [26]. The Journalistic Code of Ethics only regulates how journalistic activities are carried out professionally.
This creates an imbalance when problems arise from journalistic activities that do not follow the Journalistic Code of Ethics, such as information gathering that is not balanced, that is not supported by strong and valid data, or that produces false and slanderous news causing material and immaterial harm to others. An example is the case of Tommy Winata and Bambang Harimurti. The Court ruled that Bambang Harimurti was not subject to criminal penalties, because the case fell under a lex specialis whose resolution requires a special procedure through the Press Council, and Tommy Winata was instead given the right of reply to clarify the reporting in Tempo Magazine, even though the Tempo reporter had been found unable to prove the truth of the news and had named Tommy without any confirmation process, a procedural error. The judge's decision was considered unfair to Tommy Winata, since the right of reply could not compensate for the losses he had suffered, such as a damaged reputation and threats against his employees [27].
The researcher finds that this case, and several other defamation cases arising from information gathering that did not follow the Journalistic Code of Ethics, together with the fact that the only remedy available to the 'victim' is a right of reply that cannot compensate for the loss, demonstrate an unequal fulfillment of the human rights of Indonesian citizens guaranteed in the 1945 Constitution. The 1945 Constitution has a higher legal status than the 1999 Press Law. Article 28 G of the 1945 Constitution, as cited in [8], states: "(1) the right to protection of his/herself, family, honor, dignity, and property, and shall have the right to feel secure against and receive protection from the threat of fear to do or not do something that is a human right; (2) the right to be free from torture or inhumane and degrading treatment." This protection of human rights is also made clear in the Bill of Rights, where one freedom or right may not harm other rights or freedoms, as in [28]: "the enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people." So when someone exercises a freedom in a way that exceeds its bounds and injures the freedom of others, there needs to be a law that regulates the consequences fairly, because social justice is guaranteed to all Indonesian citizens in the 1945 Constitution and Pancasila.
Inequality in the 1999 Press Law is very likely, since historically the law was drafted in a very short period, approximately two weeks, under the pressure of the reform period. The 1999 Press Law was also made by journalists, from journalists, and for journalists. Supervision of the implementation of the Journalistic Code of Ethics is likewise carried out by the Press Council, which, although independent, is composed of press figures. There is thus a concern that both the drafting and the monitoring rely on a single dominant perspective, namely that of the press. Since the press necessarily involves the community in carrying out its professional work, there is a need to rebalance the Press Law so that it places both sides in an equal position [29].
The existence of the 1999 Press Law shows that the Indonesian government has made efforts to facilitate press freedom on the way to becoming a democratic country; this is reflected in Indonesia being ranked 124th out of the 180 countries assessed for press freedom in [18]. Nevertheless, the limits on that freedom still need to be sharpened so that ambiguity does not arise in their application. As a reference for such limits, Indonesia can look to the implementation of press law and the role of the government in applying the freedom of speech principle in Norway. The Norwegian government requires that freedom of expression respect the rights of others, refrain from hate speech, and not harass other people or groups.
The success of implementing the freedom of speech principle depends very much on the political system a country adopts. Freedom of speech, especially in the press, is easier to implement in a country with a democratic political system. The situation differs in a country that follows a Communist-Socialist political system, since there the government plays an enormous role in regulating the country, including the press. China is an example of such a country. The Chinese government takes firm action against journalists who write news that discredits the government. For example, it jailed Tan Zuoren, who reported on government corruption related to the collapse of school buildings. Such punishment applies not only to people working in journalism but also to those who support freedom of speech. Liu Xiaobo was jailed for eleven years for drafting a petition for reform and freedom of speech in China that was signed by more than 2000 Chinese citizens, and the government also banned the media from publishing news when Liu Xiaobo was awarded the Nobel Peace Prize [30].
The magnitude of the Chinese government's power over the press makes oversight of government performance by the people impossible. The aim of the Communist-Socialist system, in which all assets are controlled by the state in the interest of the people, therefore cannot be achieved: when the state owns all assets but its management cannot be monitored by the people through the mass media, rights may be misused to benefit those in power alone. This widens the gap between those who hold power and ordinary people without it. In the end, the critique on which the Communist-Socialist system is founded, namely the elimination of the bourgeoisie, instead produces a new class of people who hold power. The role of the mass media becomes merely apparent, because all press activity amounts to propaganda by those in power.
Conclusion
The 1999 Press Law and the Journalistic Code of Ethics, which form the basis of professional journalistic work, are imbalanced: they serve mainly the interests of the press while overlooking the rights of other Indonesian citizens. The case between Tommy Winata and Bambang Harimurti is an example. Tempo exercised its freedom to obtain and convey information through its media, but in doing so it did not follow the principles of journalism and the Journalistic Code of Ethics. The violations concern the following articles of the Code: (1) The Indonesian journalist is independent and produces news stories that are accurate, balanced, and without malice; (2) The Indonesian journalist adheres to professional methods in the execution of journalistic duties; (3) The Indonesian journalist always verifies information, conducts balanced reporting, does not mix facts with biased opinion, and upholds the principle of the presumption of innocence [27].
In connection with the paragraph above, Articles 1 and 2 of the Journalistic Code of Ethics require journalists to produce news with no agenda other than providing accurate information, including confirming with the person whose name appears in the news. Under Article 3, journalists must double-check the data they obtain against other supporting data, especially when the data are weak; one example of weak data is the use of anonymous sources. It is therefore necessary to review the regulations, both the 1999 Press Law and the Journalistic Code of Ethics. The review can be carried out by involving several elements of the community considered relevant to the implementation of professional journalistic work, so that a variety of perspectives is represented. After the review, changes or a rearrangement of the regulations can be made. There also needs to be coordination among the legislative, executive, and judicial sectors regarding the relationship between the Press Law as a lex specialis and the Criminal Code, so that no legal loophole benefits one of the parties in a dispute between the two sources of law. Furthermore, through coordination with the judiciary, the limits of dispute resolution under the regulations can be set out before they are ratified.
With the effort to review and amend the Indonesian press regulations, it is hoped that the press's roles as a watchdog of government and as a forum for conveying ideas and information, guaranteed in the 1945 Constitution, can be properly fulfilled. | 8,289.4 | 2020-06-04T00:00:00.000 | [
"Philosophy"
] |
Occam's Razor in Quark Mass Matrices
From the standpoint of the Occam's Razor approach, we consider the minimum number of parameters in the quark mass matrices needed for successful CKM mixing and CP violation. We impose three zeros in the down-quark mass matrix while taking the up-quark mass matrix to be diagonal in order to reduce the number of free parameters. Three zeros is the maximal number that still allows a CP violating phase in the quark mass matrix. There then remain six real parameters and one CP violating phase, which is the minimal number needed to reproduce the observed down-quark masses and CKM parameters. The twenty textures with three zeros are examined, and thirteen of them are found to be viable for the down-quark mass matrix. As a representative of these textures, we discuss a texture $M_d^{(1)}$ in detail. By using the experimental data on $\sin 2\beta$, $\theta_{13}$ and $\theta_{23}$ together with the observed quark masses, the Cabibbo angle is predicted to be close to the experimental data. This surprising result remains unchanged in all other viable textures. We also investigate the correlations among $|V_{ub}/V_{cb}|$, $\sin 2\beta$ and $J_{CP}$. For all textures, the maximal value of the ratio $|V_{ub}/V_{cb}|$ is $0.09$, which is smaller than the experimental upper bound of $0.094$. We hope that this prediction will be tested in future experiments.
Introduction
The standard model is now well established by the recent discovery of the Higgs boson. In spite of this success, the underlying physics that determines the quark and lepton mass matrices is still unknown. A number of models based on flavor symmetries have been proposed to address this question, but none of them is fully convincing.
A long time ago, Weinberg [1] considered a mass matrix for the down-quark sector in the basis where the up-quark mass matrix is diagonal. He assumed a vanishing (1,1) element in the 2 × 2 matrix and imposed a symmetric form of the matrix in order to reduce the number of free parameters. The number of free parameters is then reduced to only two, and he succeeded in predicting the Cabibbo angle to be $\sqrt{m_d/m_s}$, which is very successful and is known as the Gatto-Sartori-Tonin relation [2].¹ The success of the Weinberg approach encouraged many authors to consider various flavor symmetries in the quark mass matrices by extending it to the three-family case. In this paper, however, we point out that the Cabibbo angle is predicted successfully in the framework of the "Occam's Razor" approach proposed in the lepton sector [6].² In other words, we show that the Weinberg matrix can be obtained without any symmetry.
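For readers unfamiliar with the two-family argument, the lines below restate the standard textbook derivation behind this relation; the parametrization $(B, C)$ is illustrative and not taken from Weinberg's original paper.

```latex
% Two-family Weinberg texture with a vanishing (1,1) element (illustrative notation).
\[
  M_d \;=\;
  \begin{pmatrix}
    0 & C \\
    C & B
  \end{pmatrix},
  \qquad
  m_s \simeq B ,\qquad m_d \simeq \frac{C^2}{B}\quad (C \ll B),
\]
\[
  \tan\theta_C \;\simeq\; \frac{C}{B} \;\simeq\; \sqrt{\frac{m_d}{m_s}}
  \;\approx\; 0.22 ,
\]
% so the Cabibbo angle follows from the down-quark mass ratio
% (the Gatto-Sartori-Tonin relation).
```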
From the standpoint of the Occam's Razor approach, we consider the minimum number of parameters needed for successful CKM mixing and CP violation without assuming a symmetric or hermitian mass matrix for the down quarks. We impose three zeros in the down-quark mass matrix. We always take the up-quark mass matrix to be diagonal, since no off-diagonal element is needed there to explain the observations. The down-quark mass matrix is therefore given by six complex parameters. Among their phases, five can be removed by rephasing the three right-handed and three left-handed down-quark fields. After this field-phase rotation, there remain six real parameters and one CP violating phase, which is the minimal number needed to reproduce the seven observables, namely the three down-quark masses and the four CKM parameters. It is emphasized that the three-zero texture retains one CP violating phase in the down-quark mass matrix. In the present Occam's Razor approach with three families, we show that a successful prediction of the Cabibbo angle is obtained. It is surprising that Weinberg's ansatz, namely symmetric (1, 2) and (2, 1) elements, is derived in our Occam's Razor approach.
In order to reproduce the bottom-quark mass and the CKM mixing angles $V_{us}$ and $V_{cb}$, we take the (3, 3), (2, 3), and (1, 2) elements of the 3 × 3 down-quark mass matrix to be non-vanishing. We then have ${}_6C_3 = 20$ textures with three zeros. For convenience, we classify them into two categories: (A), where the (2, 2) element is non-vanishing, and (B), where the (2, 2) element is vanishing.
¹ Fritzsch extended the above approach to the three-family case [3,4]. He set four zeros in each of the down-quark and up-quark mass matrices, which are both symmetric. There were then eight parameters against the ten observables, but the scheme was ruled out by the observed CKM element $V_{cb}$. Ramond, Roberts and Ross also presented a systematic study with four or five zeros for symmetric or hermitian quark mass matrices [5]. Their textures are not viable under the present precise experimental data either, because four or five zeros are too restrictive to reproduce the ten observables.
² The Occam's Razor approach predicts the CP-violating phase $\delta = \pm\pi/2$ in neutrino oscillations. It is very interesting that one of the predictions, $\delta \simeq -\pi/2$, is favored by a global analysis of the neutrino oscillation data [7].
In category (A), six of the ten textures with three zeros are viable for the down-quark mass matrix, and in category (B) there are seven viable textures with three zeros. We have found that these thirteen textures are all consistent with the present experimental data on the quark masses and the CKM parameters. However, we should note that some textures are equivalent to each other due to the freedom of unitary transformations of the right-handed down quarks.
In Section 2, we present the viable down-quark mass matrices with three zeros in categories (A) and (B), and explain how to obtain the CKM mixing angles and the CP violating phase. In Section 3, we show numerical results for these mass matrices. The discussion and summary are given in Section 4. In Appendices A and B, we show the unfavored down-quark mass matrices and the redundancy among our textures, respectively.
2 Quark mass matrix in the Occam's Razor approach
2.1 Three texture zeros for down-quarks
Let us discuss the down-quark mass matrix. We always take the basis where the up-quark mass matrix is diagonal. The number of free parameters in the down-quark mass matrix is reduced by setting several elements of the matrix to zero. We consider three texture zeros, which provide the minimum number of parameters needed for successful CKM mixing and CP violation. We never assume any flavor symmetry. We call this the "Occam's Razor" approach.
Before investigating the quark mass matrix, we present our setup in more detail. The Lagrangian for the quark Yukawa sector is given by
$$-\mathcal{L}_Y = y^u_{\alpha\beta}\,\bar{Q}_{L\alpha}\,\tilde{h}\,u_{R\beta} + y^d_{\alpha\beta}\,\bar{Q}_{L\alpha}\,h\,d_{R\beta} + \mathrm{h.c.},$$
where $Q_{L\alpha}$, $u_{R\beta}$, $d_{R\beta}$ and $h$ denote the left-handed quark doublets, the right-handed up-quark singlets, the right-handed down-quark singlets, and the Higgs doublet, respectively. The quark mass matrices are given as $m_{\alpha\beta} = y_{\alpha\beta} v_H$ with $v_H = 174.104$ GeV. In order to reproduce the observed quark masses and the CKM matrix with the minimal number of parameters, we take the diagonal basis in the up-quark sector, $M_u = \mathrm{diag}(m_u, m_c, m_t)$. For the down-quark mass matrix, we impose three texture zeros. The texture of the down-quark mass matrix $M_d$ is then given by six complex parameters. Five phases can be removed by rephasing the three right-handed and three left-handed down-quark fields. Therefore, there remain six real parameters and one CP violating phase, which is the minimal number needed to reproduce the observed masses and CKM parameters. Now we can discuss textures for the down-quark mass matrix. Let us start by taking the (3, 3), (2, 3), and (1, 2) elements of $M_d$ to be non-vanishing in order to reproduce the observed bottom-quark mass and the CKM mixing angles $V_{us}$ and $V_{cb}$.³ We then have ${}_6C_3 = 20$ textures with three zeros for the down-quark mass matrix. For convenience, we classify them into the two categories (A) and (B), as explained in the introduction. In (A), we have ${}_5C_2 = 10$ textures with a non-vanishing (2, 2) element, and in (B) we also have ten textures with a vanishing (2, 2) element.
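As a quick cross-check of the counting just described, the following lines restate it explicitly; this is only a restatement of the argument in the text, not additional input.

```latex
% Parameter counting for a three-zero down-quark texture.
\[
  \underbrace{6 \times 2}_{\text{6 complex entries}}
  \;-\;
  \underbrace{(3+3-1)}_{\text{independent rephasings of } d_L,\,d_R}
  \;=\; 7
  \;=\;
  \underbrace{6}_{\text{real parameters}} + \underbrace{1}_{\text{CP phase}},
\]
\[
  7 \;=\;
  \underbrace{3}_{m_d,\,m_s,\,m_b}
  + \underbrace{3}_{\theta_{12},\,\theta_{23},\,\theta_{13}}
  + \underbrace{1}_{\delta_{CP}} .
\]
```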
We first discuss the textures in category (A). The ten textures are written in Eq.(3), where $A$, $A'$, $B$, $C$, $C'$ and $D$ are complex parameters. We will show that the first six textures are consistent with the present experimental data. After removing five phases by rephasing the quark fields, these six down-quark mass matrices are parametrized as in Eq.(4), where $a$, $a'$, $b$, $c$, $c'$ and $d$ are real parameters and $\phi$ is the CP violating phase. It should be stressed that our matrices are not symmetric at all. The CP violating phase $\phi$ is put in the (2, 2) entry.⁴ In the next section, we examine these seven parameters numerically to reproduce the three down-quark masses, the three CKM mixing angles, and the CP violating phase. We briefly discuss why the last four textures (the seventh to tenth) in Eq.(3) are excluded by the experimental data: the seventh and eighth give a vanishing CKM mixing angle, the ninth gives a massless quark because its first column is a zero vector in flavor space, and the last one cannot reproduce the magnitude of the CP violation. The details are shown in Appendix A. We now discuss the textures in category (B), in which the (2, 2) element is zero. The ten textures are written in Eq.(5). The first seven textures are also consistent with the present experimental data. After removing five phases by rephasing the quark fields, these seven down-quark mass matrices are parametrized as in Eq.(6). The last three textures (the eighth to tenth) in Eq.(5) are excluded by the experimental data: the eighth and ninth give a vanishing CKM mixing angle, and the tenth cannot reproduce the magnitude of the CP violation. The details are given in Appendix A.
Finally, we comment on the freedom of unitary transformations of the right-handed quarks. Since the CKM matrix describes the flavor mixing among the left-handed quarks, some textures in Eqs. (4) and (6) are equivalent to each other due to this freedom. We show the redundancy among them in Appendix B. In order to determine the left-handed quark mixing angles, we study $M_d M_d^\dagger$.
2.2 CKM parameters for $M_d^{(1)}$
By solving the eigenvalue equation of $M_d M_d^\dagger$ we obtain the down-quark masses, while the eigenvectors lead to the CKM elements $V_{ij}$ and the CKM phase $\delta_{CP}$ in the PDG parametrization [8]. The leading-order expressions use the approximate relations $b \sim c$ and $c' \sim d$, which will be justified by our numerical results. Although there is only one source of CP violation, there are various measurements of the CP violating phase. We can also consider another CP violating quantity, $\beta$ (or $\phi_1$), which is one angle of the unitarity triangle. Indeed, $\sin 2\beta$ has been measured precisely at the B-factories [8]. At leading order it is given concisely by $\sin 2\beta \simeq \sin\phi$.
There is another CP violating observable, the Jarlskog invariant $J_{CP}$ [9], which is derived from the rephasing-invariant combination $\mathrm{Im}\left[V_{us}V_{cb}V_{ub}^{*}V_{cs}^{*}\right]$. The predicted value is expressed exactly in terms of the mass matrix parameters through the quantity $j_{CP} = a^2 b c c' d \sin\phi$.
For the $M_d$ textures, we summarize the three CKM elements $V_{ij}$, the CP violating phase $\delta_{CP}$, and $j_{CP}$ in Table 1.
The quark masses are given as $m_q = y_q v_H$ with $v_H = 174.104$ GeV. For $M_d^{(1)}$, it is convenient to eliminate the parameters $a'$, $d$ and $\phi$ by using Eq.(8), where the three quark masses in Eq.(15) are fixed within their 90% C.L. ranges. There then remain four parameters, $a$, $b$, $c$ and $c'$. We calculate the CKM parameters by scanning these parameters randomly. In Fig.1, we show the frequency distribution of the predicted value of $|V_{us}|$; large angles are allowed as well as small angles, with almost the same weight. In Fig.2, we show $|V_{us}|$ versus $a/a'$. Since we do not assume any relation among the parameters $a$, $b$, $c$ and $c'$, values larger than $1/\sqrt{2}$ are allowed. The observed Cabibbo angle favors $a/a' \simeq 1$, which was assumed in Weinberg's approach. Since we do not assume $a/a' = 1$, the allowed $|V_{us}|$ lies in the range $0-1$ at this stage. We will show that the desired relation $a/a' \simeq 1$ is finally derived without using the observed Cabibbo angle in our approach.
In Figs.3 and 4, we show the frequency distributions of the predicted values of $|V_{cb}|$ and $|V_{ub}|$, respectively. The $|V_{cb}|$ is allowed up to the maximal mixing $1/\sqrt{2}$. On the other hand, $|V_{ub}|$ is predicted to be smaller than 0.02 because of the quark mass hierarchy, as seen in Eqs. (8) and (9). However, it is still much larger than the observed value.
Here we use the present experimental data on the CKM mixing angles and the CP violating phase given in Eq.(16), where $\theta_{ij}$ is defined in the PDG parametrization [8]. We can also use the other CP violating measure, $\sin 2\beta$, as an input parameter; the experimental data on $\sin 2\beta$ are given in Eq.(17) [8]. The predicted $J_{CP}$ will be discussed in comparison with the experimental data. Let us first use the CP violating phase in addition to the three quark masses. We adopt the observed data on $\sin 2\beta$ in Eq.(17), which constrains the parameters more sensitively than the data on $\delta_{CP}$. In Figs.5 and 6, we show the frequency distributions of the predicted values of $|V_{us}|$ and $|V_{cb}|$, respectively. The predicted $|V_{us}|$ is still broad, spanning $0-1$. However, the frequency distribution of $|V_{cb}|$ is impressive because the peak is very sharp around the observed value. Thanks to the constraint from the experimental data on $\sin 2\beta$, the predicted $|V_{cb}|$ is restricted to a narrow region in one step. In Fig.7, we show the frequency distribution of the predicted value of $|V_{ub}|$. This distribution is not much changed compared with the case in Fig.4.
In the next step, we add the constraint of the experimental data on $\theta_{13}$ in Eq.(16). In Fig.8, we show the frequency distribution of the predicted value of $|V_{us}|$. The peak of this distribution is around the experimental value, although large mixing is still allowed. The frequency distribution of the predicted value of $|V_{cb}|$ is presented in Fig.9. This distribution is not much changed compared with the case in Fig.6.
At the last step, we impose the constraint of the experimental data on $\theta_{23}$ in Eq.(16). We then obtain the predicted Cabibbo angle with good accuracy. In Fig.10, the frequency distribution of the predicted value of $|V_{us}|$ is shown, and the predicted $|V_{us}|$ is shown versus the parameter $a/a'$ in Fig.11. The prediction is completely consistent with the experimental data, and the ratio $a/a' = 0.7-1.9$ is obtained. Thus, Weinberg's ansatz $a/a' = 1$ is obtained by using the experimental data on $\sin 2\beta$, $\theta_{13}$ and $\theta_{23}$, without assuming any flavor symmetry. Now we can determine the parameter $a$ (or $a/a'$) precisely by using the experimental data on $\theta_{12}$ in Eq.(16). The seven parameters of $M_d^{(1)}$ are then determined completely by the experimental data. However, further investigation is possible, since the experimental data on $|V_{ub}|$ still have large error bars. We show the predicted ratio $|V_{ub}/V_{cb}|$ versus $\sin 2\beta$ in Fig.12. The upper bound of the predicted ratio is 0.09, while the experimental data allow up to 0.094. Thus, precise measurements of the ratio $|V_{ub}/V_{cb}|$ and $\sin 2\beta$ are a crucial test of our textures.
There is another CP violating parameter, $J_{CP}$, in addition to $\sin 2\beta$ and $\delta_{CP}$. We also show the predicted $J_{CP}$ versus $|V_{ub}|$ in Fig.13, where the red dashed lines denote the experimental bounds in Eqs. (16) and (17). Part of the experimentally allowed region is excluded in our texture, so precise measurements of $|V_{ub}|$ and $J_{CP}$ also provide a test of our textures. The predictions of the mixing angles and the CP violating phase in our thirteen textures are classified into the three groups shown in Table 1. The predictions of the first group have been presented for $M_d^{(1)}$ in the previous subsection. The other mass matrices $M_d^{(k)}$ ($k = 2-6, 11-17$) can be studied in the same way. We have checked numerically that the Cabibbo angle is predicted with good accuracy by using the experimental data on $\sin 2\beta$, $\theta_{13}$ and $\theta_{23}$ without assuming any flavor symmetry. We summarize the allowed regions of the parameters in Tables 2 and 3, where $M_d^{(16)}$ is the exceptional case with $c' > d$. We have found that only six textures among the thirteen are independent, as shown in Appendix B. We also show the predicted ratio $|V_{ub}/V_{cb}|$ versus $\sin 2\beta$ for a representative of the second group in Table 1; the upper bound of the predicted ratio is also 0.09, the same as in Fig.12 for $M_d^{(1)}$, and smaller than the experimental upper bound of 0.094. We show $J_{CP}$ versus $|V_{ub}|$ in Fig.15 for this representative; results for the third group are shown in Fig.17, where the predicted $J_{CP}$ is almost the same as in Fig.13 but different from that in Fig.15. Therefore, precise measurements of $|V_{ub}/V_{cb}|$, $J_{CP}$ and $\sin 2\beta$ are important to distinguish the textures.
Discussions and Summary
In our work, we have considered the minimum number of parameters of the quark mass matrices needed for successful CKM mixing angles and CP violation. We impose three zeros in the down-quark mass matrix while taking the up-quark mass matrix to be diagonal. Three is the maximal number of zeros that keeps a CP violating phase in the quark mass matrix. There then remain six real parameters and one CP violating phase, which is the minimal number needed to reproduce the observed down-quark masses and CKM parameters. In order to reproduce the bottom-quark mass and the CKM mixing angles $V_{us}$ and $V_{cb}$, we take the (3, 3), (2, 3), and (1, 2) elements of the 3 × 3 down-quark mass matrix to be non-vanishing. Therefore, we have ${}_6C_3 = 20$ textures with three zeros. We have found that thirteen of the twenty textures are viable for the down-quark mass matrix. They are classified into three groups in Table 1. It is remarked that these textures have the freedom of unitary transformations of the right-handed quarks, under which some textures are transformed into others. By using such transformations, we have found that six textures among the thirteen are independent (see Appendix B).
As a representative of these six textures, we have discussed the texture $M_d^{(1)}$ in detail to see how well the Cabibbo angle is predicted. By imposing the experimental data on $\sin 2\beta$, $\theta_{13}$ and $\theta_{23}$, the Cabibbo angle is predicted to be close to the experimental data. We have found that this surprising result remains unchanged in all other viable textures. Thus, the Occam's Razor approach is very powerful for obtaining the successful Cabibbo angle.
After fixing all parameters by using the experimental data on the three down-quark masses, the three CKM mixing angles and one CP violating phase, we have investigated the correlation between $|V_{ub}/V_{cb}|$ and $\sin 2\beta$. For all textures, the maximal value of the ratio $|V_{ub}/V_{cb}|$ is 0.09, which is smaller than the experimental upper bound of 0.094. We have also discussed $J_{CP}$ versus $|V_{ub}|$. The predicted $J_{CP}$ is almost the same for the first and third groups of Table 1, but differs for the second group. Precise data on $|V_{ub}/V_{cb}|$, $J_{CP}$ and $\sin 2\beta$ thus provide an important test of our textures.
Our textures have been analyzed at the electroweak scale in this paper. The stability of texture zeros of the quark mass matrix has been examined against the renormalization-group evolution from the GUT scale to the electroweak scale by Xing and Zhao [11]. They found that texture zeros of the quark mass matrix are essentially stable against the evolution. Thus, we expect that the conclusions derived in this paper do not change much even if we consider the textures in Eqs. (4) and (6) at the GUT scale. | 4,986.8 | 2016-01-18T00:00:00.000 | [
"Physics"
] |
Gutzwiller Hybrid Quantum-Classical Computing Approach for Correlated Materials
Inspired by the fast-pace evolution of noisy intermediate-scale quantum (NISQ) computing technology, novel resource-efficient hybrid quantum-classical approaches are under active development to address grand scientific challenges faced by classical computations. Proof-of-principle applications of NISQ technology in quantum chemistry have been reported in solving ground state properties of small molecules. While several approaches have also been proposed to address the long-standing strongly correlated materials problem, a complete calculation of periodic correlated electron model systems on NISQ devices, as a crucial step forward, has not been demonstrated yet. In this paper we showcase the first self-consistent hybrid quantum-classical calculations of the periodic Anderson model on Rigetti quantum devices, within the Gutzwiller variational embedding theoretical framework. It maps the infinite lattice problem to a quasi-particle Hamiltonian coupled to a quantum many-body embedding system, which is solved on quantum devices to overcome the classical exponential scaling in complexity. We show that the Gutzwiller hybrid quantum-classical embedding (GQCE) framework describes very well the quantum phase transitions from Kondo insulator to metal and from metal to Mott insulator in the correlated electron lattice model, with critical parameters at the phase boundaries accurately determined. The GQCE simulation framework, equipped with a full arsenal of evolving quantum algorithms to solve ground state and dynamics of quantum chemistry or equivalently finite embedding systems, is a well-adapted approach toward resolving complicated emergent properties in correlated condensed matter by exploiting NISQ technologies.
I. INTRODUCTION
Quantum computing holds the promise to revolutionize modern high-performance computations by potentially providing exponential speedups, compared to currently known classical algorithms, for a variety of important problems such as Hamiltonian simulation [1,2]. Accurately predicting the properties of competing phases or simulating the dynamics of interacting quantum mechanical many-body systems directly addresses grand challenges in quantum chemistry and materials science, with far-reaching consequences for the science, technology and health sectors of our digital society [3].
While not being fully fault-tolerant, the currently available noisy intermediate-scale quantum (NISQ) hardware [4] is still extremely powerful, as recently demonstrated by the Google team [5]. As the number of coherent gate operations is limited, however, the development of resource-efficient algorithms with sufficiently short quantum circuits is crucial in order to tackle open scientific problems on NISQ devices. One such example is the variational quantum eigensolver (VQE) algorithm, which addresses the eigenvalue problem [6,7]. It was successfully implemented on NISQ technology to yield the ground state energy of small molecules such as H$_2$, HHe$^+$, LiH and BeH$_2$ [6,8-10]. The algorithm is a hybrid quantum-classical approach that combines a quantum computation of a suitable cost function, such as the Hamiltonian expectation value, with a classical optimization routine. Instead of adiabatic state preparation followed by quantum phase estimation [11,12], which requires deep circuits, VQE employs a shallow variational circuit to evolve a chosen initial state into a target state. The objective cost function is then measured as a weighted sum of expectation values of the associated Pauli terms, and the variational parameters are classically optimized to minimize the cost [6,7]. Different forms of the variational circuit have been discussed in the literature, for example based on the unitary coupled cluster (UCC) ansatz [7,13,14] or a trotterized adiabatic preparation ansatz [15]. Common to all of them is that the number of variational parameters rapidly increases with the number of orbitals, which makes the generally non-convex classical optimization problem increasingly difficult to solve. This is further complicated by the presence of noise on real NISQ devices. While directly applicable to small molecules, simulating infinite periodic quantum materials on NISQ devices therefore requires further algorithmic development.
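To make the VQE workflow concrete, the short Python sketch below mimics the loop described above for a toy two-qubit Hamiltonian written as a weighted sum of Pauli terms; it uses plain NumPy linear algebra in place of a quantum device, and the Hamiltonian coefficients and the one-parameter ansatz are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Single-qubit Pauli matrices.
PAULI = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_term(label):
    """Tensor product of two single-qubit Paulis, e.g. 'XY' -> X (x) Y."""
    return np.kron(PAULI[label[0]], PAULI[label[1]])

# Toy two-qubit Hamiltonian as a weighted Pauli sum (made-up coefficients).
terms = {"ZI": 0.4, "IZ": -0.6, "ZZ": 0.2, "XX": 0.1, "YY": 0.1}
H = sum(coeff * pauli_term(lbl) for lbl, coeff in terms.items())

def ansatz_state(theta):
    """One-parameter trial state exp(-i*theta*X0Y1)|01> (illustrative form)."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[1] = 1.0                       # |01> as the Hartree-Fock-like reference
    return expm(-1j * theta * pauli_term("XY")) @ psi0

def energy(params):
    """Cost function <psi(theta)| H |psi(theta)> measured by the optimizer."""
    psi = ansatz_state(params[0])
    return float(np.real(psi.conj() @ H @ psi))

# Classical outer loop minimizing the measured energy.
result = minimize(energy, x0=[0.1], method="COBYLA")
print("VQE energy:", result.fun, " exact ground energy:", np.linalg.eigvalsh(H)[0])
```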
Various proposals of efficient quantum algorithms for correlated materials have been discussed in the literature. For example, using an adiabatic VQE approach in a dual plane wave basis leads to particularly favorable scaling of the circuit depth and of the number of qubits required for periodic systems [16]. That work proposed simulating the Jellium model as a benchmark on near-term devices. Alternatively, there exists a long tradition in correlated materials theory of mapping infinite periodic systems onto effective impurity models. For example, dynamical mean-field theory (DMFT) has been established as the state-of-the-art classical computational approach for correlated materials [17-20]. In a pioneering work, quantum algorithms based on adiabatic state preparation and phase estimation were introduced to solve the impurity Green function repeatedly until convergence with the local lattice Green function is reached [21]. A hybrid quantum-classical approach based on a simplified two-site version of DMFT has also been proposed [22], and recently implemented using a generalized VQE method to find both the ground and excited states of the impurity model [23]. None of the algorithms proposed in these influential works has yet been fully demonstrated on a real NISQ device, since they require resources that are still beyond the current technology [24].
Here, we propose and demonstrate a new resourceefficient hybrid quantum-classical algorithm to simulate correlated materials on NISQ devices: the Gutzwiller quantum-classical embedding (GQCE) simulation framework. We have implemented the algorithm on Rigetti's quantum cloud service using PyQuil [25,26], and have performed the first fully self-consistent calculations of an infinite periodic correlated electron model on a quantum computer. Our GQCE approach is based on the powerful Gutzwiller variational embedding theory that is known to be able to capture many of the phenomena associated with strong local correlations such as Mott-Hubbard transitions [27]. When combined with density-functional theory (DFT), it has been shown to successfully address ground state properties of real correlated materials [27][28][29][30][31][32][33][34].
Similar to DMFT, the Gutzwiller embedding method maps the infinite interacting lattice model onto an effective impurity problem consisting of a cluster of correlated orbitals that are embedded in a self-consistent medium. In contrast to DMFT, which requires to solve for the fully frequency dependent impurity self-energy, the Gutzwiller theory uses only the ground state single-particle density matrix of the correlated impurity cluster. In practice, the approach amounts to finding a self-consistent solution of a set of coupled eigenvalue equations. The Gutzwiller embedding method is therefore ideally suited to be formulated as a hybrid quantum-classical algorithm, where the ground state of the correlated impurity cluster is determined using VQE. The GQCE approach is illustrated in Fig. 1.
As a first non-trivial benchmark study we have performed fully self-consistent GQCE calculations of the periodic Anderson model on Rigetti's Aspen-4 quantum device. Our results show that GQCE correctly describes the ground state phase diagram of the correlated model, which contains the Kondo insulator, correlated metal, and Mott insulator phases [17,35,36]. In contrast to Hartree-Fock theory, the critical parameters for the associated quantum phase transitions are also accurately determined using GQCE. Our work demonstrates the current capabilities of NISQ devices in the simulation of correlated materials. More importantly, it identifies GQCE as a promising framework for performing practical quantum computations of correlated materials in the near term, where NISQ devices may offer a quantum advantage.
II. RESULTS
The hybrid quantum-classical Gutzwiller embedding framework (GQCE) is based on the Gutzwiller variational embedding approach developed in References [27,31], which was shown to be equivalent to the rotationally invariant slave-boson method in the saddle-point approximation. We review the formalism in the supplemental materials (SM) and focus in the main text on the part of the algorithm where quantum computers may offer a quantum advantage. We then present results of GQCE calculations for the infinite PAM lattice model. These fully self-consistent calculations were performed on Rigetti's Aspen-4 quantum computer.
A. GQCE approach to periodic Anderson model
To perform a first non-trivial benchmark study of GQCE for infinite systems, we consider the periodic Anderson model on the Bethe lattice in infinite dimensions, as illustrated in Fig. 1(a). It is specified by a Hamiltonian composed of an itinerant c-band, a local interacting d-orbital, and their onsite hybridization [36],
$$\hat{H} = \sum_{ij\sigma} t_{ij}\, c^{\dagger}_{i\sigma} c_{j\sigma} + \epsilon_c \sum_{i\sigma} n^{c}_{i\sigma} + \epsilon_d \sum_{i\sigma} n^{d}_{i\sigma} + U \sum_{i} n^{d}_{i\uparrow} n^{d}_{i\downarrow} + V \sum_{i\sigma} \left( c^{\dagger}_{i\sigma} d_{i\sigma} + \mathrm{h.c.} \right).$$
The itinerant c-band center and the correlated d-orbital level at each site are given by $\epsilon_c$ and $\epsilon_d$, respectively. $U$ is the intra-orbital Hubbard interaction parameter on the correlated d-level, and $V$ determines the onsite hybridization strength between c- and d-electrons.
On a Bethe lattice in infinite dimensions, i.e., with infinite nearest-neighbour connectivity, the conduction band has a density of states of the semicircular form $\rho_c(\epsilon) = \frac{2}{\pi D^2}\sqrt{D^2 - \epsilon^2}$, where $D$ is the half band width, hereafter set to 1. The model hosts a variety of paramagnetic electronic phases, including metal, band insulator, Kondo insulator, and Mott insulator, together with the quantum phase transitions between them, which have been extensively studied in the literature [17,35,36]. Highly accurate numerical results have been obtained within the framework of DMFT [17-20], which becomes exact in infinite dimensions. This renders the PAM on the Bethe lattice an ideal testbed for hybrid quantum-classical calculations of infinite systems on NISQ devices.
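As a quick numerical sanity check of the semicircular density of states quoted above (the $2/(\pi D^2)$ prefactor is the standard unit-normalization, stated here as an assumption consistent with the text), one can verify its normalization and second moment:

```python
import numpy as np
from scipy.integrate import quad

D = 1.0  # half band width, set to 1 as in the text

def rho_c(eps, D=D):
    """Semicircular conduction-band density of states on the Bethe lattice."""
    return 2.0 / (np.pi * D**2) * np.sqrt(np.maximum(D**2 - eps**2, 0.0))

norm, _ = quad(rho_c, -D, D)
second_moment, _ = quad(lambda e: e**2 * rho_c(e), -D, D)
print(f"DOS normalization: {norm:.6f}")     # ~1.0
print(f"Second moment:     {second_moment:.6f}")  # D^2/4 = 0.25 for D = 1
```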
In this work, we choose the particle-hole symmetric point of $\hat{H}_d$ with the Fermi level at 0, i.e., $\epsilon_d = -U/2$, and fix $V = 0.4$, $U = 2$. In this parameter space, the system starts in a Kondo insulating (KI) phase at c-band center $\epsilon_c = 0$, transforms to a metallic (M) phase with rising $\epsilon_c$, and finally enters the Mott-Hubbard insulating (MI) phase [36]. The zero-temperature critical value $\epsilon_c^{KI-M}$ at the KI-M phase boundary is 0.07, and the critical value $\epsilon_c^{M-MI}$ for the M to MI transition is 1.08, as obtained from DMFT calculations with the numerical renormalization group (NRG) as the impurity solver [36,37], which are numerically exact for the model studied here.
We consider a Gutzwiller variational ansatz with the correlator acting on the Hilbert space spanned by the single-particle d-orbital only. This leads to a set of coupled eigenvalue problems for the Gutzwiller embedding Hamiltonian, which provides an accurate description of the local degrees of freedom, and for the effective mean-field quasi-particle Hamiltonian, as shown in Fig. 1. Apart from a constant, the Gutzwiller embedding Hamiltonian can be written as
$$\hat{\mathcal{H}}_{\mathrm{emb}} = \epsilon_d \sum_{\sigma} d^{\dagger}_{\sigma} d_{\sigma} + U\, n^{d}_{\uparrow} n^{d}_{\downarrow} + \sum_{\sigma} \left( \mathcal{D}\, d^{\dagger}_{\sigma} f_{\sigma} + \mathrm{h.c.} \right) - \lambda^{c} \sum_{\sigma} f^{\dagger}_{\sigma} f_{\sigma}.$$
Here $\mathcal{D}$ is the coupling strength between the d-orbital and the bath orbital $f$, which sits at an orbital level $-\lambda^c$; both are inputs obtained from calculations of the effective quasi-particle Hamiltonian (see Eqs. S14 and S21). The general local one-body matrices, such as the kinetic energy renormalization matrix $R$ introduced in Eq. S6 and the coupling matrix $D$, take a 2 × 2 diagonal form with degenerate diagonal elements due to spin symmetry in the paramagnetic state.
The output $R$ and $\lambda$ matrices are calculated from the ground state solution of the embedding Hamiltonian, more specifically from its one-particle density matrix, together with the ground state energy, which enters the total energy evaluation of the whole system. To solve for the ground state of the embedding Hamiltonian using VQE on quantum computers, we first translate the fermionic Hamiltonian into the molecular orbital representation obtained from a spin-restricted Hartree-Fock (HF) calculation. It is subsequently transformed into a qubit representation via the parity mapping [38,39]. As the ground state is restricted to total $S = 0$ at half-filling ($N_e = 2$), the embedding Hamiltonian can be represented in a two-qubit basis, exploiting the $Z_2$ symmetries, as a weighted sum of Pauli strings with coefficients $\{g\}$. Here $X_i$, $Y_i$ and $Z_i$ are the Pauli operators acting on qubit $i$, and the coefficients $\{g\}$ are determined by the embedding Hamiltonian parameters in Eq. (5) as well as by the HF molecular orbitals. The asymmetric two-site embedding Hamiltonian is slightly more complex than that of the hydrogen dimer H$_2$, which is a widely used example for the application of VQE in quantum chemistry [9]. To find the ground state energy and single-particle density matrix, we use VQE with a unitary coupled-cluster (UCC) ansatz [6]. For a two-electron system, the UCC ansatz at the double-excitation level is known to be exact, and single excitations do not contribute to the ground state energy according to Brillouin's theorem [40]. The UCC ansatz can be reduced to a simple form in the two-qubit representation using the parity transformation [38,39], with a single variational parameter $\theta$ bounded by $[-\pi, \pi]$. The state $|01\rangle$ is the spin-restricted HF ground state wave function, which is obtained by a standard self-consistent calculation using the PySCF package, run efficiently on classical computers [41].
To obtain the expectation value of the embedding Hamiltonian in Eq. (6) under the UCC wave function on quantum computers, we group Pauli terms that are diagonal in a common tensor-product basis. A typical quantum circuit, composed of the initial HF state preparation, the UCC ansatz, and a measurement of the Pauli term $X_0 X_1$, is shown in Fig. 1(c). In addition to the Pauli terms in the embedding Hamiltonian, the Pauli term $Y_0$ is also measured with the optimized UCC ansatz in order to derive the one-particle density matrix of the embedding system. The VQE code is developed on top of the quantum computing library pyQuil [25,26], and we use a simultaneous perturbation stochastic approximation algorithm to optimize the noisy objective function on real devices [42]. The GQCE calculation amounts to finding a fixed-point solution of the Gutzwiller nonlinear Lagrange equations (S14-S20) presented in the Supplementary Material (SM). These equations correspond to a set of coupled eigenvalue problems.
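As an illustration of the term-grouping step just described, the snippet below sorts Pauli strings into sets that can be read out from the same measurement basis; the labels and coefficients are placeholders, not the actual $\{g\}$ coefficients of the embedding Hamiltonian.

```python
from collections import defaultdict

def measurement_basis(pauli_string):
    """Map a Pauli string like 'XZ' to the per-qubit measurement basis;
    'I' and 'Z' are both read out in the computational (Z) basis."""
    return tuple('Z' if p in ('I', 'Z') else p for p in pauli_string)

def group_diagonal_terms(terms):
    """Group Pauli terms that are diagonal in a common tensor-product basis,
    so each group can be estimated from the same set of shots."""
    groups = defaultdict(list)
    for label, coeff in terms.items():
        groups[measurement_basis(label)].append((label, coeff))
    return dict(groups)

# Illustrative two-qubit Pauli terms (placeholder coefficients).
terms = {"II": -0.3, "ZI": 0.4, "IZ": -0.6, "ZZ": 0.2, "XX": 0.1, "YY": 0.1}
for basis, members in group_diagonal_terms(terms).items():
    print(basis, members)   # e.g. ('Z','Z') collects II, ZI, IZ, ZZ
```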
When the error vector function $F$, defined in equations (S19-S20) of the SM, can be evaluated accurately enough, the modified Powell hybrid method [43], which employs information about the numerical Jacobian, is the method of choice for reaching the solution. Since the noise level of the real quantum device due to gate infidelities and decoherence is significant [44], however, $F$ cannot be calculated accurately within GQCE. We therefore employ the "exciting-mixing" method, which uses a self-tuned diagonal Jacobian approximation, instead of the Powell method. It is implemented in the SciPy library [45] and is well suited to solving root problems for noisy nonlinear equations. The iterative procedure starts by solving the quasi-particle Hamiltonian defined by an initial guess of $\{R, \lambda\}$ (Eq. S14). This yields the matrices $\{\mathcal{D}, \lambda^c\}$, which define the embedding Hamiltonian (Eq. S18). The embedding Hamiltonian is solved on quantum computers to obtain the ground state single-particle density matrix. The error vector function (Eqs. S19 and S20) is then evaluated, based on which the matrices $\{R, \lambda\}$ are adjusted by the "exciting-mixing" algorithm. The procedure continues with the updated quasi-particle Hamiltonian until convergence is reached.
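A minimal sketch of this outer self-consistency loop is given below, using SciPy's "excitingmixing" root finder on a stand-in error function; the real $F$ in the paper is produced by the quasi-particle and embedding steps, which are only mocked here by a toy nonlinear function plus noise.

```python
import numpy as np
from scipy.optimize import root

def gutzwiller_error(x):
    """Stand-in for the Gutzwiller error vector F(R, lambda).

    In the actual GQCE loop this would (i) solve the quasi-particle
    Hamiltonian for the current {R, lambda}, (ii) build and solve the
    embedding Hamiltonian (on the quantum device) for its one-particle
    density matrix, and (iii) return the resulting mismatch. Here a toy
    nonlinear function plus noise mimics a noisy quantum evaluation.
    """
    noise = 0.01 * np.random.randn(*x.shape)   # device-noise stand-in
    return np.tanh(x) - 0.3 * x + noise

x0 = np.array([0.8, -0.2])                     # initial guess for (R, lambda)
sol = root(gutzwiller_error, x0, method="excitingmixing",
           options={"maxiter": 200, "fatol": 0.02})
print("solution estimate:", sol.x, " converged:", sol.success)
```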
B. Quantum simulation results
The GQCE calculations on the PAM are carried out in two ways. First, we use a statevector simulator, which represents an ideal fault-tolerant quantum computer with an infinite number of measurements. Second, we use Rigetti's Aspen-4 quantum device, which contains 13 qubits in total. Figure 2 demonstrates the convergence of the total energy, the kinetic energy renormalization factor Z ≡ R†R, and the maximal element of the error vector F in our GQCE calculation on Aspen-4 as a function of the iteration number. The iterative nonlinear solver starts from the Hartree-Fock mean-field solution and reaches convergence after about 20 iteration steps; the remaining steps are used to estimate the error bars. The calculation results on the real quantum device closely follow those of the noiseless simulations, with fluctuations that result from the device noise. The maximal absolute value of the error vector elements in Fig. 2(c) levels off near 0.01, which coincides with the scale of the two-qubit CZ-gate infidelity of the device. Because of the stochastic nature of quantum computing on real devices, hereafter we report results as mean values with errors estimated from the final 20 iterations. The standard deviation is estimated to be 0.05 (3%) for the total energy and 0.03 (5%) for the Z-factor.
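The quoted statistics can be reproduced schematically as follows; the iteration traces below are made-up placeholders, not the Fig. 2 data, and serve only to illustrate the mean-and-standard-deviation bookkeeping over the final 20 iterations.

```python
# Hedged sketch: report mean +/- standard deviation over the tail of a noisy
# iteration history (placeholder data, not the measured Aspen-4 traces).
import numpy as np

rng = np.random.default_rng(1)
energy_trace = -1.60 + 0.05 * rng.standard_normal(60)   # placeholder history
z_trace = 0.62 + 0.03 * rng.standard_normal(60)          # placeholder history

def report(trace, tail=20):
    tail_vals = np.asarray(trace)[-tail:]
    mean, std = tail_vals.mean(), tail_vals.std(ddof=1)
    return mean, std, 100 * std / abs(mean)   # value, error, relative error (%)

for name, trace in [("total energy", energy_trace), ("Z-factor", z_trace)]:
    m, s, rel = report(trace)
    print(f"{name}: {m:.3f} +/- {s:.3f} ({rel:.1f}%)")
```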
In Fig. 2, the center of the conduction band is set to zero, ε_c = 0. The system is then in the Kondo insulator phase, where the local correlated d-orbital, which is also located at zero energy, hybridizes with the c-band and opens a Kondo gap. The Z-factor in Fig. 2(b) shows an appreciable reduction from unity, manifesting the effects of the local on-site Coulomb interaction: the kinetic energy (or hybridization) is effectively lowered, and with it the Kondo energy scale and the Kondo gap.
Let us now consider the quantum phase diagram as we tune the position ε_c of the conduction band. Even in this restricted parameter space, where all other parameters are held fixed, the PAM goes through a series of quantum phase transitions, from Kondo insulator to metal and from metal to Mott insulator. We compare our GQCE findings to the numerically exact phase boundaries at zero temperature that have been determined by DMFT calculations using NRG as the impurity solver [36,37]. To extract the phase boundaries, we calculate the change of the total energy E, the renormalization Z-factor, and the total electron filling n as a function of ε_c. Results are shown in the upper panels of Fig. 3, which also includes the numerically exact phase boundaries from DMFT+NRG for comparison.
As seen in Fig. 3(a), the total energy from noiseless simulations monotonically increases with increasing ε_c and reaches a constant as the system crosses the metal-Mott insulator transition. The GQCE calculations on Aspen-4 follow the exact energy curve closely along the phase transformation path, yet with a sizable error bar that originates from the noise of the device. The error bar drops by several orders of magnitude in the Mott insulator phase due to an efficient treatment of the (orbital-selective) Mott insulating phase within Gutzwiller theory, which exploits the fact that Mott-localized quasi-particle bands are pinned at the chemical potential with integer filling [27]. The embedding Hamiltonian in the Mott phase has a doubly degenerate ground state, which can be written as the tensor product states |00⟩ and |11⟩ in the two-qubit parity basis. In practice, we choose one of the states to evaluate the energy and one-particle density matrix, followed by a symmetrization in the spin sector to recover spin symmetry. As only shallow circuits with measurements of a tensor product state are necessary, the GQCE calculations generate accurate results owing to the high fidelity of the single-qubit gates on the real device.
We compare GQCE to HF calculations, in which the embedding Hamiltonian solver is chosen to be at the HF mean-field level (green curves in Fig. 3). Within HF, the total energy is monotonically increasing and significantly larger than the GQCE result. Crucially, it bears no signature of the metal-insulator phase transitions. The important physical phenomenon that is not captured within HF theory is the suppression of energetically unfavorable doubly occupied sites in the Hilbert space of the correlated d-orbitals.
In Fig. 3(b), we show the kinetic energy renormalization Z-factor [46], which is a key physical concept captured by Gutzwiller theory. When the conduction band center rises above the zero chemical potential, the renormalization Z-factor drops gradually and vanishes at the metal to Mott-insulator transition. Remarkably, for the parameters shown, GQCE predicts a metal-Mott insulator transition phase boundary that is in perfect agreement with the numerically exact value obtained from DMFT. The Z-factor obtained from GQCE calculations on the Aspen-4 quantum device closely follows the exact simulated data, with sizable error bars in the Kondo insulating and metallic phases. Again, the error drops significantly in the Mott phase for the reasons described above. Within the HF approximation, the renormalization factor remains constant, Z_HF = 1, demonstrating that the metal-Mott insulator transition is beyond the description of HF theory.
Finally, in Fig. 3(c), we show that the variation of the total electron filling is an effective way to locate the phase boundaries. In the Kondo (Mott) insulator phase, the electron filling is equal to two (one), while it lies between these two values in the correlated metal phase. The electron filling obtained from GQCE calculations on Aspen-4 agrees well with the exact statevector simulations. Its behavior can be used to locate the phase boundaries, and the obtained critical parameter values are in decent agreement with the numerically exact ones. In contrast, the HF approach can only identify the transition from the Kondo insulator to the metal. As the correlation-induced renormalization of the hybridization is not captured within HF theory, the Kondo energy scale is overestimated and the Kondo insulator phase incorrectly persists up to larger values of ε_c (see the dotted line in Fig. 3(c)).
III. DISCUSSION
The Gutzwiller method adopts a Jastrow-type variational wave function, which describes the ground state properties of a correlated model beyond an effective single-particle mean-field theory. Although there is no efficient way to evaluate the associated full Green's function, its coherent part can be straightforwardly calculated [31]. The resulting coherent spectral density of states (DOS), which includes the coherent quasi-particle excitations, can be used to distinguish the different quantum phases of the model. It is shown in the lower panels of Fig. 3(d-f), which correspond to the Kondo insulator, correlated metal, and Mott insulator phases. Data from GQCE calculations on Rigetti's Aspen-4 device are shown to be in excellent agreement with the exact simulation results.
In the Kondo insulator phase (Fig. 3(d)), the correlated d-orbital at the band center hybridizes with the conduction band, resulting in a finite hybridization gap. The inset shows that the hybridization gap from GQCE calculations agrees well with the exact simulation result and is significantly reduced compared with the HF mean-field value, owing to the correlation-induced renormalization of the hybridization strength. As the conduction band is lifted to ε_c = 0.8, the system is situated in a metallic phase. The hybridization gap is still present but moves to higher energy, and the chemical potential is located at the sharp quasi-particle resonance peak. The total coherent spectral weight decreases in accordance with the smaller quasi-particle weight Z shown in Fig. 3(b). At ε_c = 1.3, the coherent spectral weight completely vanishes as the d-orbital becomes Mott localized at half-filling. In the Mott phase, the incoherent lower and upper Hubbard bands, together with the conduction c-band, define the band gap size and distinguish between a Mott-Hubbard and a charge-transfer insulator phase. Although the GQCE calculations at this level cannot explicitly generate the Hubbard bands [47], the band gap size and its character can still be resolved by varying the chemical potential and monitoring the electron filling [48].
The GQCE framework is built on the open-source "CyGutz" package, which is an implementation of the Gutzwiller embedding approach on classical computers [34]. We have developed the quantum computing module of GQCE using IBM Qiskit and Rigetti's Forest SDK [25,26,49], which will also be released as open source. The statevector simulator in IBM Qiskit and the wavefunction simulator in the Forest SDK have been employed for the noiseless simulations. The GQCE calculations on the real quantum device are conveniently performed through the quantum cloud service (QCS) provided by Rigetti. It provides a quantum machine image that is co-located with the quantum infrastructure, which allows fast execution of hybrid quantum-classical programs at low latency cost. As previously mentioned, the quantum processing unit (QPU) used in this study is Aspen-4. The device contains 13 qubits in total, among which we choose qubits 0 and 1 for the calculations. The associated two-qubit CZ-gate, which is one controlling factor for the noise level of the calculation results, has a fidelity of about 95.8%. Platform-level optimizations of parametric compilation and active qubit reset, which dramatically reduce the latency on the QCS platform, have been utilized in our GQCE calculations. We adopt a resource-efficient readout error mitigation technique for the measurements [26]. It first measures the expectation value ξ of each Pauli term of the objective function in the associated tensor product ground state, which essentially characterizes the readout error rate. The expectation value of the same observable in the VQE ground state is then divided by ξ, correcting the bias due to measurement errors. More advanced error mitigation approaches, such as Richardson extrapolation techniques, have been proposed and experimentally realized recently [44]. It is of great interest to explore how feasible and efficient these techniques are for further improving the accuracy of GQCE calculations.
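The rescaling step of this mitigation scheme is simple enough to sketch directly; the calibration factors and raw expectation values below are illustrative placeholders, not measured Aspen-4 data.

```python
# Minimal sketch of readout-error rescaling: calibrate each Pauli term in a
# tensor-product state whose ideal value is known (+1 or -1), then divide the
# VQE-state measurement by that calibration factor xi.
def mitigate(raw_expectations, calibration_xi):
    """Divide each raw expectation value by its calibration factor."""
    return {term: raw / calibration_xi[term]
            for term, raw in raw_expectations.items()}

# Calibration run: measured magnitude < 1 because of readout error (placeholders).
calibration_xi = {"Z0": 0.93, "Z1": 0.94, "Z0Z1": 0.90, "X0X1": 0.89}
# Raw expectation values measured in the optimized VQE state (placeholders).
raw = {"Z0": -0.41, "Z1": 0.37, "Z0Z1": -0.62, "X0X1": 0.50}

print(mitigate(raw, calibration_xi))
```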
To conclude, we have successfully implemented a novel hybrid quantum-classical simulation framework for correlated materials, based on the Gutzwiller variational embedding theory. In combination with density functional theory, this GQCE approach can describe ground state properties of correlated multi-orbital quantum materials. The GQCE method lends itself well to NISQ technology, as it maps the infinite lattice system to an effective interacting impurity model that is self-consistently coupled to a non-interacting fermionic bath. To obtain a self-consistent solution of the set of coupled eigenvalue equations, we employ VQE with a standard UCC ansatz that runs on quantum computing devices. Using Rigetti's quantum cloud service, we have performed the first fully self-consistent hybrid quantum-classical calculation of an infinite correlated model. As a non-trivial benchmark calculation, we have investigated the ground state quantum phase diagram of the periodic Anderson model and found good agreement with numerically exact results.
Going forward, GQCE calculations share the favorable polynomial system-size scaling of VQE approaches in solving the correlated embedding Hamiltonian. Therefore, GQCE promises to be able to handle larger embedding cluster Hamiltonians, which take multi-orbital or spatial correlations into account. These are necessary to describe the impact of short-range non-local fluctuations [50] and non-local order parameters such as d-wave superconductivity [51][52][53] or vestigial orders [54]. In the near term, a robust VQE solution of a 28-qubit Hubbard-type Hamiltonian, which is equivalent to a Gutzwiller embedding Hamiltonian of a single f-orbital site in rare-earth and actinide materials, would bring the capabilities of GQCE calculations on NISQ devices to the verge of what is currently possible on classical computers, thus demonstrating practical quantum advantage.

The theory is based on the Gutzwiller approximation (GA) [S1, S5-S7], which becomes exact for systems in infinite dimension or with infinite coordination number [S4]. It is equivalent to the rotationally invariant slave-boson method within the saddle-point approximation [S2, S8, S9]. For a nonlocal one-particle density operator or hopping operator ĉ†_Riα ĉ_Rjβ, which cannot be fully represented in the reduced local Hilbert space spanned by the correlated orbitals at a single site, the expectation value takes the same closed form as in the noninteracting wavefunction, subject to an additional renormalization with R the renormalization matrix. The f̂ operator is introduced to distinguish the quasi-particle operators from the physical ĉ operators defined in the Hamiltonian; the renormalization describes the physical, local electron correlation-induced reduction of the kinetic energy of the system. For any local observable Ô_i[{ĉ†_iα}, {ĉ_iα}], which is defined in a local correlated Hilbert space, the expectation value can be rigorously evaluated through the local reduced many-body density matrix, or equivalently through the ground state wavefunction Φ_i of the Gutzwiller embedding Hamiltonian Ĥ^emb_i at site i, with the embedding Hamiltonian of the form Ĥ^emb_i = Ĥ^loc_i[{ĉ†_iα}, {ĉ_iα}] + Σ_aα (D_aα ĉ†_iα f̂_ia + H.c.) + Σ_ab [λ_c]_ab f̂_ib f̂†_ia. The embedding Hamiltonian essentially describes a physical impurity Ĥ^loc_i coupled to a quadratic bath λ_c of the same orbital dimension, with a coupling matrix D.
The total energy of the system per unit cell can now be written as the sum of two contributions: the expectation value of an effective quasi-particle Hamiltonian, with kinetic part T̂, and the expectation values of the quantum embedding Hamiltonians.
III. LAGRANGE EQUATIONS
The total energy in Eq. S9 is to be minimized in the parameter space defined by the noninteracting wavefunction Ψ_0 and the local Hilbert space of each nonequivalent embedding Hamiltonian, subject to the Gutzwiller constraints. The constrained minimization can be conveniently formulated with the following Lagrange function | 6,288.8 | 2020-03-09T00:00:00.000 | [
"Physics"
] |
Ceramide Phosphoethanolamine Biosynthesis in Drosophila Is Mediated by a Unique Ethanolamine Phosphotransferase in the Golgi Lumen♦
Background: Many invertebrates contain ceramide phosphoethanolamine (CPE) rather than sphingomyelin as key membrane component. Results: Insect-specific CPE synthase belongs to a novel branch of CDP-alcohol phosphotransferases with unique membrane topology. Conclusion: CPE production is catalyzed by a CDP-ethanolamine:ceramide ethanolamine phosphotransferase in the Golgi lumen. Significance: Identification of CPE synthase provides a novel opportunity to elucidate the biological role of an enigmatic but widespread sphingolipid. Sphingomyelin (SM) is a vital component of mammalian membranes, providing mechanical stability and a structural framework for plasma membrane organization. Its production involves the transfer of phosphocholine from phosphatidylcholine onto ceramide, a reaction catalyzed by SM synthase in the Golgi lumen. Drosophila lacks SM and instead synthesizes the SM analogue ceramide phosphoethanolamine (CPE) as the principal membrane sphingolipid. The corresponding CPE synthase shares mechanistic features with enzymes mediating phospholipid biosynthesis via the Kennedy pathway. Using a functional cloning strategy, we here identified a CDP-ethanolamine:ceramide ethanolamine phosphotransferase as the enzyme responsible for CPE production in Drosophila. CPE synthase constitutes a new branch within the CDP-alcohol phosphotransferase superfamily with homologues in Arthropoda (insects, spiders, mites, scorpions), Cnidaria (Hydra, sea anemones), and Mollusca (oysters) but not in most other animal phyla. The enzyme resides in the Golgi complex with its active site facing the lumen, contrary to the membrane topology of other CDP-alcohol phosphotransferases. Our findings open up an important new avenue to address the biological role of CPE, an enigmatic membrane constituent of a wide variety of invertebrate and marine organisms.
Sphingolipids are essential components of the plasma membrane. They are primarily concentrated in the exoplasmic leaflet, providing an important structural framework for plasma membrane organization and function (1). As in mammals, Drosophila sphingolipids are critical for developmental processes such as embryogenesis, neurogenesis, and gametogenesis, whereas intermediates of sphingolipid metabolism have been associated with signal transduction cascades, cell death, and phagocytosis (2,3).
Nevertheless, there are some remarkable differences between sphingolipids of Drosophila and mammals. The major sphingoid bases in Drosophila and other dipterans are tetradecasphingenine (C14) and hexadecasphingenine (C16) as compared with octadecasphingenine (C18) in mammals (4,5). Also, the fatty acids that are amino-linked to the sphingoid bases to create ceramides are shorter in Drosophila sphingolipids in comparison with mammals. These characteristics predict that membranes would remain fluid even at lower temperature, which correlates well with the requirement of lower ambient temperatures for Drosophila survival. Moreover, Drosophila lacks the phosphocholine-containing sphingomyelin (SM) 4 found in mammalian membranes and instead synthesizes ceramide phosphoethanolamine (CPE) (4, 6, 7). The smaller cross-sectional area of the phosphoethanolamine headgroup in CPE allows a closer contact between these molecules in comparison with SM, promoting membrane viscosity. Contrary to SM, CPE does not interact favorably with cholesterol and fails to form sterol-rich domains in model bilayers (8). Addressing how each organism evolved functional membranes based on such highly divergent membrane components is an important topic in lipid biology.
SM biosynthesis in mammals is catalyzed by a PC:ceramide cholinephosphotransferase (EC 2.7.8.27) or SM synthase (SMS) (9). This enzyme catalyzes the transfer of phosphocholine from phosphatidylcholine (PC) onto ceramide, yielding SM and diacylglycerol. Mammalian cells contain two SM synthase isoforms, namely SMS1 responsible for bulk production of SM in the Golgi lumen and SMS2 serving a role in regenerating SM from ceramides liberated by sphingomyelin phosphodiesterase on the exoplasmic surface of the plasma membrane (10,11). Both SMS1 and SMS2 are required for cell growth, at least in certain types of cancer cells (12,13). Together with a closely related enzyme, SMSr, they form the SMS protein family (10). Mammalian cells also produce CPE, although its concentration in membranes is very low and its biological role is unknown. Two CPE synthase activities have been described in mammalian cells, one enriched in a microsomal fraction (presumably ER) and the other one associated with the plasma membrane (14 -16). As PE serves as the headgroup donor for both activities, the enzyme(s) involved can be classified as PE:ceramide ethanolamine phosphotransferases analogous to SM synthase. We previously demonstrated that SMS2 is a bifunctional enzyme that produces both SM and CPE (17). Thus, SMS2 likely accounts for the plasma membrane-resident CPE synthase activity reported previously (14,16). The function of SMSr had so far been unknown, but we recently identified it to be a monofunctional CPE synthase that resides in the ER (17,18). SMSr thus qualifies for the microsomal CPE synthase activity first described by Malgat et al. (14).
Drosophila lacks SMS1 and SMS2 homologues, but contains a homologue of SMSr, which we named dSMSr. Although dSMSr possesses CPE synthase activity, its removal had no impact on bulk production of CPE in Drosophila S2 cells (18). In vitro enzyme assays revealed that these cells contain a second, dSMSr-independent CPE synthase that uses CDP-ethanolamine rather than PE as headgroup donor in CPE biosynthesis. This implied that the latter enzyme uses a reaction mechanism different from the one used by SMS family members, but similar to that of the enzymes producing phosphatidyl-ethanolamine via the Kennedy pathway. We here set out to identify the enzyme responsible for bulk production of CPE in Drosophila.
Antibodies-Rabbit polyclonal and mouse monoclonal anti-V5 antibodies were from Sigma and Invitrogen, respectively. The mouse monoclonal anti-GM130 antibody was from BD Biosciences, and the rabbit polyclonal anti-calnexin antibody was from Santa Cruz Biotechnology. The rabbit polyclonal anti-dGolgin245 antibody was a generous gift from Sean Munro (Cambridge, UK). The rabbit polyclonal anti-dGMAP antibody was described by Kondylis et al. (19). Horseradish peroxidase-conjugated secondary antibodies were from PerBio, whereas antibodies conjugated to FITC and Texas Red or Alexa dyes were purchased from Jackson ImmunoResearch Laboratories or Molecular Probes, respectively. The antibody against dSMSr was obtained as described (18).
Selection, Cloning, and Expression of dCCS Sequences-Selection of candidate CPE synthases (CCS) from the National Center for Biotechnology Information (NCBI) database involved the following steps: 1) selection of proteins containing a CDP-alcohol phosphatidyltransferase motif (NCBI accession number cl00453); 4982 RefSeq proteins; 2) restriction of results to Drosophila melanogaster; 14 RefSeq proteins; 3) restriction of results to one isoform per gene; 8 RefSeq proteins; 4) removal of one incomplete (CG40928) and one misannotated sequence (CG6921); 6 RefSeq proteins; 5) removal of phosphatidylinositol synthase (PIS) (CG9245) and cardiolipin synthase (CLS) (CG4774) proteins because their biochemical function is certain; 4 RefSeq proteins. This procedure yielded four CCS proteins in Drosophila, namely dCCS1 (CG33116), dCCS2 (CG6016), dCCS3 (CG7149), and dCCS4 (CG4585). The open reading frames (ORFs) of the corresponding dCCS sequences were amplified by RT-PCR (Titan One, Roche Applied Science) from mRNA isolated from Drosophila S2 cells (TRIzol, Invitrogen) using the primers listed in Table 1. PCR products were cloned into mammalian expression vector pcDNA3.1/V5-His-TOPO (Invitrogen), and the resulting plasmids dCCS1-V5, dCCS2-V5, dCCS3-V5, and dCCS4-V5 were used to transfect HeLa cells. For expression studies in S2 cells, the cDNAs were subcloned into the copper-inducible pMT/V5-His B (Invitrogen) vector using the restriction sites KpnI and XhoI (for dCCS1 and dCCS3) or KpnI and NotI (for dCCS2 and dCCS4). PE-methyltransferase-GFP plasmid was obtained as described in Ref. 18.
Cell Culture and RNA Interference-Human HeLa cells were grown in DMEM with 10% FCS. Transfections with dCCS1-V5, dCCS2-V5, dCCS3-V5, and dCCS4-V5/pcDNA3.1 constructs were performed using Lipofectamine reagent (Invitrogen) following the manufacturer's instructions. Drosophila S2 cells were grown in Schneider's insect medium with 10% FBS (Cambrex) at 27°C in a humidified atmosphere. Cells were transfected with dCCS1, dCCS2, dCCS3, or dCCS4/pMT/V5-HisB constructs using Effectene (Qiagen). Expression of recombinant dCCS proteins was induced by the addition of 1 mM CuSO4 for 3 h followed by a 2-h chase in the presence of 150 μg/ml cycloheximide. RNAi on Drosophila S2 cells was performed by treatment with double-stranded RNA (dsRNA) synthesized by in vitro transcription of PCR products flanked by T7 RNA polymerase binding sites (TTAATACGACTCACTATAGGGAGA) using the MEGASCRIPT T7 transcription kit (Ambion). PCR products of 753, 604, 651, and 530 bp were amplified from dCCS1, dCCS2, dCCS3, and dCCS4 cDNAs, respectively, using primer sets listed in Table 1. dsRNAs targeting dSMSr and green fluorescent protein (GFP) were obtained as described in Ref. 18, and dsRNA treatment was performed as described in Ref. 20. On day 1, 10^6 cells were plated in a 35-mm dish and incubated with 30 μg of dsRNA in 1 ml of serum-free medium for 1 h at room temperature followed by the addition of 2 ml of complete medium. After 3 days, cells were either harvested for enzyme assays and metabolic labeling or fixed for immunofluorescence.
In Vitro Enzyme Assay-HeLa and S2 cells were lysed in ice-cold reaction buffer. Reactions were stopped by adding 1 ml of MeOH and 0.5 ml of CHCl3, and lipids were extracted according to Bligh and Dyer (21). The lower phase was evaporated under N2, and the reaction products were analyzed by TLC using CHCl3/acetone/MeOH/acetic acid/H2O (50/20/10/10/5, v/v/v/v/v; reactions containing NBD-Cer) or CHCl3/MeOH/25% NH4OH (50/25/6, v/v/v; reactions containing CDP-[14C]ethanolamine). Fluorescent lipids were visualized on a STORM 860 image analysis system (GE Healthcare) and quantified with Quantity One software (Bio-Rad). Radiolabeled lipids were detected by exposure to BAS-MS imaging screens (Fuji Photo Film), scanned on a Bio-Rad personal molecular imager, and quantified with Quantity One software.
Cell Surface Enzyme Assay-This assay was performed essentially as described in Ref. 17. In brief, HeLa cells transfected with dCCS4 or empty vector were grown to 80-90% confluence in a 10-cm dish. The cells were washed in HBSS and preincubated with HBSS containing 1% fatty acid-free BSA at 5°C for 30 min. To permeabilize the plasma membrane, cells were treated with 1 μg/ml streptolysin in HBSS for 15 min at 37°C prior to preincubation in BSA-supplemented HBSS at 5°C. Next, NBD-Cer dissolved in ethanol was added to a final concentration of 2 μM (0.2% ethanol in the medium), and the cells were incubated at 5°C for 3 h in the presence or absence of 10 mM MnCl2 and 500 μM CDP-ethanolamine. The incubation medium was saved, and the cells were washed by incubating with HBSS containing 1% fatty acid-free BSA at 5°C for 30 min. Incubation medium and wash were combined and subjected to lipid extraction according to Bligh and Dyer (21). Fluorescent lipids were analyzed by TLC using CHCl3/acetone/MeOH/acetic acid/H2O (50/20/10/10/5, v/v/v/v/v) and visualized as described above.
Microscopy and Image Analysis-Cells were fixed in 4% paraformaldehyde/PBS and processed for immunofluorescence after permeabilization with Triton (S2 cells) as in Ref. 22 or saponin (HeLa cells) as in Ref. 13. Images were captured using a confocal microscope (D-eclipse C1, Nikon) with a 60× 1.40 NA Plan Apo oil objective (Nikon). Images presented are confocal sections.
RESULTS AND DISCUSSION
Drosophila Contains a Unique CPE Synthase Unrelated to dSMSr-Drosophila S2 cell lysates incubated with fluorescent NBD-Cer form NBD-CPE (Fig. 1A, left panel). Depletion of the CPE synthase dSMSr abolished NBD-CPE formation in S2 cell lysates (Fig. 1B, left panel) but had no effect on NBD-CPE formation in intact S2 cells (18). This indicated that S2 cells contain a second, dSMSr-independent CPE synthase. We reasoned that this second enzyme might not be detectable in cell lysates if it required a soluble substrate that is continuously regenerated in living cells. CDP-ethanolamine (CDP-Eth) is an attractive candidate for such a substrate given its role as headgroup donor in PE biosynthesis (23). Indeed, the addition of CDP-Eth dramatically enhanced NBD-CPE formation in S2 cell lysates when Mn2+ ions were present (Fig. 1A, right panel) (18). CDP-Eth-dependent CPE synthase activity was unaffected by dSMSr depletion (Fig. 1B, right panels). Together, these results indicate that Drosophila S2 cells contain two distinct CPE synthases, namely a PE:ceramide ethanolamine phosphotransferase corresponding to dSMSr and a CDP-Eth:ceramide Eth-phosphotransferase of unknown identity (Fig. 2A). The latter enzyme appears unique to insect cells, as the addition of CDP-Eth and Mn2+ to lysates of human HeLa cells did not enhance NBD-CPE formation from NBD-Cer (Fig. 1A, right panel). The insect-specific CPE synthase shares two important features with ethanolamine phosphotransferases of the Kennedy pathway, i.e. the use of CDP-Eth as headgroup donor and a requirement for Mn2+ ions for proper catalysis (24,25). One may therefore anticipate that the enzymes share a certain degree of structural similarity. This provided the starting point of a bioinformatics-based cloning strategy to identify the insect-specific CPE synthase. From now on, we will refer to this enzyme as CPES.
Selection of CPES Candidates from the Insect Database-The reaction catalyzed by CPES is very similar to the one catalyzed by CDP-Eth:diacylglycerol Eth-phosphotransferase (EPT; EC 2.7.8.1) during PE formation via the Kennedy pathway, except that ceramide instead of diacylglycerol serves as acceptor of the phosphoethanolamine headgroup (Fig. 2A). EPT belongs to the superfamily of CDP-alcohol phosphotransferases (NCBI accession number cl00453). Members of this superfamily share the CDP-alcohol phosphotransferase (CAPT) sequence motif D(X)2DG(X)2(A/Y)R(X)8-16G(X)3D(X)3D. In human choline/ethanolamine phosphotransferase CEPT1, the final two aspartates of this motif are essential for catalysis, whereas the remainder of the conserved residues serves a role in substrate affinity or steric stability (26,27).
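One way to make the motif operational, for illustration only, is to translate it into a regular expression and scan candidate sequences with it; the toy sequence below is synthetic and is not a real dCCS protein.

```python
# Illustrative translation of the CAPT motif into a regular expression and a
# simple scan over a protein sequence.
import re

# D(X)2 D G(X)2 (A/Y) R (X)8-16 G(X)3 D(X)3 D
CAPT_MOTIF = re.compile(r"D.{2}DG.{2}[AY]R.{8,16}G.{3}D.{3}D")

def find_capt(seq):
    """Return (start, end, matched substring) for each CAPT-like motif in seq."""
    return [(m.start(), m.end(), m.group()) for m in CAPT_MOTIF.finditer(seq)]

# Synthetic sequence containing one embedded motif-like stretch.
toy_seq = "MKT" + "DAADGLLARVVVVVVVVGAAADLLLD" + "PQRS"
print(find_capt(toy_seq))
```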
The CDP-alcohol phosphotransferase superfamily includes six Drosophila proteins, with three of them showing high similarity to CEPT (CG33116, CG6016, and CG7149), one showing high similarity to PIS proteins (CG9245), and one showing high similarity to CLS proteins (CG4774), whereas one does not show similarity to any protein of known function (CG4585).
We considered the possibility that one of the three CEPT-related proteins might have evolved into a CPES by a change of the acceptor substrate. This hypothesis appeared especially attractive because CG33116 and CG7149 (46% identity) have only one human orthologue and must therefore have originated from a gene duplication specific to the fly lineage (Fig. 2B, sequences 15 and 16). The protein of unknown function (CG4585) also represents an interesting CPES candidate as its homologues are present in insects and other arthropods, but not in most other animal phyla (Fig. 2B, sequence 3). Hence, we considered four CCS in Drosophila, namely three proteins from the CEPT subfamily (dCCS1/CG33116, dCCS2/CG6016 and dCCS3/CG7149) and one protein (dCCS4/CG4585) belonging to a novel protein family with a phylogenetic distribution mainly restricted to Arthropoda (Fig. 2B). These four proteins were next subjected to a detailed functional analysis.
Subcellular Distribution of CPES Candidates-Because bulk production of CPE in Drosophila S2 cells occurs independently of dSMSr (18), it is likely mediated by CPES. Indeed, this would explain why mammalian cells, which lack the latter enzyme, contain only trace amounts of CPE. Efficient CPE production in Drosophila requires the ceramide transfer protein CERT (28), which mediates ER-to-Golgi transport of newly synthesized ceramides (29). This implies that CPES resides in the Golgi, so we first mapped the subcellular distributions of the four dCCS proteins. To this end, V5-tagged versions of these proteins were expressed in Drosophila S2 cells and localized by immunofluorescence microscopy. Both dCCS1-V5 and dCCS2-V5 localized exclusively to the ER, as evidenced by a reticular and nuclear envelope staining that overlapped extensively with the ER marker PE-methyltransferase (Fig. 3A, top). In contrast, the bulk of dCCS3-V5 and dCCS4-V5 was found in punctate structures containing the Golgi marker dGMAP, suggesting that these two proteins are at least partially associated with the Golgi (Fig. 3A, bottom). Some dCCS4-V5 was occasionally found at the plasma membrane, which is not unusual for Golgi proteins expressed at a high level. Localization studies in human HeLa cells produced very similar results; dCCS1 and dCCS2 localized exclusively to the ER, whereas dCCS3 and dCCS4 partially colocalized with the Golgi marker GM130 (Fig. 3B). Hence, unlike dCCS1 and dCCS2, dCCS3 and dCCS4 each meet at least one additional characteristic of CPES.
dCCS4 Corresponds to the Elusive CPES-We next screened dCCS proteins for CPES activity. As a first approach, Drosophila S2 cells were treated with dCCS-targeting dsRNAs to deplete individual dCCS proteins, lysed, and then incubated with NBD-Cer in the presence of CDP-Eth and Mn2+ ions. Formation of NBD-CPE was monitored by TLC. dsRNA targeting GFP served as control. The efficiency of depletion was verified by immunoblotting of dsRNA-treated S2 cells expressing individual V5-tagged dCCS proteins (Fig. 4A). Contrary to removal of dCCS1, dCCS2, or dCCS3, depletion of dCCS4 caused a major (~60%) reduction in CPES activity (Fig. 4B). When incubated with CDP-[14C]Eth in the presence of Mn2+ ions, lysates of dCCS4-depleted cells synthesized only a minor (~25%) fraction of the radiolabeled CPE formed in lysates of control (dsGFP-treated) cells (Fig. 4C). In addition, loss of dCCS4 caused a substantial drop in de novo synthesis of CPE as monitored by metabolic labeling of S2 cells with [14C]Eth (Fig. 4D). This was accompanied by a defect in cell growth.5 Together, these results indicate that Drosophila S2 cells require dCCS4 for CDP-Eth-dependent CPE production and growth.
To investigate whether dCCS4 is not only required, but also directly responsible for CDP-Eth-dependent CPE formation, we next analyzed its ability to synthesize CPE in human HeLa cells. When added to HeLa cell lysates, NBD-Cer is converted to NBD-CPE by the CPE synthase SMSr (18) and the dual specificity SM/CPE synthase SMS2 (17). The addition of CDP-Eth has no effect on NBD-CPE formation because SMSr and SMS2 each use PE as headgroup donor. However, the addition of CDP-Eth to lysates of HeLa cells expressing dCCS4 caused a dramatic increase in NBD-CPE formation (Fig. 5A). This increase was strictly dependent on the presence of Mn2+ ions. Moreover, when incubated with NBD-Cer and CDP-[14C]Eth simultaneously, lysates of dCCS4-expressing HeLa cells, but not of control cells, supported formation of radiolabeled NBD-CPE (Fig. 5B). In sum, these findings indicate that dCCS4 corresponds to the elusive CDP-Eth:ceramide ethanolamine phosphotransferase or CPES in Drosophila.
CPES Structure and Topology-Homologues of CPES occur in a variety of Arthropoda, including flies, mosquitoes, bees, spiders, and mites. In addition, CPES homologues are present in at least two species of Cnidaria, i.e. Hydra and sea anemones. All CPES homologues contain a highly conserved CAPT motif (Fig. 6A). However, what distinguishes the CAPT motif in CPES from those present in CEPT, PIS, and CLS proteins is the insertion of 7-8 additional amino acid residues between the invariant Arg and the second invariant Gly residue. Although no tertiary structure is available for any of these enzymes, it is conceivable that this change in spacing corresponds to a change in specificity for the acceptor substrate, namely from diacylglycerol (for CEPT, PIS, and CLS) to ceramide (for CPES).
Apart from the CAPT motif, CPES does not display any obvious sequence similarity with other members of the CDP-alcohol phosphotransferase superfamily. Hydrophobicity analysis using a combination of different methods (Octopus (Stockholm Bioinformatics Center); TMHMM Server v. 2.0; Phobius (Stockholm Bioinformatics Center)) predicted six membrane-spanning α-helices connected by hydrophilic regions that would form extramembrane loops (Fig. 6B). The hydrophilic CAPT motif is situated between the second and third membrane span. The amino and carboxyl termini of CPES are predicted to face the luminal or exoplasmic space. This would position the CAPT motif in the Golgi lumen (Fig. 6F). When heterologously expressed in HeLa cells, a significant portion of dCCS4-V5 reaches the plasma membrane (Fig. 3B). According to the model (Fig. 6F), this would result in exposure of the C-terminal V5 epitope on the cell surface. To test this prediction, HeLa cells expressing dCCS4-V5 were immunostained with anti-V5 antibodies either before (Fig. 6C, intact) or after fixation and permeabilization (Fig. 6C, perm.). In line with the model, intact cells displayed a cell surface staining, whereas in permeabilized cells, both the plasma membrane and the Golgi were stained (Fig. 6C). Moreover, trypsinization of intact cells caused a substantial loss of V5-tagged dCCS4 (Fig. 6D), consistent with an exoplasmic orientation of the C-terminal V5 epitope. These results suggest that the active site of CPES is facing the luminal or exoplasmic space. To verify this directly, HeLa cells expressing dCCS4-V5 were analyzed for their ability to catalyze CDP-Eth-dependent CPE synthesis on the cell surface. For this approach, cells were incubated at 5°C to block endocytosis, and NBD-Cer was added to the medium, which was supplemented with BSA to extract any newly formed NBD-labeled lipid from the cell surface. Under these conditions, cells synthesized NBD-SM and trace amounts of NBD-CPE because of the plasma membrane-associated and bifunctional SM/CPE synthase, SMS2 (17). However, the external addition of CDP-Eth and Mn2+ ions greatly stimulated NBD-CPE formation (Fig. 6E). This stimulation was strictly dependent on dCCS4. Permeabilization of the plasma membrane by streptolysin did not result in any further stimulation of CDP-Eth-dependent NBD-CPE formation (Fig. 6E). From these results, we conclude that dCCS4/CPES catalyzes CPE production on the exoplasmic side of the membrane, which is thus consistent with the model depicted in Fig. 6F.
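As a simplified stand-in for the dedicated predictors named above (Octopus, TMHMM, Phobius), a sliding-window Kyte-Doolittle hydropathy scan conveys the same idea: stretches of roughly 19 residues with high average hydropathy are candidate membrane spans. The sequence and cutoff below are illustrative, not taken from dCCS4.

```python
# Sliding-window Kyte-Doolittle hydropathy scan (illustrative only).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydropathy_profile(seq, window=19):
    """Mean Kyte-Doolittle hydropathy over a sliding window (window must be odd)."""
    scores = [KD[aa] for aa in seq]
    half = window // 2
    return [sum(scores[i - half:i + half + 1]) / window
            for i in range(half, len(seq) - half)]

def candidate_spans(seq, window=19, cutoff=1.6):
    """Window centers whose mean hydropathy exceeds the cutoff."""
    prof = hydropathy_profile(seq, window)
    return [i + window // 2 for i, v in enumerate(prof) if v > cutoff]

# Synthetic sequence with two hydrophobic stretches flanked by polar segments.
toy_seq = "MNDSE" + "LLVILAVILLAVILAVILL" + "KRDQE" + "ILVAVLLAVIALLVIVLAI" + "SDEKR"
print(candidate_spans(toy_seq))
```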
Phylogenetic Distribution of CPES and SMS Enzymes-Our data predict that organisms containing a CPES homologue would produce bulk amounts of CPE, analogous to SM being an abundant membrane component in organisms harboring SMS. Indeed, as illustrated in Fig. 7, the presence of these enzymes in different organisms correlates well with their membrane lipid composition. Vertebrates and nematodes lack CPES homologues but contain multiple SMS enzymes (11). Accordingly, they have SM as their principal phosphosphingolipid and produce only trace amounts of CPE through SMSr and SMS2 (17,18,38,39). Arthropods in general have CPES in addition to SMS and therefore produce bulk amounts of both CPE and SM (2, 4, 40-45). However, within the Arthropoda, the fly lineage (Brachycera) has lost SMS and thus the ability to synthesize SM. Flies therefore have CPE as their only phosphosphingolipid.
Ceramide aminoethylphosphonate (CAEP) is structurally very similar to CPE, except that the phosphate is replaced by a phosphonate with a direct C-P bond connecting the phosphonate and ethanolamine (30,31). The nonhydrolyzable C-P bond would enhance stability of the lipid, especially with respect to phospholipases. It is conceivable that CPES mediates CAEP production in Cnidaria and Mollusca. Interestingly, the genomes of sea anemone, Hydra, and the Pacific oyster Crassostrea gigas contain CPES homologues with Ala replaced by an aromatic amino acid (Tyr or Phe) in an otherwise perfectly conserved CAPT motif (Fig. 6A; data not shown). This amino acid substitution may be linked to the need to transfer aminoethylphosphonate instead of phosphoethanolamine from the donor to the acceptor substrate during CAEP biosynthesis. Free 2-aminoethylphosphonate, a likely precursor of CAEP, has been found in sea anemone (32), supporting this hypothesis. Based on the currently available sequence information, it appears likely that CPES in Mollusca catalyzes production of both CAEP and CPE.
Concluding Remarks-In this study, we identified CPES, a Golgi-resident enzyme responsible for bulk production of the SM analogue CPE in Drosophila. Strikingly, CPES is unrelated to members of the SMS family, which synthesize SM and trace amounts of CPE in vertebrates and nematodes. Instead, CPES shares a similar reaction mechanism with the ethanolamine phosphotransferase that mediates PE biosynthesis via the Kennedy pathway. Common features include: 1) the use of CDP-Eth as headgroup donor in the enzymatic reaction; 2) dependence on Mn2+ ions for catalytic activity; and 3) the presence of a CDP-alcohol phosphotransferase or CAPT motif. However, apart from the CAPT motif, CPES does not share any sequence similarity with other members of the CDP-alcohol phosphotransferase superfamily. CPES homologues occur in Arthropoda (insects, spiders, mites, scorpions), Cnidaria (Hydra, sea anemones), and Mollusca (oysters), but not in most other animal phyla (Fig. 7). Another feature that sets CPES apart from all previously identified CDP-alcohol phosphotransferases is that its active site appears to be situated on the exoplasmic surface of the membrane. This implies that CPE biosynthesis in Arthropoda, Cnidaria, and Mollusca relies on the presence of a membrane transporter involved in moving CDP-ethanolamine from the cytosol into the Golgi lumen. Although the identity of this transporter remains to be established, its presence in organisms containing CPES may explain why functional expression of this enzyme in mammalian cells is not sufficient to allow bulk production of CPE.5 Molecular cloning of the CDP-ethanolamine transporter is the subject of ongoing studies. The present identification of CPES provides a novel opportunity to address the biological role of CPE, an enigmatic lipid with a widespread occurrence in the animal kingdom. | 5,679.8 | 2013-02-28T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Multiculturalism in Chang-Rae Lee’s Native Speaker: A Sociological Perspective
Deep into the novel, an inarticulate sense of unease in the psyche of Henry Park is explored: he is profoundly disturbed and an outcast. Trapped in his Korean-American identity, he has left on his wife Lelia the impression of an 'emotional alien', a 'yellow peril: neo-American', a 'stranger/follower/traitor/spy'. In addition, she calls him a 'False speaker of Language' because Henry seems to listen to her attentively, following and executing her language word by word like a non-native speaker. In fact, the cultural differences between Korean-Americans and native-born Americans create tension around the ways the English language is used.
from the top of the church roof at the end of the speech. Therefore, such disagreement between the white and Oriental generations is never resolved easily.
Critical Discussion
It is more than one hundred years since the first Korean immigrants arrived in America. Immigrant Koreans arrived in the country in three distinct groups over the last century. The first wave of Korean immigration began with the landing of the S.S. Gaelic in Honolulu Harbor on January 13, 1903. Shinae Chun, Director of the Women's Bureau, U.S. Dept. of Labor, notes that 120 men, women, and children, who made up the first significant group of Korean-Americans, were carried to the harbor by a boat. This group became low-wage laborers on Hawaii's growing sugar plantations. The second wave of Korean immigration began during the Korean War (1950-1953), when the brides of U.S. servicemen arrived in the United States, thanks to the War Brides Act of 1946. In 1952, the McCarran-Walter Act allowed Asians to immigrate in small numbers and eventually to become U.S. citizens. The largest wave of immigration from Korea, and the largest wave of immigration from all of Asia, began with the passing of the Immigration Act of 1965. For the first time in U.S. history, immigrants from around the world, including Asia, were allowed to enter the United States in substantial numbers (Chun 2003).
Koreans are hardworking people. They open their own small businesses, and Korean-owned greengrocers, restaurants, and dry cleaners can be seen throughout the country (Chun 2003). Koreans' success lies in cultural characteristics that value hard work. These characteristics include speaking quietly and little. Korean culture promotes strict instructions of "being quiet", "keeping secrets", "exquisite control over face muscles", "working before sunrise to the death of night", regarding "family as a Korean's life, though rarely sees them", and other features that lead the narrator of the novel to describe Koreans as "difficult people".
Chang-Rae Lee's Native Speaker is a profound investigation into the complicated layers of one's personality. The narrator is aware of his "hundredfold" (192) Korean identity. He generously takes the reader's hand and leads a tour through the hidden dark corners of his mind, showing the reader his "secret rooms", his "identity". This "self-reflexive" feature of the novel helps readers to discover as much mystery in their own lives as their capacity allows. The narrator wants "to come out, step into the light, bare himself" (190). Henry, the main character of the novel, has "been raised to speak quietly and little." He is an actor. Since his childhood, he has been taught to play roles by his father, for his father is an actor, too. "Were I an actor, I would have all the material I required" (130).
Henry's language is his main source of problems. He envies how other kids, his black, young Jewish, and Italian friends, talk confidently. As a kid he cannot express himself well; therefore, he mostly listens. When he is six years old, he is sent to the Speech Therapy room among other non-native speakers of English to learn to pronounce English sounds. But he permanently feels this lack of command of the English language. This lack of command brings a lack of confidence for him as a kid, a teenager, and then as a husband. Henry knows the values that language institutionalizes in people and is aware of his linguistic shortcomings; therefore, he feels a crisis which originates from his hybrid identity, his Korean legacy, and his immigrant parents. The issue of language will be discussed further in the present essay.
Henry's parents are good examples of a traditional Korean family. His mother is the most difficult person: "My mother was the worst. She was an impossible woman. Of course she was a good mother. I think now she treated it like a job. She wasn't what you'd call friendly. Never warm", "She believed that displays of emotion signaled a certain failure between people" (28).
However, Henry loves his mother. When his father disputes with his wife, he speaks in English, as if he uses the English language as a power, a weapon, against his wife: "He used to break into it when he argued with my mother, and it drove her crazy when he did and she would just plead, 'No, no!' as though he had suddenly introduced a switchblade into a clean fistfight." Therefore, Henry breaks into their arguments and starts "yelling at him, making sure I was speaking in complete sentences about his cowardice and unfairness, shooting back at him his own medicine… using the biggest words I knew, whether they made sense or not" (58).
Both Henry and his father exploit the English language as a weapon against each other. The one who masters the language wins. Therefore, language is a power for those who master it. There is a kind of language hierarchy in this novel. Starting from the bottom, Henry's mother does not know English at all; as a result, she is defeated by her husband in their arguments. The father knows a little English; therefore, he cannot find a job he deserves in Western society. Henry is not a native speaker in America though he is born in New York City; he struggles with communication problems with Lelia, his American wife. Finally, at the top, Lelia is a language expert. Jacques Lacan argues, "As the speechless infant develops into a speaking being, a subject, he experiences feelings of both loss and power" (Lacan 1977). Loss because, Lacan argues, the infant begins to speak in reaction against unfulfilled desires, relinquishing speechlessness to voice his needs because those needs are not being (immediately) met. In beginning to speak, he experiences power because in asserting himself in speech he discovers that words can sometimes have a transitive effect on the world. "What is attractive to the literary critic about the Lacanian account of subjectivity is its emphasis on language twinned with its openness to the sense that an entire social system might just be at work in the process of learning to speak" (Robbins 2005). Robbins refers to 'Feral Children' who experienced no human contact or normal socialization during their childhood: "The documented cases demonstrate a continuity of features: when the children are rescued, they are rarely able to learn to speak, seemingly rendered incapable of hearing the significance of human speech because of the developmental failures consequent on being deprived of human contact. They can possess neither the logos nor even a minimal capacity for the exchange of signs" (Robbins 2005). He argues that children who have been deprived of human contact are not able to speak, and consequently they cannot communicate in human ways. Being unable to use human language, they cannot even think, because thoughts come through language.
Henry cannot express himself well; he remains silent, and consequently he is accused by Lelia of being a "student of life, illegal alien, emotional alien" (5). Henry is forever uncertain of his place in Western society. As a Korean-American, he is scared that he has betrayed both cultures and belongs to neither. Robbins discusses further that the kinds of identities "we have developed and lived within are intimately connected to the societies we live in: we draw our models of selfhood from the models that our culture makes available to us" (Robbins 2005). Korean language, like any other language, is produced through Korean cultural instructions. For instance, 'there are no exact words' for privacy and private space in Chinese (Chang 1993). The lack of a word indicates that there is no concept for it in that particular language, and therefore no such cultural value. Likewise, there is no concept of 'individualism' in Korean culture. Because of this missing concept of individualism, Korean identity is collective. There is, for example, no space for first names in Korean culture. Wives and husbands do not call each other by name; instead, they call each other 'spouse'. "But then he never even called my mother by her name, nor did she ever in my presence speak his. She was always and only 'spouse' or 'wife' or 'mother'; he was 'husband' or 'father' or 'Henry's father'" (63).
A Korean woman comes to work for Henry's father after the death of his mother. She works for them for 20 years. Once Lelia asks Henry about the woman's name, but Henry does not know it. Lelia is shocked: "This woman has given twenty years of her life to you and your father and it still seems like she could be anyone to you. It does not seem to matter who she is" (60). Henry and his father call the woman simply 'Ah-juh-ma', which literally means 'aunt'. She bears a generic name, not a personal one. This reveals that they do not regard her as an autonomous individual who deserves a personal name. This is not about the woman herself but about Korean culture, which does not accommodate individualism.
In traditional Korean culture, the 'self' is a series of masks and public gestures. Cultural value is placed on characteristics such as hard work, quietness, silence, role playing, and keeping secrets, which lead Westerners, at least in the beginning, to regard Koreans as strangers and 'difficult people'. Korean workers are seen as hard workers who work grueling 18-hour days, 7 days a week, and who are perceived as taking over all the job opportunities in American society. Korean success in business creates conflict with other minority groups. "The most highly visible example of racial tension began on April 29, 1992 in South Central Los Angeles, California, when African American customers revolted violently against Korean American merchants. The result of a long series of racially-charged events, the three days of violent chaos would prove to be the most destructive riot in U.S. history" (Chun 2003).
Henry's father is consistently portrayed throughout the novel as a model Korean immigrant. "For him, the world (this land, his chosen nation) operated on a determined set of procedures, certain rules of engagements. These were the inalienable rights of the immigrants" (43-4). He admires Americans, or any other person, for their appearance, but not openly. He makes fun of, for example, Joe Namath, a TV showman, but he uses the same things Joe does, "the little green bottle of musky potion that Joe also used". "He would have probably admired John Kwang, at least for his appearance.
Though not openly, of course" (127). He has no good command of the English language; therefore, he cannot communicate well with Americans, especially blacks. Not being fluent in English, he is not able to work in the profession for which he was trained in Korea: he has a master's degree in engineering, but he works in a vegetable store. He is a family man, but pays little attention to his wife's and his son's emotional needs. He has not learnt to love, as Henry says: "For most of my youth I was not sure that he had the capacity to love. He showed great respect to my mother to the day she died (I was ten) and practiced for her the deepest sense of duty and honor, but I never witnessed from him a devotion I could call love. He never kissed her hand or bent down before her" (53). Henry is an actor. He plays different roles well, hides his emotions, keeps his secrets, and tells a lot of lies to Lelia, his wife (6). In a word, he is a "spy". Therefore he can occupy a spy position: "I had always thought that I could be anyone, perhaps several anyones at once. Dennis Hoagland and his private firm had conveniently appeared at the right time, offering the perfect vocation for the person I was, someone who could reside in his one place and take half-steps out whenever he wished." He is satisfied with this kind of job, which requires qualities Henry already possesses, such as role playing. "I thought I had finally found my truest place in the culture" (118).
He is an industrial spy for a strange firm with unknown clients. "Our clients were multinational corporations, bureaus of foreign governments, individuals of resource and connection" (16). The individual he is spying on is called a 'subject'; subjects could be "foreign workers, immigrants, first-generationals, neo-Americans". "We generated background studies, psychological assessments, daily chronologies, myriad facts and extrapolations. These in extensive reports." (16).
Dennis Hoagland, the boss, makes the plans and appoints the spies for appropriate missions, or "assignments", as Henry calls them. John Kwang is his new subject. "I can write three or four pages on my subject and then another page of breezy analysis in less than an hour. I am supposed to do it this way, precisely but fast" (189). At the beginning, Kwang is only a 'subject' and does not possess any meaningful identity for Henry. Henry is a dutiful spy: "I am to be a clean writer, of the most reasonable eye, and present the subject in question like some sentient machine of transcription." He does not permit his morals to interfere with his job. Judgment is not his task: "I leave to the unseen experts the arcane of human interpretation" (189).
Although Henry knows his job, "his appointed plan", well, he approaches Kwang's inner self, "who he was to himself, the man he beholds in his most private mirror", which is not a wise thing for a professional spy to do. Henry is impressed by Kwang's personality, unlike what he thought at first when Hoagland mentioned Kwang's name. "I thought I could peg him easily; were I an actor, I would have all the material I required" (130).
Henry draws many parallels between his father and John Kwang. Kwang is a Korean politician; though there are some significant differences between the two men, Kwang displays some cultural similarities as a Korean immigrant. Henry notices that some of Kwang's physical characteristics resemble his father's, as if the narrator were suggesting that Koreans of any level of literacy, political standing, or age have a lot in common. For example, Kwang is a man of family, like Henry's father. "His neatly clipped black hair … reminded me of my father's head" (125), their way of dressing (127), and their "short Korean legs" (128).
However, working with Kwang, Henry somehow finds a new family. When he reports to Hoagland, he feels he is reporting something private about his own parents, "as if I were offering a private fact about my father or mother to a complete stranger in one of our stores" (137). Henry feels a kind of love, a family bond, with Kwang, and says "perhaps this was because John Kwang constantly spoke of us as his own, of himself as a part of us. Though he rarely called you a brother, sister, son" (137).
Kwang introduces Korean culture to Henry from a new angle and acquaints him with his legacy. He remarks, "we misapply what our parents taught us" (180), or reminds Henry, "Don't be so hard on your father… likely I know, you are right. But I understand his feeling more than I ever have" (183). Through his sessions with Luzan (Henry's previous subject) and his work for Kwang, Henry asks himself, "Who have you been all your life?" (191). Uncertainties grow in his consciousness: "in some moments then, I don't know how long, exactly, I forgot the entirety of what I was doing. I lost-or better, misplaced-the very reason why I was there…there was nothing to report, certainly, nothing worth commentary" (184), which can well be described as an identity crisis. Henry does not report on Kwang in the way Hoagland expects him to. "But there is one more version I want to write for Hoagland, for the client, for the entire business of our research. The greater lore I can now see" (196). The ethical conflicts of Henry's job are never really resolved. "For how do you trail someone who keeps you so close? How do you write of one who tells you more stories than you need to know? Where do you begin, and where are you able to end?" This new version is "the leap of his identity no one in our work would find valuable but me" (196). Henry discovers that, throughout his work on this 'assignment', he was in fact searching for his own identity: "I am here for the hope of his identity, which may also be mine". He is looking for his true self, "to catch a glimpse of who I truly was" (300).
Lelia is white, a native speaker, "the standard-bearer" (11) of the English language. She comes from an entirely different background and has no language problem; she is a native speaker and a language teacher. She stands at the opposite end of the spectrum from Henry's language shortcomings, and she is equipped with what Henry lacks: the power of language.
Before anything else, Henry notices how closely he listens to her: "She could really speak…she was simply executing the language. She went word by word" (9).
Henry knows the values and power that language confers on people, and he is aware of his own language shortcomings. He therefore chooses Lelia, and her linguistic abilities, to construct his identity as an American. Their marriage can be read as a symbol of the unification of East and West, the very goal of politicians like Kwang in American society. Such unification, however, is no easy feat. It takes Lelia ten years to recognize some of Henry's characteristics and write them down on a piece of paper: "Surreptitious, B+ student of life, illegal alien, emotional alien, Yellow peril: neo-American, stranger, follower, traitor, spy [...]" (5), or so says his wife in the list she writes upon leaving him. At the top of her list, Henry is "surreptitious". Indeed, his Korean cultural upbringing taught him to remain as quiet, silent, and mysterious as possible. By such manners, Koreans are hurt as much as Westerners: "We perhaps depend too often on the faulty honor of silence; use it too liberally and for gaining advantage" (89). To clarify this kind of cultural 'silence' on which Koreans depend, and to demonstrate Korean and Western cultural differences, it is useful to refer to the scene in which Janice Pawlowsky, the Scheduling Manager in Kwang's office, shares her love experience with Henry. She tells him she loved John Kim, a Korean college student, and how they broke up. She wants to go back to her parents' home in Chicago and asks John to go with her, but he refuses.
They have a huge fight: "Actually, I mostly yelled at him. He wouldn't say anything back. I called him later from Chicago but he wasn't home and his mother answered. She had no idea who I was. She never knew I existed. He never told them." John's mother tells Janice "very politely that I shouldn't call back" (88). Janice does not understand him; he never calls her. She asks Henry, "Is it a Korean thing? I mean, what kind of person does that? Everything was great between us. We had great sex, too, and that doesn't happen a lot in college. But now I have to think none of it was very good. It was like he'd done his time with me, with a white girl, and then it was over. I almost still hate him." (88). Henry thinks to himself, "I knew I could have tried to comfort her, perhaps telling her how John Kim was probably just as hurt as she was and that his silence was more complicated than she presently understood" (88). However, he keeps silent again.
Henry compares Janice and John Kim's failed love story with his own: "I showed Lelia how this was done, sometimes brutally, my face a peerless mask, the bluntest instrument" (89). This Korean silence and inability to talk freely seems to stem from an old misunderstanding rooted in language and culture. Talking with Lelia, Henry feels, "I tried to answer but I couldn't. I wanted to explain myself, smartly, irrefutably. But once again I had nothing to offer" (118). Like most immigrants, he lacks the power of the English language. Henry and John suffer from the same cultural problem of 'silence'. Henry envies how other kids (his black, Jewish, and Italian friends) talk confidently with their parents: "When I was a teenager, I so wanted to be familiar and friendly with my parents like my white friends were with theirs. You know, they'd use curses with each other, make fun of each other at dinner, maybe even get drunk together on holidays." (205). But Korean teenagers cannot share their love experiences with their parents. Mostly, Korean parents are unaware of their children's relationships with friends of the opposite sex, because such relationships are not traditionally and culturally approved of. Familial affection is marked by its total absence. Ideas such as 'falling in love' are considered 'shameful', a family disgrace. Marriage is seen above all as a duty, an arrangement between two families. Henry is increasingly aware of the formality of his relationship with his father in particular. This is both a cultural issue (fathers and children are not supposed to show each other physical affection in Korean culture) and a personal one. Once Henry tells Lelia, "I wanted just once for my mother and father to relax a little bit with me. Not treat me so much like a son, like a figure in a long line of figures. They treated each other like that, too. Like it was their duty and not their love" (205).
The kind of person one becomes is a very direct result of the childhood one lives through (Robbins 2005). Although Henry is brought up in America, he lives within a Korean home, with Korean language and culture. Hybridity is central to the story, which blends traditional and new elements. He is not American enough to tolerate his individualism, nor Korean enough to knock that individualism out of him. He suffers from an identity crisis, one that arises from living on the border between cultures and from exploring possible meanings of identity in two very different cultural models, the Korean and the Western.
There are many differences between Lelia and Henry, which they ignore at first; but their son's death leads Lelia to look carefully at Henry and notice these differences, which at first appear to her as his shortcomings. It then seems to her that their only common reason to live with each other (their son) has been removed, so she leaves him and goes to the islands for a while.
Henry does not talk about his son's death directly, which shows how difficult it is for him to talk about his deepest feelings. He approaches the issue cautiously, and only after providing a long background: "Our boy, Mitt, was exactly seven years old when he died." He suffers in his silence. He does not talk about his suffering and pain with his wife, and Lelia therefore assumes that Henry is indifferent to their son's death. They do not talk about Mitt until Lelia comes back from the islands. Lelia tells Henry, "Just think about it. You haven't said his name more than four or five times since it happened. You haven't said his name tonight. Maybe you've talked all this time with Jack about him, maybe you say his name in your sleep, but we've never really talked about it, we haven't really come right out together and said it, really named what happened for what it was." Again, their relationship suffers from Henry's Korean silence, his avoidance of showing his feelings.
Among the other features in Lelia's list, there is a dangerous word: "stranger". The very existence of this word reveals that Lelia (the West) does not know Henry (the East) well enough to have a healthy coexistence with him. In his speech in front of the church, Kwang refers to this strangeness as a source of misunderstanding: "whom you believe to be other, the enemy, the cause of the problem in your life. Those who are a different dark color. Who may seem stranger. Who cannot speak your language just yet. Who cannot seem to understand the first thing about who you are" (141). He is a peacemaker, but such recognition is difficult to achieve in reality; it is easy enough to talk about in theory, as the audience sees when two kids shoot at them at the end of Kwang's speech.
Therefore, such disagreement between the white and Oriental generations never ends effortlessly, because important economic and political factors are involved, and social pressures should also be considered in analyzing the situation. On April 29, 1992, African American customers revolted violently against Korean American merchants. This tragedy shook the Korean American population to the core. Of the $850 million in estimated property damage, Korean Americans sustained 47%, or $400 million, and of the 3,100 businesses destroyed, approximately 2,500 were owned by Korean Americans (Chun 2003). In fact, the conflict was the result of a far more complex situation involving two minority groups who were both plagued by a history of racial inequality and oppression.
In spite of the tragic events, the Korean American community also made hopeful progress as a direct result of the riots. For the first time in American history, Korean Americans found a unifying voice. They organized to stop the rioting, to ultimately find peace with their African American neighbors, and to move ahead in cooperation with one another (Chun 2003).
Conclusion
Identity is not something fixed from the day of birth. Julia Kristeva's terminology can be applied here to identity, though she uses it for subjectivity. She employs the term 'subject in process' and argues that "subjectivity is never complete, but is always in the process of being made and remade by the competing forces of the symbolic order and the semiotic, a space made up of the demands of the body and of the non-signifying parts of language, such as speech rhythms, sounds, intonations and other non-semantic gestures" (Kristeva 1984). Identity is made and remade in a continual process. To find out who one is, one must work out who one has been and how one has come to be that person. Culture and language are among the most significant factors in this process of identity-making. People who live in foreign countries as migrants, refugees, and exiles are exposed to two languages and cultures that are sometimes completely different; in this respect, Koreans in America live in a culture dominated by a white hegemony.
Language, as the title of the novel shows, is the central issue for the narrator, around which the other significant subjects revolve; nothing can find its true meaning unless this main problem is decoded. When Henry comes to terms with his own native language, he finds his place in both the Korean and Western worlds. Kwang and Lelia help him in their own ways. Kwang makes him familiar with Korean culture, language, old songs, lyrics, and music, and shows him how he can live in a Western society, be an American citizen, and still be a Korean. "They are every shape and color but they still share this talk, and this is the other tongue they have learned, this must be the special language" (316). Kwang, as a | 6,554.8 | 2021-01-26T00:00:00.000 | [
"Sociology",
"Linguistics"
] |
A Novel Triple-Mode Bandpass Filter Based on a Dual-Mode Defected Ground Structure Resonator and a Microstrip Resonator
A novel triple-mode bandpass filter (BPF) using a dual-mode defected ground structure (DGS) resonator and a microstrip resonator is proposed in this paper. The dual-mode characteristic is achieved by loading a defected T-shaped stub onto a uniform impedance DGS resonator. A uniform impedance microstrip resonator is designed on the top layer above the DGS resonator, and a compact bandpass filter with three resonant modes in the passband can be achieved. A coupling scheme for the structure is given and the coupling matrix is synthesized. Based on this structure, a triple-mode BPF with a central frequency of 2.57 GHz and an equal-ripple bandwidth of 15% is designed for the Wireless Local Area Network. Three transmission zeros are achieved at 1.48 GHz, 2.17 GHz, and 4.18 GHz, respectively, which improve the stopband characteristics of the filter. The proposed filter is fabricated and measured. Good agreement between the measured and simulated results verifies the proposed structure.
Introduction
Recent decades have seen the rapid development of wireless technology; as a result, there is an increasing demand for high-performance microwave filters [1]. Dual-mode resonators have found their way into filter applications and gained increasing popularity for their capability to reduce the number of resonating components. Since a microstrip ring dual-mode bandpass filter (BPF) was first proposed by Wolff in 1972 [2], various forms of dual-mode microstrip resonators and filters have been reported, including square patch resonators [3], square loop resonators [4], triangular loop resonators [5], and hexagonal loop resonators [6]. Degenerate modes are excited by perturbations within dual-mode resonators. Dual-mode or triple-mode characteristics can also be achieved by loading a stub onto a resonator [7]. When the size of the loading stub is changed, the even-mode resonant frequencies can be easily controlled, whereas the odd-mode resonant frequencies remain almost unchanged. Recently, microwave circuits have increasingly been designed on the ground plane [8][9][10], such as slotlines with defected ground structure (DGS) stubs [11], defected resonators [12], and defected stepped impedance resonators [13]. This provides a novel way of realizing dual-mode and multimode filters by fully utilizing the printed circuit board. Multimode filters can also be realized by using dual-mode resonator doublets [14]. In our previous work [15], a four-mode BPF was achieved by combining two dual-mode microstrip resonators: by setting two stub-loaded dual-mode resonators in parallel, a BPF with four poles was realized. However, the resonators could be arranged more compactly to minimize the circuit size.
In this paper, a novel triple-mode bandpass filter is proposed using a dual-mode defected ground structure (DGS) resonator and a microstrip resonator. A coupling scheme for the filter is given and the coupling matrix is synthesized. The ideal response of the filter agrees well with the simulated results. Compared with a traditional filter of the same characteristics, the size of the proposed filter is reduced by approximately two-thirds. Three transmission zeros are achieved at 1.48 GHz, 2.17 GHz, and 4.18 GHz in the stopband of the filter, which greatly improve the selectivity and rejection of the filter. The proposed BPF is simulated, implemented, and measured. Good agreement is observed between the simulated and measured results.
Theoretical Analysis
2.1. Analysis of Filter Structure. Figure 1 shows the 3D structure of the proposed triple-mode filter. The filter structure can be divided into three layers: the top layer is covered with a microstrip open-loop resonator and a pair of microstrip feed lines, the middle layer is the substrate, and the bottom layer is the metal ground loaded with a dual-mode DGS resonator. The microstrip resonator on the top layer and the DGS resonator on the bottom layer are both directly coupled to the microstrip feed lines. Since the microstrip resonator and the DGS resonator are located on different layers of the circuit, the space of the circuit is fully utilized and the size of the filter is reduced.
The coupling scheme of the triple-mode filter is presented in Figure 2. The dark circles and the white circles indicate the resonant modes of the resonators and the source/load, respectively. Mode 1 is generated by the microstrip resonator; modes 2 and 3 are the even and odd modes of the dual-mode DGS resonator [16]. These modes are all directly coupled to both the source and the load. The coupling between the microstrip resonator and the source/load can be modified by changing their distance and overlap length. The coupling between the dual-mode resonator and the source/load can also be tuned by changing the location of the resonator. Commonly, the coupling between the microstrip resonator and the source or load, and the coupling between the even mode of the dual-mode DGS resonator and the input or output, are all positive. The coupling between the odd mode of the dual-mode DGS resonator and the source is positive, while the coupling between this mode and the load is negative. The dashed line indicates the coupling between source and load, which is determined by the gap between the input and output microstrip lines. The corresponding coupling matrix of the coupling scheme follows from these couplings. Due to the symmetrical geometry of the proposed filter, the coupling coefficients satisfy M_S1 = M_1L, M_S2 = M_2L, and M_S3 = -M_3L. A transmission zero is produced by the coupling between source and load, which improves the selectivity of the proposed BPF. Therefore, the generalized coupling matrix for the proposed BPF with a central frequency of 2.57 GHz can be obtained on the basis of the synthesis approach in [17]. The synthesized scattering characteristic of the proposed filter is shown in Figure 3. The solid line and the dashed line indicate the insertion loss and return loss, respectively. Three transmission poles are clearly observed in the passband of the filter. In addition, three transmission zeros are created at 1.41 GHz, 2.17 GHz, and 4.18 GHz, which improve the selectivity in the transition band and the attenuation in the stopband. The return loss in the passband is larger than 20 dB, and the minimum insertion loss in the stopband is close to 20 dB or greater.
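The filter response implied by such an (N+2)x(N+2) coupling matrix can be checked numerically. The sketch below is illustrative only: the matrix entries are invented placeholders for a triple-mode topology of this kind (the synthesized matrix of the paper is not reproduced in the extracted text), and the sign conventions follow one common formulation of the coupling-matrix method.

```python
import numpy as np

def coupling_matrix_response(M, f, f0=2.57e9, fbw=0.15):
    """S-parameters of an (N+2)x(N+2) normalized coupling matrix M.

    Rows/columns are ordered source, modes 1..N, load. f is an array of
    frequencies in Hz; f0 and fbw are the centre frequency and fractional
    bandwidth used in the lowpass-to-bandpass mapping.
    """
    n = M.shape[0]
    U = np.eye(n); U[0, 0] = U[-1, -1] = 0.0         # identity, zeroed at the ports
    R = np.zeros((n, n)); R[0, 0] = R[-1, -1] = 1.0  # unit port terminations
    lam = (f / f0 - f0 / f) / fbw                    # lowpass frequency variable
    s11 = np.empty(len(f), dtype=complex)
    s21 = np.empty(len(f), dtype=complex)
    for i, l in enumerate(lam):
        A = l * U - 1j * R + M
        Ainv = np.linalg.inv(A)
        s11[i] = 1 + 2j * Ainv[0, 0]
        s21[i] = -2j * Ainv[-1, 0]
    return s11, s21

# Illustrative (made-up) 5x5 matrix: source and load couple to all three
# modes, the odd DGS mode couples with opposite signs to source and load,
# and a weak direct source-load coupling produces an extra transmission zero.
M = np.array([[0.00, 0.9, 0.7,  0.6,  0.02],
              [0.90, 0.0, 0.0,  0.0,  0.90],
              [0.70, 0.0, 0.3,  0.0,  0.70],
              [0.60, 0.0, 0.0, -0.4, -0.60],
              [0.02, 0.9, 0.7, -0.6,  0.00]])

f = np.linspace(1.5e9, 4.5e9, 801)
s11, s21 = coupling_matrix_response(M, f)
s21_db = 20 * np.log10(np.abs(s21) + 1e-12)          # transmission in dB
```

Sweeping the source-load entry of such a matrix moves the upper-stopband transmission zero, which mirrors the behaviour described for the feed-line gap in the following section.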
Analysis of Transmission Zeros.
Figure 4 shows the transmission characteristics of the filter versus the length of the microstrip feed line. When the length of the microstrip feed line increases, that is, when the coupling gap between the input and output lines decreases, the coupling between source and load increases accordingly. Consequently, the third transmission zero (TZ3) moves towards the passband, whereas the other two transmission zeros remain almost unchanged. This makes it convenient to realize filters with a sharp transition band. At the same time, increasing the length of the microstrip feed line may increase the coupling between the source/load and the resonators, and the transmission zero TZ1 will shift towards the passband. To study the influence of the microstrip resonator on the characteristics of the filter, the frequency responses of the triple-mode BPF and of the dual-mode resonator alone are compared in Figure 5. It is clearly observed that an additional transmission zero is created at about 2.17 GHz when the microstrip open-loop resonator is loaded. This phenomenon can be explained by the fact that the microstrip open-loop resonator adds an extra transmission path to the circuit, which counteracts the signal from the other path at a certain frequency. In addition, the first transmission zero TZ1 remains almost stable, and the transmission zero in the upper stopband of the filter shifts away from the passband when the microstrip open-loop resonator is added to the circuit. Figure 6 gives the simulated transmission characteristics of the filter versus L8. When L8 increases from 4.5 mm to 5.5 mm, TZ2 moves towards lower frequency and TZ3 shifts to higher frequency, while TZ1 remains almost unchanged. When L8 increases from 5.5 mm to 6.5 mm, all three transmission zeros shift accordingly. It is obvious that the bandwidth of the filter enlarges with the increment of L8. Changing L8 modifies the overlapping length between the feed line and the microstrip resonator, so the coupling between the source/load and the microstrip resonator varies accordingly. Moreover, the resonant frequency of the microstrip resonator changes with L8, so that L8 influences not only the bandwidth of the filter but also the locations of the transmission zeros.
Simulation and Experimental Results
To validate the above-mentioned theory, a compact and highly selective triple-mode BPF is designed and fabricated. The designed filter has a central frequency of 2.57 GHz and a fractional bandwidth of 15% with an equal ripple of 0.0432 dB. A substrate with a relative dielectric constant of 3.5 and a thickness of 0.8 mm is used in the design. The obtained parameters of the filter shown in Figure 1 are L8 = 5.5 mm, L9 = 2.6 mm, L10 = 18 mm, W = 1.5 mm, g1 = 0.5 mm, g2 = 0.5 mm, and g3 = 1 mm. The filtering performance is measured using the Network Analyzer AV3926, and a comparison between the EM simulated and measured results is shown in Figure 7. Solid lines and dotted lines indicate the simulated and measured results, respectively. The passband of the proposed filter is from 2.2 GHz to 3.68 GHz, and its passband return loss is larger than 20 dB. Three transmission poles are clearly observed at 2.31 GHz, 2.5 GHz, and 2.82 GHz in the passband of the filter. Three transmission zeros are generated at 1.48 GHz, 2.17 GHz, and 4.18 GHz, which improve the selectivity of the filter. The simulated and measured maximum insertion losses in the passband are 1 dB and 1.12 dB, respectively. Apart from a frequency shift that may be caused by the discrepancy between the nominal and real values of the dielectric constant, the measured results agree well with the simulated results. A photograph of the fabricated filter is shown in Figure 8. The designed filter circuit occupies an overall size of about 30 mm × 15 mm.
Conclusion
A novel miniature microstrip triple-mode bandpass filter is proposed in this paper. Three modes are obtained by the combination of a dual-mode DGS resonator and a microstrip resonator. The coupling matrix of the proposed structure is established to further explain the proposed design. Three transmission zeros are realized in the stopband of the filter, which greatly improve the selectivity and attenuation of the proposed filter. The measured results agree well with the simulated results, verifying the proposed structure and design methodology.
Figure 4: The changes of transmission zeros versus L.
Figure 5: The comparison of frequency responses between triple-mode BPF and dual-mode resonator.
Figure 6: Transmission characteristics of the filter versus L8.
Figure 7: Comparison between EM simulated and measured results of the proposed filter.
Figure 8: Photographs of the fabricated filter: (a) top view and (b) bottom view.
"Physics",
"Engineering"
] |
Energy-momentum tensor correlation function in Nf = 2 + 1 full QCD at finite temperature
We measure correlation functions of the nonperturbatively renormalized energy-momentum tensor in Nf = 2 + 1 full QCD at finite temperature by applying the gradient flow method both to the gauge and quark fields. Our main interest is to study the conservation law of the energy-momentum tensor and to test whether the linear response relation is properly realized for the entropy density. By using the linear response relation we calculate the specific heat from the correlation function. We adopt the nonperturbatively improved Wilson fermion and Iwasaki gauge action at a fine lattice spacing a = 0.07 fm. In this paper the temperature is limited to a single value T ≃ 232 MeV. The u, d quark mass is rather heavy with mπ/mρ ≃ 0.63 while the s quark mass is set to approximately its physical value.
Introduction
Nucleons are expected to undergo a phase transition at high temperature and density and to turn into the quark-gluon plasma (QGP). Several heavy-ion collision experiments are ongoing, and many phenomena have been observed which support the transition to the QGP. One of the most fascinating observations is that of strongly coupled hydrodynamical collective motion, which is well described by the ideal fluid model [1].
At the same time, several attempts have appeared to calculate the viscosity of the QGP using lattice QCD without quarks [2][3][4]. These pioneering works calculate the viscosity from correlation functions of the energy-momentum tensor via the spectral function, making use of the Kubo formula.
However, there are two difficulties in this strategy. One is the ill-defined inverse problem of extracting the continuum spectral function from the correlation function on a lattice with finite temporal extent. The other is the renormalization of the energy-momentum tensor. The energy-momentum tensor is the conserved current associated with translational invariance and is not uniquely defined on the lattice. An exception is the diagonal components of the tensor for the pure gluon system, for which nonperturbative renormalization factors are given by using the thermodynamical relation.
In this paper we avoid the latter difficulty by applying the gradient flow [5][6][7] both to the gluon and quark fields and using it as a nonperturbative renormalization scheme [8,9]. The gradient flow renormalization scheme can be applied to every component of the energy-momentum tensor, even with quarks. The method has already been applied to finite-temperature QCD with Nf = 2 + 1 flavors [10,11], and successful results have been obtained for the equation of state of QCD, the chiral condensate, and the topological susceptibility. The formulation has also been used for a calculation of the energy-momentum tensor correlation function in pure Yang-Mills theory at finite temperature [12]. We apply this strategy to Nf = 2 + 1 flavor QCD and calculate the energy-momentum tensor correlation function. The purpose of this paper is to investigate (i) the conservation law, (ii) the restoration of rotational symmetry, (iii) the consistency between several derivations of the entropy density, and (iv) the specific heat.
Nonperturbative renormalization using gradient flow
We adopt the flow equations given in Refs. [5,7] for the gauge and quark fields, where the field strength and the covariant derivative are given in terms of the flowed gauge field and f = u, d, s denotes the flavor index. The statement of the gradient flow is that any composite operator made of flowed fields is already renormalized and free from UV divergences, provided it is multiplied by the wave-function renormalization factor for quark fields [5,7]. The renormalization scale is given in terms of the flow time as µ = 1/√(8t). The gradient flow can be used as a nonperturbative renormalization scheme when applied to lattice QCD Monte Carlo simulations. An important step in a nonperturbative renormalization is the conversion to commonly used schemes such as the MS or MS-bar scheme, which are defined in perturbation theory. This can be accomplished in two steps: (i) within the nonperturbative scheme, reach a perturbative high-energy region nonperturbatively, for example by using the step scaling function; (ii) convert to the MS or MS-bar scheme with a matching factor calculated perturbatively.
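The flow equations themselves did not survive extraction here; for orientation, the standard forms of Refs. [5,7], which this paragraph appears to assume, read schematically as
\[ \partial_t B_\mu(t,x) = D_\nu G_{\nu\mu}(t,x), \qquad B_\mu(0,x) = A_\mu(x), \]
\[ \partial_t \chi_f(t,x) = D_\mu D_\mu\, \chi_f(t,x), \qquad \chi_f(0,x) = \psi_f(x), \]
where B_µ is the flowed gauge field, G_{νµ} its field strength, χ_f the flowed quark field of flavor f, and D_µ the covariant derivative constructed from B_µ.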
For the gradient flow scheme, the first step is accomplished by following the flow towards the t → 0 limit, which is easily realized in a numerical simulation. The matching factor needed for the second step is calculated in Refs. [8,9] for the energy-momentum tensor at the one-loop level. According to this strategy, the properly normalized energy-momentum tensor, which satisfies the Ward-Takahashi identity in the continuum limit, is given by taking the t → 0 limit of the flowed tensor operator, where ⟨· · ·⟩_0 stands for the vacuum expectation value (VEV), i.e., the expectation value at zero temperature. The flowed tensor operator is given by a linear combination of five operators Õ_iµν(t, x) with matching coefficients c_i(t) given in Refs. [8,9] in the MS-bar scheme.
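Assuming the standard small-flow-time expansion that this paragraph describes (the explicit formula is not reproduced in the extracted text), the renormalized tensor takes the schematic form
\[ T_{\mu\nu}(x) = \lim_{t\to 0} \sum_{i=1}^{5} c_i(t)\, \Big[ \tilde O_{i\mu\nu}(t,x) - \big\langle \tilde O_{i\mu\nu}(t,x) \big\rangle_0 \Big], \]
so that the vacuum subtraction removes the zero-temperature expectation values while the coefficients c_i(t) carry the perturbative matching.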
Note that the five operators are evaluated on the lattice and the continuum limit is taken before the t → 0 limit in (8). The t → 0 limit is necessary in order to remove the mixing with irrelevant dimension-six operators, which is proportional to the flow time t.
Observables
In this paper we calculate two-point correlation functions of the fluctuation of the energy-momentum tensor, where τ is the Euclidean time and ⟨· · ·⟩_T is the one-point function at the same temperature. The renormalized energy-momentum tensor consists of the gluon contribution, given by the first line of (9), and the quark contribution, given by the second and third lines of (9). In the two-point correlation functions, these lead to contributions of two types: (i) disconnected contributions involving quark one-point functions, and (ii) connected quark diagrams. The first type is calculated using the same technique as in Ref. [10], where a noise estimator is adopted for the quark one-point functions by inserting the noise vector at flow time t. In this paper we place the noise vector at flow time t = 0 in order to save numerical cost.
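The defining equation of the correlator did not survive extraction; a form consistent with the description above (a correlator of energy-momentum-tensor fluctuations at Euclidean time separation τ, with the thermal one-point function subtracted) is, schematically,
\[ C_{\mu\nu;\rho\sigma}(\tau) \sim \big\langle\, \delta T_{\mu\nu}(\tau)\, \delta T_{\rho\sigma}(0)\, \big\rangle_T, \qquad \delta T_{\mu\nu} \equiv T_{\mu\nu} - \langle T_{\mu\nu}\rangle_T , \]
with the operators understood to be summed over the spatial volume; the precise normalization is fixed by the corresponding equations of the paper.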
For the second contribution from connected quark diagrams we need the flowed quark propagator [7].
where c_fl is the O(a) improvement factor, for which we adopt the tree-level value c_fl = 1/2. S̃ and C̃ are given in terms of v and w, which stand for space-time points at t = 0. S_f(v, w) is the bare quark propagator at t = 0. K and K† are the flow kernels, which satisfy the ordinary flow equation and the adjoint flow equation [7]. In an evaluation of the propagator (12)
Numerical results
Measurements of the energy-momentum tensor are performed on the Nf = 2+1 gauge configurations generated for Ref. [13]. As given in (9), we need to subtract the zero-temperature values of the operators; zero-temperature gauge configurations, generated for Ref. [14], are also prepared for this purpose.
The nonperturbatively O(a)-improved Wilson quark action and the renormalization-group-improved Iwasaki gauge action are adopted. The bare coupling constant is set to β = 2.05, which corresponds to a = 0.0701(29) fm (1/a ≃ 2.79 GeV). The hopping parameters are set to κ_u = κ_d ≡ κ_ud = 0.1356 and κ_s = 0.1351, which correspond to heavy u and d quarks, mπ/mρ ≃ 0.63, and an almost physical s quark, mη_ss/mϕ ≃ 0.74. See Ref. [10] for a detailed explanation of the numerical parameters. The temperature is limited to a single value, T ≃ 232 MeV.
In this section we perform three tests for the conservation law, the rotational symmetry and the linear response relations. We calculate the specific heat by using the linear response relation.
Conservation law
The spatial integral of the temporal components of the energy-momentum tensor is a conserved charge in the continuum limit and satisfies the conservation law. The three corresponding correlation functions C_00;00, C_20;20 and C_00;22 are plotted as functions of the Euclidean time in Fig. 1 at flow times t/a² = 0.5, 1.0, 1.5, 2.0. As the gradient flow proceeds, the statistical quality of the correlation functions improves. In the middle of the Euclidean time range, 5 ≤ τ/a ≤ 7, the functions are flat within one sigma with high statistical precision for t/a² ≥ 1.5, which we regard as a realization of the conservation law on the lattice. On the other hand, the correlation functions are far from flat near the boundary where we set the point source. This is considered to be due to the a²/τ² lattice artifact.
Rotational symmetry
Under three-dimensional rotational symmetry, the spatial components of the correlation function should take the form given in (19). In Fig. 2 we plot the left-hand side of (19) as a function of the Euclidean time. The signal improves as the gradient flow proceeds. The linear combination is consistent with zero over the whole region within 1σ to 1.5σ.
Linear response relations
In this subsection we test linear response relations using the entropy density. The entropy density can be expressed in three different ways. First, using Maxwell's relation and the integrability condition for the entropy, it is written as (ε + p)/T, which is given by the expectation value of the one-point function of the energy-momentum tensor. Second, starting from the linear response of the pressure to a variation of the temperature, the entropy density is given by a derivative of the energy-momentum tensor; substituting the statistical-physics relation for the expectation value, we obtain an expression in terms of the two-point function of the energy-momentum tensor. The third representation is obtained from the linear response to an infinitesimal Lorentz boost. The correlation functions (23) and (24) are plotted as functions of the Euclidean time in the middle and right panels of Fig. 1 for i = 2. According to the discussion in subsection 4.1, the three data points at 5 ≤ τ/a ≤ 7 are employed and fitted by a constant. The results are plotted in the left panel of Fig. 3 as a function of the flow time, by blue down triangles for (23) and red up triangles for (24), in terms of (ε + p)/T⁴. The contribution from the one-point function (20) is averaged over space-time and plotted by black circles. After taking the t → 0 limit (8), the three contributions give the entropy density (ε + p)/T⁴. We adopt the same strategy as in Refs. [10,11] to take the limit. The flowed operator (9) evaluated on the lattice behaves, as a function of the flow time, as T_µν(t, a) ≈ T_µν + A_µν a²/t + S_µν t + R_µν t², where A_µν appears due to the lattice artifact before taking the continuum limit, and S_µν and R_µν are higher-dimensional irrelevant operators. What we find in our data is a linear window, where the nonlinear contributions a²/t and t² are negligible and the data behave linearly in the flow time. In the figure, the linear window is indicated by the black and red vertical dotted lines for the one-point function (20) and the two-point correlators (23), (24). The data are fitted linearly in t within the window to take the t → 0 limit. The entropy density given by the linear fit is plotted by the corresponding filled symbols near the origin. The three different representations of the entropy density are consistent with each other within the statistical error. This indicates the success of the evaluation of the energy-momentum tensor correlation functions.
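A minimal numerical sketch of the two-step fitting procedure described here (a constant fit over the Euclidean-time plateau at each flow time, followed by a linear fit in t inside the linear window to take the t → 0 limit) might look as follows; the array shapes, window boundaries and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def plateau_fit(corr, tau_lo=5, tau_hi=7):
    """Constant fit of a correlator over the Euclidean-time plateau."""
    window = corr[tau_lo:tau_hi + 1]
    return window.mean(), window.std(ddof=1) / np.sqrt(len(window))

def t_to_zero(flow_times, values, window):
    """Linear fit O(t) = O(0) + S*t inside the linear window; return O(0)."""
    lo, hi = window
    mask = (flow_times >= lo) & (flow_times <= hi)
    slope, intercept = np.polyfit(flow_times[mask], values[mask], 1)
    return intercept                      # the extrapolated t -> 0 value

# Hypothetical usage: corr[i, tau] holds the measured correlator at the i-th
# flow time; the linear window (in units of t/a^2) is chosen by inspection.
# flow_times = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
# plateaus = np.array([plateau_fit(corr[i])[0] for i in range(len(flow_times))])
# entropy_like = t_to_zero(flow_times, plateaus, window=(1.5, 2.5))
```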
Specific heat
The specific heat is given by the response of the energy to a variation of the temperature. Inserting the statistical-physics relation, it is given in terms of the two-point function of the energy-momentum tensor. The correlation function is plotted in the left panel of Fig. 1. Here again, the data at 5 ≤ τ/a ≤ 7 are taken and fitted by a constant. The results are plotted in the right panel of Fig. 3 as a function of the flow time. As in the previous subsection, we find the linear window, indicated by the red vertical lines, and perform the linear fit to take the t → 0 limit. The resulting specific heat is plotted by the filled black circle at the origin; its value is c_V = 38(27).
Conclusion
The two-point correlation function of the renormalized energy-momentum tensor is calculated in Nf = 2 + 1 QCD on the lattice. On the lattice we confirmed that the conservation law is realized away from the point source. The spatial rotational symmetry is also realized within the statistical error. The nonperturbative renormalization on the lattice is performed by applying the gradient flow to the gluon and quark fields and taking the a → 0 and then t → 0 limits. Both limits are realized by making use of the linear window and applying a linear fit. This procedure is tested using the entropy density: three different representations are consistent with each other in the t → 0 limit. Finally, we calculate the specific heat by applying the method.
A future application will be the derivation of the bulk and shear viscosities using the corresponding two-point correlation functions shown in Fig. 4. We are also planning to calculate the correlation functions on gauge configurations with physical quark masses, which are used for the derivation of the equation of state in Ref. [15].
"Physics"
] |
Costing recommended (healthy) and current (unhealthy) diets in urban and inner regional areas of Australia using remote price collection methods
Objective: To compare the cost and affordability of two fortnightly diets (representing the national guidelines and current consumption) across areas containing Australia’s major supermarkets. Design: The Healthy Diets Australian Standardised Affordability and Pricing protocol was used. Setting: Price data were collected online and via phone calls in fifty-one urban and inner regional locations across Australia. Participants: Not applicable. Results: Healthy diets were consistently less expensive than current (unhealthy) diets. Nonetheless, healthy diets would cost 25–26 % of the disposable income for low-income households and 30–31 % of the poverty line. Differences in gross incomes (the most available income metric which overrepresents disposable income) drove national variations in diet affordability (from 14 % of the median gross household incomes in the Australian Capital Territory and Northern Territory to 25 % of the median gross household income in Tasmania). Conclusions: In Australian cities and regional areas with major supermarkets, access to affordable diets remains problematic for families receiving low incomes. These findings are likely to be exacerbated in outer regional and remote areas (not included in this study). To make healthy diets economically appealing, policies that reduce the (absolute and relative) costs of healthy diets and increase the incomes of Australians living in poverty are required.
Dietary risks are among the leading risk factors for premature morbidity and mortality globally (1,2) . These risk factors comprise diets high in discretionary foods and beverages (which contain excessive amounts of added sugars, saturated fats, salt and/or alcohol) and diets low in whole grains, fruits, vegetables, legumes, nuts and seeds and dairy (i.e. healthy or recommended foods and beverages) (2,3) . Despite international health agendas having long identified the need to create healthy food environments to shift population diets towards healthier patterns and reduce the global burden of disease (4,5) , high-level actions have been inadequate (6,7) . There is widespread recognition by public health experts, and members of the public, that the cost and affordability of foods and beverages continue to pose major obstacles to the purchase and consumption of healthy foods and diets over unhealthy foods and diets (8)(9)(10) .
The 2020 United Nations Report on The State of Food Security and Nutrition in the World - Transforming Food Systems for Affordable and Healthy Diets clearly conveys how food and beverage costs and their affordability are key determinants of malnutrition globally, including both undernutrition and obesity (8,11). The findings from the report estimate that in 2017, healthy diets (which include eating diverse and culturally specific foods from multiple food groups) were priced 60 % higher than diets that only met essential nutrient requirements (based on a few food items) across 170 low-, middle- and high-income countries. Moreover, healthy diets were estimated to be unaffordable for 3 billion people. It should be noted that the affordability of foods, beverages and diets is a function of both their cost and a household's income. With recent global events, such as the COVID-19 pandemic and climate change (e.g. the 2019-2020 Australian bushfires), severely impacting employment and food systems, we may see unfavourable impacts on people's incomes and the price of foods and beverages (8). Combined, these factors highlight the urgent need for ongoing monitoring of the price and affordability of healthy and current (unhealthy) diets.
While a number of studies have assessed the cost and affordability of healthy and unhealthy foods, beverages and diets internationally and within Australia (12)(13)(14)(15) , surveys have been infrequently conducted and have typically been limited to defined geographical regions. In part, food and beverage price monitoring has been limited by the resource-intensive nature of data collection, which traditionally requires researchers to physically travel to different retail outlets to collect prices in-store. To reduce the resource intensiveness of food and beverage price data collection and increase the scale of data collection across geographical regions, there is potential to capitalise on the increasing availability of online food and beverage price data (12,16) .
The use of a wide variety of tools and methods to measure the cost or affordability of healthy and unhealthy foods, beverages and diets has also limited the comparability of analyses across different regions (17). In light of this, the Healthy Diets Australian Standardised Affordability and Pricing (HD-ASAP) protocol (18) was developed to provide a standardised and optimal approach to assessing the cost, cost-differential and affordability of the current (unhealthy) Australian diet (19) and a healthy (recommended) diet (3). Using the HD-ASAP protocol, we recently compared estimates of the cost and affordability of healthy and current (unhealthy) diets using data collected in-store with data collected online and by phone calls in Victoria, Australia (12). This pilot study illustrated that, in major cities and inner regional areas where major supermarkets are present, collecting data online and by phone calls can be both reliable and significantly less resource-intensive compared with traditional in-store methods. Low-resource food and beverage price monitoring has been recognised by multiple United Nations organisations and international experts as pivotal to informing appropriate regulatory measures that promote and protect the affordability of healthy diets, over and above unhealthy options, globally (8,16,20).
To date, few studies have estimated the cost and affordability of healthy and current (unhealthy) diets across an entire nation, owing to the difficulty and resources required to collect such data in-store across geographically dispersed areas. Whilst optimal methods to cost diets continue to evolve within Australia, one of the most geographically dispersed and diverse regions in the world, they have never been applied to more than eighteen local areas in one or two of Australia's eight States or Territories (i.e. small-scale piloting) (21). We aimed to address this gap by upscaling our reliable lower-resource price monitoring methods to determine whether the cost, cost differential and affordability of healthy (recommended) and current (unhealthy) diets varied across areas where major supermarkets are present in Australia. It is important to note that, due to the absence of major supermarkets, outer regional, remote and very remote areas were not included in the current analysis.
Study design
A cross-sectional study was conducted using publicly available food and beverage price data collected online if available (supermarket, alcohol and some fast-food chains) and by phone calls from all other outlets (fast-food chains, independent bakeries, fish and chip shops and convenience stores). Food and beverage price data were collected over a 2-week period in May 2019. This period did not include any festive events, such as Easter, that may have affected the results.
The healthy diets Australian standardised affordability and pricing protocol The current study was guided by the HD-ASAP protocol for measuring the cost and affordability of healthy (recommended) and current (unhealthy) diets in Australia, which has been described in detail elsewhere (18) . The healthy diet reflects the recommendations of the Australian Dietary Guidelines (3) (and comprises forty-three representative foods and beverages across seven food groups). The current (unhealthy) diet reflects mean dietary intakes reported for selected age/gender groups in the National Nutrition and Physical Activity Survey of the Australian Health Survey 2011-2013 (19) . The current diet comprises the same core items as the healthy diet in different amounts plus thirty-two representative unhealthy foods and beverages in amounts that exceed dietary recommendations (see Table 1). For example, the diets assume that a household currently consumes 0·9 kg of bananas per fortnight compared with 5·5 kg required to meet recommended fruit intakes and 0·6 kg of table sugar compared with no added sugar as per dietary recommendations (specific diet details have been published elsewhere (14) ).
Both diets are adjusted to reflect fortnightly consumption for a reference household consisting of four people: an adult male 31-50 years old, an adult female 31-50 years old, a 14-year-old boy, and an 8-year-old girl. For this family, 35 % of the energy provided by the current diet is derived from 'discretionary' foods and beverages, that is, those that are not required for health and are high in added sugars, saturated fats, salt and/or alcohol (19). Compared with the current diet, the healthy diet is more environmentally sustainable (3), requiring less water, supporting greater biodiversity and having a lower carbon footprint (on average 25 % less greenhouse gas emissions) (3,22). The total energy content of the two household diets is similar: 33 610 kJ/d for the healthy diet and 33 860 kJ/d for the current (unhealthy) diet (18).
To cost the healthy and current (unhealthy) diets, the prices of a predetermined list of foods and beverages (Table 1; full protocol also includes product brands and sizes that are commonly purchased by Australians, including fast-food and alcohol) are collected from specific food retailers in a selected area (18) (described in further detail below).
Sample
We used simple stratified random sampling to select a sample of areas where food and beverage stores and prices were collected. The Statistical Area 2 (SA2) geographical unit was used for sampling. These units are Australian Bureau of Statistics-defined areas that represent communities (between 3000 and 25 000 people) that interact socially and economically (23). All SA2s across the eight Australian States and Territories were eligible for inclusion except for SA2s where major supermarkets do not operate. This resulted in the exclusion of eighteen non-spatial special purpose SA2s and SA2s for the 'Other Territories' (Jervis Bay, Cocos (Keeling) Island, Christmas Island and Norfolk Island). All remaining SA2s were classified twofold: according to relative socio-economic disadvantage and remoteness. SA2s were classified into quintiles of relative disadvantage at the State/Territory level based on the Index of Relative Socio-economic Disadvantage (IRSD; Q1: most disadvantaged, Q5: least disadvantaged (24)). By remoteness, SA2s were classified using the Accessibility/Remoteness Index of Australia (ARIA+) (25). The two major supermarket chains of interest were only found in 'Major cities' (ARIA+ 1) or 'Inner regional areas' (ARIA+ 2), thus our study was restricted to these regions and could provide no information about food prices in outer regional, remote or very remote areas. If a State or Territory did not include an IRSD or ARIA+ stratum, the stratum was excluded from the study.
To systematically sample SA2s, each SA2 was assigned a consecutive number, within each IRSD quintile and ARIA+ category, for each Australian State or Territory. We aimed to sample one SA2 to align with each IRSD and ARIA+ stratum using a random number generator. The inclusion of selected SA2s was confirmed if they contained the two major Australian supermarket chains and their alcohol chains. When a selected SA2 did not meet this requirement (ARIA+ 1/major cities = 200 SA2s excluded, 15 %; ARIA+ 2/inner regional areas = 103 SA2s excluded, 21 %), a new SA2 was randomly selected.
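The stratified sampling step described above can be summarised in a short sketch; the data structure and field names below are illustrative assumptions rather than part of the HD-ASAP protocol.

```python
import random

def sample_sa2s(sa2_table, seed=2019):
    """One randomly selected eligible SA2 per (state, IRSD quintile, ARIA+) stratum.

    sa2_table: list of dicts with keys 'sa2', 'state', 'irsd_q', 'aria' and
    'has_major_chains' (True when both major supermarket chains and their
    alcohol chains operate in the SA2). Field names are hypothetical.
    """
    random.seed(seed)
    strata = {}
    for row in sa2_table:
        strata.setdefault((row['state'], row['irsd_q'], row['aria']), []).append(row)

    sample = []
    for rows in strata.values():
        random.shuffle(rows)
        # keep drawing until an SA2 containing the major chains is found
        chosen = next((r for r in rows if r['has_major_chains']), None)
        if chosen is not None:            # stratum dropped if no eligible SA2
            sample.append(chosen)
    return sample
```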
Our sample of retailers included the two major supermarkets and their alcohol chains (which possess approximately two-thirds of the grocery market share and are most likely to represent food and beverage prices in Australia (26) ). Non-supermarket retailers that sell commonly consumed fast-food items (27) (McDonald's, Domino's, independent bakeries, fish and chip shops and convenience stores or gas service stations such as 7 Eleven for a pre-prepared chicken sandwich) were also included and sampled using Google Maps, with the most central retailers selected. These selected retailers were all required to be within seven kilometres of the centre of the SA2 (as per the original HD-ASAP protocol (18) ).
Data collection
Prices for the predetermined list of foods and beverages were collected online from two supermarkets, two alcohol stores and one of each of the fast-food chains (as specified). The researcher's geolocation was set to the selected SA2 (on the online platform) before price collection began. Foods and beverages were located using the website's search function. When an item was not listed online, prices for the cheapest, similarly sized, branded item were recorded (as per the HD-ASAP approach). Prices for included foods from independent bakeries (beef pie), fish and chip shops (small/minimum chips) and convenience stores (chicken sandwich), as well as roast chicken from a supermarket, were collected by phone call. In Tasmania, data collection was affected by the absence of one of the alcohol retail chains; therefore, alcohol prices from a single retail chain were used when estimating diet costs for Tasmania. One Tasmanian SA2 (in the second SEIFA quintile) was excluded due to a data collection error for one major supermarket chain and non-supermarket items.
Table 1. Foods included in the Healthy Diets Australian Standardised Affordability and Pricing (HD-ASAP) protocol* (adapted from Lee et al. (18)). Columns: foods and beverages included in the healthy (recommended) diet; foods and beverages included in the current (unhealthy) diet, in addition to foods included in the healthy diet in reduced amounts† (12).
†Unhealthy diets reflect current food and beverage consumption levels for a reference family across the Australian population according to the 2011-2013 Australian Health Survey (19).
Data analyses
The mean cost, cost differential and affordability of the healthy and current (unhealthy) diets were calculated for each State and Territory. A standardised HD-ASAP template was used to convert the price per unit for each food and beverage item to the price per edible gram or millilitre ($AUD). The prices of supermarket and alcohol items were averaged across the two stores in each SA2 and supplemented with prices collected from individual fast-food stores. For each food or beverage item, this was then multiplied by the amount recommended (healthy) or currently consumed (unhealthy) by the reference household for a fortnight. These costs were then summed to determine the diet costs. No consistent trends were observed for differences in diet costs across IRSD categories or major cities/inner regional ARIA+ areas (see online supplemental Table S1). As such, the results were aggregated at the State level.
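As a rough illustration of the costing step just described (unit prices converted to prices per edible gram or millilitre, averaged across the two supermarkets, multiplied by the fortnightly amounts for the reference household and summed), a minimal sketch might be:

```python
def diet_cost(prices, quantities):
    """Fortnightly diet cost ($AUD) for the reference household.

    prices: {item: [price_per_edible_unit_store1, price_per_edible_unit_store2]}
      in $AUD per edible gram or millilitre (fast-food items may carry a
      single price).
    quantities: {item: edible grams or millilitres consumed per fortnight}.
    Item names and units are illustrative; the HD-ASAP template defines them.
    """
    total = 0.0
    for item, qty in quantities.items():
        store_prices = prices[item]
        mean_price = sum(store_prices) / len(store_prices)   # average across stores
        total += mean_price * qty
    return total

# cost_healthy = diet_cost(prices, healthy_amounts)
# cost_current = diet_cost(prices, current_amounts)
# abs_diff = cost_current - cost_healthy
# rel_diff = abs_diff / cost_current   # differential as a share of the unhealthy diet
```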
The absolute cost differential of the healthy and current (unhealthy) diet for each State and Territory was calculated in dollars (by subtracting the mean cost of the healthy diet from that of the unhealthy diet) and as a relative percentage difference (mean differential in dollars divided by the mean cost of the unhealthy diet). We additionally calculated and reported the mean fortnightly costs of key food groups within each diet, for the reference household. These included fruits; vegetables and legumes; grains and cereals; meats, nuts, seeds and eggs; milk, yoghurt and cheese; alcoholic beverages; take-away foods and soft drinks. The State-level estimates of the cost of a healthy diet, a current (unhealthy) diet and each food group are represented as means and standard deviations across all SA2s in each State or Territory. Mean costs were compared between States and/or Territories using Mann-Whitney U tests (α level of 0·05) in Stata 16.
The mean affordability of the healthy and current (unhealthy) diets in each State and Territory was first assessed against the national poverty line. In 2017-2018, this was $960/week for a couple with two children (not taking into account housing costs) (28) . Affordability was also assessed against a national indicative low disposable household income that estimates a minimum wage-based household disposable income calculated in line with the HD-ASAP protocol (see online supplemental Table S2) (18) . For both denominators, a diet affordability threshold of 30 % was used (i.e. diet costs should not exceed 30 % of the minimum incomes available to Australians experiencing economic hardship) (18) .
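The affordability assessment then reduces to expressing the fortnightly diet cost as a share of a fortnightly income benchmark and comparing it against the 30 % threshold; a minimal sketch, using the poverty-line figure quoted above as the example income, is:

```python
def affordability(diet_cost_fortnight, income_fortnight, threshold=0.30):
    """Diet cost as a share of fortnightly income, flagged against the
    30 % benchmark used in the HD-ASAP protocol."""
    share = diet_cost_fortnight / income_fortnight
    return share, share <= threshold

# Poverty line quoted in the text: $960/week for a couple with two children,
# i.e. $1920 per fortnight. A ~$600 fortnightly healthy diet gives a share of
# roughly 0.31, i.e. just above the 30 % affordability threshold.
share, affordable = affordability(600.0, 2 * 960.0)
```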
Finally, to enable area-level comparisons, affordability was assessed against the median total gross household income (which is the only readily available metric) for each State and Territory (Australian Bureau of Statistics, 2016 Census (29) ). Total gross household income was adjusted by the wage price index as per the HD-ASAP protocol (18) .
Results
A total of 51 SA2s were sampled in the current study (see online supplemental Tables S3 and S4). This varied across States and Territories, from two eligible SA2s in the Australian Capital Territory (ACT) (where there were no SA2s classified in the IRSD Q1-3 or inner regional strata) to 10 SA2s in Victoria (one per IRSD quintile across major cities and inner regional ARIA+ areas). Each SEIFA quintile contained 8-12 SA2s, and major cities and inner regional areas each contained 26 SA2s. Food and beverage price data were collected from a total of 455 retail stores (n 102 supermarket, n 98 alcohol, n 51 McDonald's, n 51 Domino's, n 51 bakeries, n 51 fish and chips and n 51 convenience stores).
Diet costs and cost differentials
For all States and Territories, the mean fortnightly cost of the current (unhealthy) diet for the reference household of four people was higher than that of the healthy diet (Table 2). There was some variation in the costs of the diets (healthy and unhealthy) within and between States and Territories (Fig. 1; Table 2). The absolute and relative cost differentials (between a healthy and unhealthy diet) were lowest for the Northern Territory (NT) ($139·38 per fortnight; 19 %) and highest for Queensland ($159·89 per fortnight; 21 %). The healthy diet was $16·78 (3 %) per fortnight more expensive in Victoria, where it was most expensive ($602·72 per fortnight), compared with Western Australia, where it was cheapest ($585·94 per fortnight). The current (unhealthy) diet was $33·32 (5 %) more expensive per fortnight in the NT, where it was most expensive ($764·68), compared with Tasmania, where it was cheapest ($732·85). Table 3 outlines the cost of each food group, within the healthy and current (unhealthy) diets, across each State and Territory. Within the healthy diet food groups, the costs of grains and cereals and of meats, nuts, seeds and eggs were similar across all States and Territories. In comparison, the largest variations between the most and least expensive jurisdictions were in the cost of milk, yoghurt and cheese (16 % more expensive in Western Australia compared with New South Wales), fruit ($11-12 per fortnight or 15-16 % more expensive in the NT and ACT compared with Queensland) and vegetables and legumes (9 % more expensive in Victoria compared with South Australia). The costs of alcoholic beverages and soft drinks were relatively stable across each State and Territory. Take-away foods were most expensive in the NT and cheapest in Tasmania (14 % cost difference).
Diet affordability
When assessed against the national poverty line (same across all States and Territories), both the healthy and current (unhealthy) diets were considered unaffordable across all major cities and inner regional areas (costing 31 % and 38-40 % of the value of the poverty line, respectively) ( Table 2). Against the indicative low disposable income (also the same across the nation but higher than the poverty line), the healthy diet was considered affordable (25-26 % of income), but the current (unhealthy) diet was not (31-33 %).
When assessed against the median gross household income (noting that gross income overrepresents the affordability of diets compared with disposable income) that varied between States and Territories, there was greater variation in the affordability of the diets as a percentage of income (Table 2). While the healthy diet was considered affordable for all major cities and regional areas across States and Territories, it made up only 14 % of gross income in both the ACT and the NT (noting that state-level gross incomes can overrepresent disposable incomes for lower income households, particularly in remote areas) compared with 25 % in Tasmania. Similarly, the current (unhealthy) diet cost only 17-18 % of the gross incomes in both the ACT and the NT compared with 31 % in Tasmania.
Discussion
In 2019, an Australian household would find healthy diets to be less expensive and more affordable than current (unhealthy) diets in all major cities and inner regional areas sampled. The higher cost of the current diet (compared to the healthy diet) is due to the additional costs incurred from the consumption of excessive amounts of unhealthy foods and beverages. This occurs despite the healthy and current diets being similar in energy content; per reference household, the current diet provides 33 860 kJ/d and the recommended diet provides 3600 kJ/d (18) . Similar results have been observed in several studies in Australia (12,21,30) and New Zealand (31) .
This evidence is contrary to existing perceptions and literature suggesting that healthy diets are more expensive than current (unhealthy) diets by, on average, $US 1·50/person/d (10,32) . In comparison, standardisation of our estimates indicates that healthy diets are less expensive than current (unhealthy) diets by approximately $US 2/person/d in Australia. One explanation for this discrepancy is the difference in methodological approaches, particularly given the wide range of selected food and beverage items that make up the different diets across studies (33) . Notably, unlike many other methods, the HD-ASAP protocol includes alcohol and high intakes of fast foods when estimating current (unhealthy) diet costs, but not in the healthy diet, as these items are not included in Australia's national dietary guidelines (3) . This adds additional costs to the current (unhealthy) diet (i.e. alcoholic beverages comprising 12 % of the current food budget in Australia (21) ), making the healthy diet relatively more affordable than when these items are excluded from costing. Importantly, the relative affordability of healthy compared with current (unhealthy) diets in Australia is likely, in part, attributable to the 10 % Goods and Services Tax (GST) exemption for all basic, healthy foods (34) . Furthermore, whilst we found that healthy diets were more affordable than unhealthy (current) diets across Australia, few methodological approaches consider the time costs associated with preparing healthy foods (35) or how price promotions encourage consumers to make purchasing decisions (often for unhealthy options) on a food-by-food rather than whole-of-diet basis (10,36) . These factors are also likely to contribute to current perceptions around the relative affordability of current (unhealthy) over healthy diets and to interplay with many other factors that influence food consumption practices (e.g. food access, income, etc.) (10,21) .
We must note that our results do not reflect State-or Territory-wide diet cost or affordability estimates as we could not sample outer regional, remote or very remote areas where there are no major supermarkets. Nonetheless, we demonstrated that both healthy and current (unhealthy) diets would be unaffordable across all Australian jurisdictions for households living on the poverty line. Interstate affordability differences were primarily observed when assessed against median gross household incomes. That is, a healthy diet would be most affordable in major cities and inner regional areas in the ACT and NT (where median gross household incomes are highest) and least affordable in Tasmania. These findings highlight the need to ensure all households receive adequate incomes that provide the opportunity to afford and consume a healthy diet.
Diet cost variability in Australia
Table 3. Mean fortnightly costs ($AUD) of key food and beverage groups in current (unhealthy) and healthy amounts consumed by a reference household of two adults and two children, by state and territory.
When assessing the average fortnightly cost of a healthy diet in major cities and inner-regional areas, we found that it differed between States and Territories, with a maximum difference of $16·78 ($585·94 in Western Australia to $602·72 in Victoria, equating to an additional $436 per annum for a family living in a major city or inner regional area in Victoria). These differences were largely driven by milk, yoghurt and cheese prices. Across 1 year, milk, yoghurt and cheese in the healthy diet would be approximately $354 more expensive for a family living in a major city or inner-regional area in Victoria compared with Western Australia. Between States and Territories, the maximum difference in the average fortnightly cost of the current (unhealthy) diet was $33·32, with differences predominantly driven by variations in the price of take-away foods. The cost of take-away foods would be approximately $540 more expensive, per annum, for a family in inner regional areas (no other ARIA+ areas sampled) of the NT compared with Tasmania.
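A quick arithmetic check of the annualised figure quoted above, assuming 26 fortnights per year (which is how the per-annum amounts appear to have been derived):

```python
# Illustrative arithmetic only; the cost figures are taken from the text above.
FORTNIGHTS_PER_YEAR = 26

healthy_gap = 602.72 - 585.94                     # Victoria vs Western Australia, AUD/fortnight
print(round(healthy_gap, 2))                       # 16.78
print(round(healthy_gap * FORTNIGHTS_PER_YEAR))    # ~436 AUD per annum
```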
Diet affordability in Australia
When assessed against the national poverty line and indicative low disposable household income, both healthy and current (unhealthy) diets for Australian families living in major cities and inner regional areas were consistently unaffordable in all States and Territories. Our estimates in this national study align with our previous findings for eight areas with major supermarkets in Victoria in 2018, whereby healthy and current (unhealthy) diets were found to cost 33 % and 40 % of the national poverty line (income) and 26 % and 31 % of the indicative low disposable household income (12) . Additional evidence has suggested that diets are largely unaffordable in other parts of Australia where incomes are low, including in outer regional Victoria and remote communities (13,15,37,38) . Emerging international evidence is beginning to elaborate on our understanding of income-driven or income-related food insecurity, which is thought to affect one in ten people across sixteen European countries (39) .
Household incomes in Australia
We further exemplified how income can drive differences in diet affordability by examining interstate differences according to median gross household income (the only available state-level metric). Our analyses suggest that healthy and current (unhealthy) diets were most affordable in major cities and inner regional areas of the NT and ACT, where median gross household incomes are highest. Yet these results are likely to conceal income inequalities between professionally employed (e.g. mining) groups and other groups experiencing socio-economic disadvantage in the NT (inequalities that are also likely to exist in other jurisdictions). Of particular concern is how the median weekly gross household income for Aboriginal and Torres Strait Islander peoples in the NT ($1225/week) (40) is 38 % less than the median weekly household income for the whole territory ($1983/week) (41) . When using the median gross household income for Aboriginal and Torres Strait Islander peoples in the NT, a healthy diet becomes overtly unaffordable (48 % of this income, aligning with previous research (15) ). Literature increasingly points towards the importance of income and poverty as significant determinants of diet affordability, which in turn underscores the need to better consider food and beverage prices from a social systems perspective.
Strengths and limitations
A key strength of the current study was the collection of data online and by phone calls, which reduced the financial expenditure and time required for food and beverage price collection, enabling simultaneous data collection across all Australian States and Territories for the first time.
The reliability of these estimates can be inferred from a number of previous studies which have used the in-store HD-ASAP approach across a smaller number of areas (12,42) . For example, using data collected in-store in 2015, Lee et al. found that a healthy diet cost $603 and $627 in two major cities in New South Wales (compared with our state-wide estimate of $594 in 2019) and current diets cost $730 and $761 (compared with our state-wide estimate of $748) (42) . Nonetheless, our sampling revealed that only major cities and inner regional areas contained both major supermarkets, which precluded diet pricing evaluations across three other ARIA+ classifications (outer regional, remote and very remote areas). The presence of major supermarkets was particularly limited in the NT (and completely absent in the Territory's most disadvantaged IRSD quintiles) compared with most other States and Territories (see online supplemental Tables S3 and S4). Our price collection methods are further hindered in outer regional and remote areas by the absence of online food and beverage pricing data for the dominant independent grocers, including Independent Grocers of Australia.
Implications for research and policy
Additional research is required to extend our low-resource data collection methods to parts of Australia (and the world) that do not have major online supermarkets, including remote communities where small stores are the main source of food and food prices have been repeatedly shown to be up to 60 % more expensive (37,43,44) . Citizen science or crowdsourcing approaches may be one useful way to engage community members in price collection using novel digital platforms (45,46) . Such approaches can also empower everyday citizens to be agents of change and contribute to the development of local food and economic policies that protect and promote diet-related health. In the meantime, our study strengthens the rationale for governments to continue funding in-store data collection of food and beverage prices in rural, remote and very remote settings.
Traditional food and beverage price monitoring methods, including the HD-ASAP approach, are also typically limited in their inclusion of diet variety and actual consumption/expenditure practices. Our methods have the potential to address these issues into the future; for example, by developing large food price data sets to examine the cost and affordability of different diet patterns, product types and prices, thereby leading to estimates that may better reflect the variability of population diets.
To inform more robust diet affordability calculations into the future, the available income data will need to be improved to include measures of median disposable household income (according to socio-economic position, remoteness and Aboriginal and Torres Strait Islander status). Comprehensive monitoring of food and beverage costs and their affordability can ultimately inform food pricing policy actions that can improve the healthiness of population purchases, especially among those experiencing socio-economic hardship (37,43,47) . This may include providing data to show exactly how much more unaffordable healthy diets would become if the GST base was extended to include basic healthy foods, or identifying specific food groups where pricing policies may be targeted (e.g. Sugar Sweetened Beverage taxes and regulations on supermarket price promotions) (16) . Social protection policies should also be revised to improve the social determinants of health (i.e. income, employment and housing) and the affordability of basic necessities in Australia (38,39,48) .
Conclusions
Our study demonstrates that whilst the cost of a healthy and current (unhealthy) diet may be similar across major cities and inner regional areas (n fifty-one areas which have major supermarkets) in all eight Australian States and Territories, differences in diet affordability are apparent. Notably, a healthy diet remains unaffordable for families living below the poverty line, who are also at greater risk of diet-related ill-health (49) . To reduce inequities in diet-related disease and death in Australia, it is essential that food, social and economic policies are enacted to promote the economic appeal of healthy over current (unhealthy) diets and ensure that everyone receives a sufficient income which supports key opportunities to be healthy. This is particularly important now as the world experiences radical shifts in pricing and income structures due to unprecedented climate and health pandemics (38) . The development and application of robust food and beverage price monitoring systems (to inform effective policy actions) is also arguably more important now than ever before. | 7,116.2 | 2021-09-21T00:00:00.000 | [
"Economics",
"Medicine"
] |
Data of fluorescence, UV–vis absorption and FTIR spectra for the study of interaction between two food colourants and BSA
In this data article, the fluorescence, UV–vis absorption and FTIR spectra data of BSA-AR1/AG50 system were presented, which were used for obtaining the binding characterization (such as binding constant, binding distance, binding site, thermodynamics, and structural stability of protein) between BSA and AR1/AG50.
Experimental factors
The solution of BSA was prepared in phosphate buffer (0.05 M NaH2PO4-Na2HPO4; pH = 4.8, 5.5, 6.3 or 7.4).
Value of the data
The data allow readers to compare the binding affinity of AR1/AG50 with BSA directly; the data are helpful to readers for further understanding the related parameters calculated; and the data may be of great help for studying similar systems in detail.
Data
The interaction data of BSA with AR1/AG50 were determined using the Cary Eclipse fluorescence spectrofluorimeter (Varian, USA), a UV-3600 spectrophotometer (Shimadzu, Japan) or a Nicolet-6700 FTIR spectrometer, and these data were presented as fluorescence quenching spectra, Stern-Volmer plots in the absence and presence of ethanol/NaCl, RLS spectra, UV-vis absorption spectra, UV-melting profiles, synchronous fluorescence spectra, and FTIR spectra. Corresponding parameters were calculated based on the interaction data.
In addition, to make the figures in the text clearer, all of the figures were processed with Photoshop 8.1 software. ChemOffice 2008 was used for drawing the structures of acid red 1 and acid green 50 (Fig. 1). Acid red 1 (AR1) and acid green 50 (AG50) were obtained from J&K Scientific Ltd. (Beijing, China) and Acros Organics (New Jersey, USA), respectively. All other chemicals were of analytical reagent grade.
Fluorescence quenching of BSA by AR1/AG50
3.0 mL of BSA solution (2.0 μM) was titrated by successive additions of AR1/AG50 solution at a concentration of 3.0 × 10⁻⁴ mol L⁻¹ under different conditions (pH = 4.8, 5.5, 6.3 or 7.4; T = 293, 298, 304 or 310 K; c(NaCl) = 0.0, 0.04, 0.09 or 0.15 M; and/or ethanol content = 0%, 5% or 10%), and the final concentration of AR1/AG50 was kept at 11.54 × 10⁻⁶ mol L⁻¹. The fluorescence quenching of BSA with the addition of AR1/AG50 was recorded in the range of 300-500 nm on the Cary Eclipse fluorescence spectrofluorimeter (Varian, USA). The widths of the excitation and emission slits were set at 5 nm, and the excitation wavelength was 280 nm. The temperature of the samples was maintained by recirculating water throughout the experiment. All fluorescence titration experiments were performed manually with a 50 μL microsyringe.
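As a rough check of the titration figures above, a minimal sketch assuming simple volume-additive dilution; the 120 µL cumulative addition is an inferred, illustrative value and is not stated in the original text.

```python
# Minimal sketch assuming simple volume-additive dilution; the 120 uL cumulative
# addition is an inferred, illustrative value, not stated in the original text.

def ligand_concentration(added_volume_L, stock_molar=3.0e-4, initial_volume_L=3.0e-3):
    """Ligand concentration (mol/L) after adding a cumulative volume of stock."""
    moles = added_volume_L * stock_molar
    return moles / (initial_volume_L + added_volume_L)

print(ligand_concentration(120e-6))   # ~1.15e-05 mol/L, i.e. ~11.5 uM as quoted above
```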
The figures of fluorescence quenching spectra ( Fig. 2) were made using Origin 7.5.
2.2.7. The effect of ethanol on the quenching plots of BSA-AR1/AG50 system
The measured fluorescence quenching data of BSA by AR1/AG50 without or with ethanol content (5% or 10%) at pH = 4.8, 5.5, 6.3 and 7.4 were corrected [1] and fitted by Origin 7.5 based on Eq. (1) (Fig. 8). FTIR spectra in the 1800-900 cm⁻¹ region for free BSA (0.2 mM) and for the BSA-AR1 and BSA-AG50 complexes (the molar ratio of BSA to AR1 or AG50 maintained at 1:1), together with their corresponding difference spectra, are shown in the figure; the contribution of AR1 or AG50 was subtracted to obtain the difference spectra in this region.
Table 3. The binding constants K, binding site numbers n and thermodynamic parameters for the BSA-AR1/AG50 system at different conditions.
UV-vis absorption spectra were recorded on the UV-3600 spectrophotometer at pH 4.8 and 7.4, respectively, and the corresponding absorption spectra were fitted using Origin 7.5 based on Eq. (2) (Fig. 9).
The synchronous fluorescence spectra
Synchronous fluorescence spectra of BSA (2.0 μM) with increasing AR1/AG50 concentration (0-75.00 μM) at Δλ = 15 and 60 nm were recorded using the Cary Eclipse fluorescence spectrofluorimeter in the range of 250-500 nm. Other scanning parameters were the same as those of the fluorescence titration experiments. The corresponding figures (Fig. 13) were made with Origin 7.5.
2.2.13. FTIR spectra
FTIR spectra of free BSA (0.2 mM) and of the BSA-AR1 and BSA-AG50 complexes (the molar ratio of BSA to AR1 or AG50 maintained at 1:1) were recorded on the Nicolet-6700 FTIR spectrometer via attenuated total reflection (ATR) at a resolution of 4 cm⁻¹ and 64 scans in the range of 400-4000 cm⁻¹ at room temperature. The corresponding absorbance contributions of buffer and free AR1/AG50 solutions were recorded and digitally subtracted with the same instrumental parameters, and the resulting FTIR spectra (Fig. 14) were processed with OMNIC.
The parameters of S-V plot
The parameters of fluorescence quenching for the BSA-AR1/AG50 system at different conditions were calculated using the S-V equation [1].
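For reference, the relations conventionally used in this kind of analysis are given below, assuming the standard forms (the source cites but does not reproduce them); the double-logarithm and van't Hoff expressions correspond to the binding-parameter calculations described in the next subsection:

$$\frac{F_0}{F} = 1 + K_{SV}[Q] = 1 + k_q \tau_0 [Q]$$

$$\log\frac{F_0 - F}{F} = \log K_b + n \log[Q]$$

$$\ln K_b = -\frac{\Delta H^\circ}{RT} + \frac{\Delta S^\circ}{R}, \qquad \Delta G^\circ = \Delta H^\circ - T\,\Delta S^\circ$$

where $F_0$ and $F$ are the fluorescence intensities in the absence and presence of the quencher, $[Q]$ is the quencher (AR1/AG50) concentration, $K_{SV}$ is the Stern-Volmer quenching constant, $k_q$ is the bimolecular quenching rate constant, $\tau_0$ is the fluorophore lifetime without quencher, $K_b$ and $n$ are the binding constant and number of binding sites, and $R$ and $T$ are the gas constant and absolute temperature.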
2.2.15. Effect of pH, NaCl and ethanol on the binding parameters of BSA-AR1/AG50 system
The binding parameters of the two systems (Tables 2-4) were calculated using double-logarithm regression curves, the Debye-Hückel limiting law and the van't Hoff equation, respectively, based on the fluorescence quenching data at different conditions [2,3].
Table 7. The binding rate constants k and corresponding statistical parameters for the BSA-AR1/AG50 system at different conditions. | 1,145.2 | 2016-06-23T00:00:00.000 | [
"Chemistry"
] |
IMPLEMENTATION OF SCIENTIFIC APPROACH TO INCREASING JUNIOR HIGH SCHOOL STUDENTS' MATHEMATICAL PROBLEM SOLVING ABILITY
This research aims to analyze the learning of class VIII students at Nur Al Rahman Middle School in Cimahi City on the system of two-variable linear equations using a scientific approach. In learning mathematics with a scientific approach, especially on this material, it is hoped that students can be active, creative, skilled and of noble character, and can find and solve problems. The research method used was classroom action research, with 15 class VIII students as subjects. The research was conducted over 2 cycles, with data collection by observation, documentation and essay tests matched to the indicators of mathematical problem-solving ability. The increase in students' mathematical problem-solving ability can be seen from the percentage achieved on the problem-solving tests across the 2 cycles: 33% in cycle I and 80% in cycle II. This means that learning mathematics on the system of two-variable linear equations using a scientific approach has a positive influence on students' mathematical problem solving.
INTRODUCTION
Education is an effort to form human resources that can improve the quality of life. Therefore, improving the quality of education is something that must be done continuously. In the process of learning science, mathematics is considered the queen or mother of knowledge, that is, the source of other knowledge. In other words, the discoveries and developments of many sciences depend on mathematics (Aisyah, Yuliani, & Rohaeti, 2018).
In Indonesia, education is given top priority. This is marked by the government's effort to allocate 20% of the State Revenue and Expenditure Budget (APBN) to education, which refers to the 1945 Constitution, article 31 paragraph 1, stating that every citizen has the right to education. Mathematics is used by everyone, be they housewives, employees, traders or others; mathematics is part of education and is one of the compulsory subjects in schools. This is in line with the statement of (Aripin, 2015) that mathematics is a human activity: everyone carries out mathematical activities, from housewives, traders, employees and students to mathematicians, according to their individual needs. In mathematics, students must be able to master the related concepts, so that they can understand the subject matter and think creatively in solving the problems they face (Aripin & Purwasih, 2017). One of the objectives of learning mathematics in the 2013 curriculum (Kurtilas) and KTSP 2006 is understanding mathematical concepts and their relationships and applying them in various problem-solving situations precisely and thoroughly (Hendriana, 2017). One of the key abilities in mathematics is the ability to solve problems (Hidayat & Sariningsih, 2018).
Problem-solving ability is one of the objectives of learning mathematics that must be achieved by students. In everyday life, consciously or unconsciously, we are faced with various problems that require problem-solving skills. By solving problems, students learn to devise appropriate strategies for the problems they face. Problem solving is considered the heart of learning mathematics, in line with Burchartz & Stein's statement (in Yazgan, 2015) that problem solving always plays an important role, because all creative mathematical activities require problem-solving actions.
To find out students' mathematical problem-solving abilities, they must be faced with mathematical problems (mathematics questions). When facing math problems, students will try to solve them using all the schemas they possess. Problem solving involves interaction between the schemas (knowledge) possessed by students and application processes that use cognitive and affective factors (Webb, 1979).
The system of two-variable linear equations (SPLDV) is one of the mathematics topics studied at the junior high school level, especially in class VIII. It is material that students must understand, because SPLDV is a prerequisite for material studied later. However, the research of Rahayu Ningsih and Qahar shows that students' ability to solve SPLDV problems, especially word problems, is lower than expected (Indriani, 2018). Therefore, an appropriate approach is needed that can increase student motivation and learning outcomes in SPLDV material.
The researcher conducted interviews with the mathematics teacher at SMP IT Nur Al Rahman in Cimahi City. From these interviews, the teacher complained that the teaching given during the learning process was not focused on students, so students had difficulty understanding the material that had been explained. In SPLDV material, students are required to try to solve problems independently, whether in contextual form or as mathematical models, because when solving SPLDV problems students must construct their own understanding in order to choose the right method for a given problem. The learning model applied by the teacher was a lecture model, so students were less active during class learning. In order to achieve the goals of learning mathematics, one of which is the ability to solve mathematical problems, it is necessary to provide new innovations in the student worksheets (LKPD) that aim to construct students' knowledge. The innovation in the LKPD is the use of a learning model or strategy as the basis for developing it. LKPD will be more effective if it is based on a learning model or strategy aimed at increasing students' problem-solving abilities and teaching how to solve a problem. One such model or strategy is the scientific approach (Zulfah et al., 2018). According to Polya (in Lismaya, 2019), the model used in learning mathematics that is very important in developing mathematical abilities is a scientific approach.
Hosan (in Septiety & Wijayanti, 2020) revealed that the scientific approach is a set of stages specifically designed to make students active in learning by constructing their own understanding. Meanwhile, according to Daryanto (in Najmul & Wadi Hairil, 2020), the scientific approach has several objectives, which include (1) assessing students' intellectual abilities, (2) directing students to be able to solve mathematical problems, (3) instilling the importance of learning in students, (4) maximising learning outcomes, (5) training students' self-confidence and (6) developing student character.
Based on the purpose of the scientific approach, namely solving mathematical problems, it can be said that the scientific approach is used to assist students in understanding the material of the system of two-variable linear equations (SPLDV). This is in accordance with the results of research (Azzahro, Dian, & Sujiran, 2019) which revealed that a scientific approach can make it easier for students to construct understanding in solving mathematical problems, especially the material of the two-variable linear equation system (SPLDV).
Polya, in (Marlina, 2011), defines four steps that can be taken so that students are more focused in solving mathematical problems: understanding the problem, devising a plan, carrying out the plan, and looking back at the results obtained. According to (Sukayasa, 2012), with Polya's steps students become accustomed to working on questions that do not rely only on good memory; students are expected to be able to relate them to real situations they have experienced or thought about. Students can also develop traits that appreciate the usefulness of mathematics in life, namely curiosity, attention and interest in learning as well as tenacity and confidence in problem solving. According to (Sukayasa, 2012), Polya's problem-solving phases are more popularly used in solving mathematical problems than others, perhaps because: (1) the phases in the problem-solving process that Polya proposed are quite simple; (2) the activities in each phase stated by Polya are quite clear; and (3) the phases of problem solving according to Polya are commonly used in solving mathematical problems.
Based on the background of the problems above, the research question is: can a scientific approach improve the mathematical problem-solving skills of IT junior high school students in Cimahi on SPLDV material? From this formulation, the goal of the research is to find out whether a scientific approach can improve the mathematical problem-solving skills of SMP IT students in Cimahi City on SPLDV material.
METHOD
This research is Classroom Action Research (PTK/CAR). It is used to provide the right strategy for delivering material so as to motivate students to remain active while learning takes place. The model used in this Classroom Action Research is the spiral model. Kemmis & Taggart (in Arikunto, 2006) explain that the spiral model consists of four stages, namely planning, implementing actions, observing and reflecting. The stages that must be achieved by students for each indicator are: 1. The stage of understanding the problem, which aims to determine students' ability to understand the problem, collect information about it, and then convert it into a mathematical model to determine the required value. 2. The stage of planning the solution, which aims to determine students' ability to choose an appropriate strategy and then develop procedures for solving the problem. 3. The stage of carrying out the plan, which aims to determine students' ability to execute the plans that have been prepared to obtain the right results. 4. The stage of checking back, which aims to determine students' ability to identify the problem, review all the information, and draw conclusions about the solution.
The instruments used in the test were as follows:
Problem 1: Bayu was told by his mother to go to the market to buy two types of fish, single fish and tuna; his mother only gave him Rp. 30,000 and all of it had to be spent on the two types of fish. At a fish stall, Bayu found the following prices: the price of 6 single fish and 3 tuna is Rp. 24,000, and the price of 8 single fish and 2 tuna is Rp. 20,000. If each fish of a given type is the same size, how many fish of each type can Bayu buy? (A worked sketch of this price system is given after the problem list.)
Problem 2: A trader sells all the skipjack tuna and mangrove crabs he gets for IDR 600,000. The price of 2 mud crabs is IDR 12,000 and the price of 3 skipjack tuna is IDR 60,000. If he only sells 2/5 of the mangrove crabs and 1/3 of the skipjack tuna, he can collect IDR 110,000. How many skipjack tuna and how many mangrove crabs did the trader sell?
Problem 3: The ratio of the number of women to the number of men attending the launching ceremony of a motorboat is 2 : 5. If 6 of the men present leave before the event is finished, the ratio of women to men present becomes 1 : 2. How many people attended the ceremony before anyone left the event?
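For illustration, a minimal sketch (not the study's answer key) of how the price system in Problem 1 can be modelled and solved, mirroring the mathematical-model/elimination/substitution strategies discussed later in the paper:

```python
# Minimal sketch (illustrative only): set up and solve the price system from Problem 1.
import numpy as np

# 6x + 3y = 24000 and 8x + 2y = 20000, where x and y are the unit prices.
A = np.array([[6.0, 3.0],
              [8.0, 2.0]])
b = np.array([24000.0, 20000.0])

x, y = np.linalg.solve(A, b)
print(x, y)   # 1000.0 6000.0 -> Rp 1,000 per single fish, Rp 6,000 per tuna
```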
This problem-solving ability scoring sheet was prepared by the researchers, adapted from the TPMM scoring rubric and referring to Polya's problem-solving indicators (Marlina, 2011). The data analysis technique was carried out in 3 (three) stages, namely checking the results of student answers, presenting the test data, and drawing conclusions. To find out the percentage for each type of answer, the following formula is used:

$$Q = \frac{n}{N} \times 100\%$$

where Q is the percentage, n is the total score obtained and N is the maximum score.
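A minimal sketch of how this scoring formula can be applied; the scores below are hypothetical examples chosen only to reproduce the 33% and 80% levels reported in the abstract, not the study's raw data:

```python
# Minimal sketch of the scoring formula Q = n / N * 100%; scores are hypothetical.

def percentage(total_score, max_score):
    """Q = n / N * 100 (percentage of the maximum attainable score)."""
    return 100.0 * total_score / max_score

print(percentage(99, 300))    # 33.0  (cycle I level reported in the abstract)
print(percentage(240, 300))   # 80.0  (cycle II level reported in the abstract)
```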
Results
This study used classroom action research consisting of 2 cycles, where each cycle comprised the stages of planning, implementing the action, observing the results of the action and reflecting. At the planning stage of cycle I, a field survey was carried out to see what material was being studied and what model was used by the teacher; the survey showed that the material being studied was the Two-Variable Linear Equation System (SPLDV). After that, the class action was carried out by teaching SPLDV material with a scientific approach and administering essay tests in accordance with the indicators of mathematical problem-solving ability. Based on the analysis, Table 1 shows that the highest percentages tended to be at the low ability level. At the stage of understanding the problem, 40.00% of students were at a relatively low ability level, because students' answers tended to go straight to solving the problem without first writing down what they understood. In devising a plan, the proportion of students at the low ability level was higher, namely 66.66%, which shows that students had not yet been able to develop plans to use in solving problems. The percentage of students at the low ability level at the stage of carrying out the plan was also 66.66%, and no students were at the high ability level. For looking back (checking again), the percentage at the low level was very large at 80.00%. The following is one of the test results of a cycle I student at a low ability level.
Figure 2. Results of cycle I student answers
As shown in Figure 2, the results in cycle I were still not satisfactory; students still could not solve the problems on the test questions given. It can be seen that students could only achieve the indicator of understanding the problem but could not continue with their answers, so the other indicators of mathematical problem solving were not achieved. Students classified as having high ability revealed that, at the stage of carrying out the plan, high accuracy is needed, because even a small mistake can make the entire answer wrong. Students with moderate ability revealed that at this stage there were many steps that had to be taken, e.g. converting into a mathematical model, eliminating and substituting, and that a mistake in converting a verbal sentence into a mathematical model can make the final result entirely wrong. Reflection was then carried out and a second cycle conducted in order to obtain satisfactory results, so that the researchers could see how far students' problem-solving abilities developed using this scientific approach on the Two-Variable Linear Equation System (SPLDV) material.
In the second cycle, the researcher provided LKPD with solution steps using a scientific approach to the SPLDV material. During the lessons, students were asked to work in groups, and while working on the LKPD they were assisted by the researchers. After the lessons, the students took another essay test in accordance with the indicators of mathematical problem-solving ability, and the cycle II results were obtained. The following table shows the percentage of students at each level of problem-solving ability in cycle II based on the Polya model. Here it can be seen that the students experienced an increase in mathematical problem-solving ability according to the given indicators. The highest percentages tended to be at the high ability level: at the stage of understanding the problem, 80.00% of students were at a relatively high ability level, because students were able to begin solving problems by first writing down what they understood. In devising a plan, the proportion of students classified as high ability rose to 53.33%, which shows that students were able to develop plans to use in solving problems. The percentage of students at the high ability level at the stage of carrying out the plan was also relatively higher, namely 66.66%, although some students remained at the medium and low levels. The percentage of students who checked back was at a moderate level, namely 46.66%. The following is one of the test results of a cycle II student.
Figure 3. Results of cycle II student answers
Figure 3 above shows the results of students' work on the Two-Variable Linear Equation System (SPLDV) material. It can be seen that students were able to work on the questions in accordance with the given indicators, so there was an improvement in students' work on these questions compared to the results in cycle I. Because the results of cycle II improved on those of cycle I, this research was concluded after cycle II.
Discussions
From the data shown above, based on observations of the learning outcomes of IT Nur Al Rahman Middle School students, students were spread across high, medium and low ability levels, as can be seen from the percentages in the table above. This shows that learning with a scientific approach on SPLDV material in the first cycle was unsatisfactory because it did not meet expectations: there were more students at the low ability level than at the high ability level.
In the next stage, after the unsatisfactory learning in the first cycle, the researcher reflected on and evaluated the cycle I learning and concluded that there were weaknesses in cycle I and that many students were still at a low ability level. Because the test results showed that a large percentage of students had not achieved mastery according to the problem-solving indicators, the researchers refined the scientific approach used in the learning process. All of the deficiencies found in cycle I could then be addressed in cycle II.
This was done because the researchers felt that in the cycle I learning process many students were at a low level of ability, and many lacked motivation to learn and felt bored with the learning approach they were used to receiving from their mathematics teachers, so the results obtained for problem-solving skills in cycle I were unsatisfactory, as seen from the percentage of students who were still at a low level of ability. Therefore, the researchers felt the need for improvement in cycle II to eliminate the boredom experienced by students and to increase their motivation to learn. For the SPLDV material, the goal is to obtain better student learning outcomes on the two-variable linear equation system, so further action was needed to achieve the expected final result, namely an increase in problem-solving skills on SPLDV word problems.
Based on reflection on the actions of cycle I and discussions with the class teacher, the researcher and the teacher concluded that the grade VIII IT Nur Al-Rahman Middle School students actually had some understanding of this material, but the teacher said that the students' lack of seriousness and interest in learning caused the results to be less than satisfactory. From that rationale, the researchers and the teacher agreed that the right approach was needed so that students' interest in learning would increase. The researcher took the initiative to strengthen the approach to learning, namely the scientific approach, and, so that the learning outcomes that fell short of expectations in the previous cycle would improve, finally took the initiative to add learning videos to complement the scientific approach that had been applied in the first cycle.
The test results show several solution plans written by students for the problems given. On average, students solved the problems with strategies of making mathematical models, eliminating and substituting. Problem-solving strategies and steps based on the Polya model had not previously been taught to the students, so they did not follow Polya's four stages of problem solving. Students tended to plan and carry out the solution directly without writing down what they understood from the problem, while some students did not first write down how they planned the solution. Overall, students could solve problems using the Polya model, and solving problems with the Polya model made it easier for them. This is relevant to research conducted by (Komariah, 2011), in which overall student activity during learning through the application of Polya-model problem solving was in the good category.
Based on the data from the analysis of answers to written tests and interviews, it was shown that in general, students of all levels of mathematical ability were able to understand the problem very well, both students with high (T), medium (S) and low (R) abilities were able to meet the ability indicators problem solving, namely understanding the problem very well, being able to identify the elements that are known, the elements that are asked, and the adequacy of the elements needed to solve mathematical problems in SPLDV material.So as to be able to mention what is known, and what is asked.This is relevant to research conducted by (Rahmawati, 2017) on the application of the Polya model and is in a very good category.
Consistent with these observations, (Najmul & Wadi Hairil, 2020) likewise reported that overall student activity during learning through the application of Polya-model problem solving was in the good category.
Based on the researcher's analysis, the learning process and student learning outcomes improved when using a scientific approach. The initially low mathematical problem-solving ability of students in SPLDV material was caused by teaching that was not focused on students, so students found it difficult to understand the material the teacher had explained. In SPLDV material, students are required to try to solve problems independently, whether in contextual form or as mathematical models, because when solving SPLDV problems students must construct their own understanding in order to choose the right method for a given problem. We can see from Tables 1 and 2 that there was an increase in students' ability to solve SPLDV problems.
CONCLUSION
Based on the results of the research and discussion of the actions in the two cycles, we can see that the scientific approach can improve students' mathematical problem-solving abilities. This is evidenced by the percentages: in cycle I all indicators were at a low level, while in cycle II all indicators increased to a high level.
Figure 1. Research Spiral Model from Kemmis & Taggart.
This type of research is descriptive qualitative, analyzing students' abilities in solving problem-solving questions in order to obtain an overview of students' mathematical problem-solving abilities. The subjects in this study were 15 class VIII students at SMP IT Nur Al-Rahman Cimahi. The research was carried out in November, in semester I of the 2022/2023 academic year. The research instruments consisted of: (1) a test containing description (essay) questions, checked for validity, reliability and discriminating power, chosen to collect data on students' problem-solving abilities; (2) a mathematical disposition questionnaire; and (3) observation carried out by the researchers during the learning process to observe the implementation of mathematics learning. Student answers were analyzed through four indicators, namely: (1) indicators of understanding the problem, including (a) knowing what is known and asked in the problem and (b) explaining the problem in their own words; (2) indicators of making plans, including (a) simplifying the problem, (b) being able to make experiments and simulations, (c) being able to find sub-goals (things that need to be found before solving the problem) and (d) sorting information; (3) indicators of carrying out the plan, including (a) interpreting the given problem as mathematical sentences and (b) implementing strategies during the process and calculations; (4) indicators of looking back, including (a) checking all the information and calculations involved, (b) considering whether the solution is logical, (c) looking for alternative solutions, (d) re-reading the question and (e) asking oneself whether the question has been answered.
The next stage was to observe the class action by examining student answers, followed by reflection. The following table shows the percentage of students at each problem-solving ability level based on the Polya model.
Table 2. Ability Level of Cycle II Students
Based on Table 2, we can see that students' results in the SPLDV learning evaluation test increased. | 5,871.6 | 2024-03-12T00:00:00.000 | [
"Mathematics",
"Education"
] |
Endothelial EphrinB2 Regulates Sunitinib Therapy Response in Murine Glioma
Vascular guidance is critical in developmental vasculogenesis and pathological angiogenesis. Brain tumors are strongly vascularized, and antiangiogenic therapy was anticipated to exhibit a strong anti-tumor effect in this tumor type. However, specific inhibition of vascular endothelial growth factor A (VEGFA) had no significant impact in the clinical practice of gliomas, and more research is needed to understand the failure of this therapeutic approach. EphrinB2 has been found to interact directly with vascular endothelial growth factor receptor 2 (VEGFR2) and regulate its activity. Here we analyzed the expression of ephrinB2 and EphB4 in human glioma, observed vascular localization of ephrinB2 in physiology and pathology, and found a significant survival reduction in patients with elevated ephrinB2 tumor expression. Induced endothelial-specific depletion of ephrinB2 in the adult mouse (efnb2 i∆EC) had no effect on the quiescent vascular system of the brain. However, glioma growth and perfusion were altered in efnb2 i∆EC animals, similar to the effects observed with antiangiogenic therapy. No additional anti-tumor effect was observed in efnb2 i∆EC animals treated with antiangiogenic therapy. Our data indicate that ephrinB2 and VEGFR2 converge on the same pathway and that intervention with either molecule results in a reduction in angiogenesis.
Introduction
Gliomas are brain tumors derived from brain glia cells and represent 80% of primary brain malignancies [1]. Treatments are slowly advancing, and primary care consists of maximal surgical resection in combination with radiation and chemotherapy [2,3]. Gliomas are strongly vascularized and show infiltrative behavior along blood vessels [4]. Therefore, anti-vascular therapy was hypothesized to be highly effective in these tumors [5]. The high expectations for antiangiogenic therapy in glioma were unable to be upheld, as shown by evidence from multiple clinical trials in which treatment resistance was highly abundant [6][7][8][9][10][11]. As such, further research is needed to identify mechanisms of resistance. Preclinical data have identified the involvement of the cell-cell interaction surface receptor EphB4 [12,13]. The EphB4-ephrinB2 receptor-ligand interaction is essential during development, as the genetic elimination of either gene results in embryonic lethality [14][15][16]. Moreover, ephrinB2 was identified to play a critical role in the regulation of VEGFR2 signaling during vasculogenesis and angiogenesis [17]. The ephrinB2 PDZ domain is required to control VEGFR2-ephrinB2 dimerization and internalization in endothelial cells [17]. Hence, ephrinB2 regulates VEGFR2's main downstream signaling pathway. These findings indicate that antiangiogenic therapy and ephrinB2 blockage converge on the same downstream pathways. We therefore hypothesized that genetic ephrinB2 depletion and antiangiogenic therapy result in similar phenotypes. Additionally, we aimed to investigate additive anti-tumor effects of genetic ephrinB2 depletion and antiangiogenic therapy.
Mouse Breeding and Knockout Induction
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Tamoxifen-inducible, endothelial-specific ephrin-B2 knockout mice (efnb2 i∆EC ) and efnb2 lox/lox littermates were described previously [25]. Knockout was induced in adult animals of both sexes according to the Jackson Lab guideline, using five sequential injections of 2 mg tamoxifen/mouse/day (T5648, Sigma-Aldrich, Burlington, MA, USA). Five days after the final injection, the animals were prepared for surgery. Control animals (efnb2 lox/lox ) received identical treatment.
Fluorescent Microscopy and Image Analysis
The slices were imaged using a Zeiss fluorescence microscope with AxioVision software (Observer Z1; EC Pln N 5×/0.16 objective). Analysis of the sections was based on the CellProfiler pipeline developed by Tian et al. [25]. The analysis pipeline is provided in the Supplemental Material.
Orthotopic Intrastriatal Implantation Model
Animals of both sexes, 8-11 weeks old, were obtained from the FEM (in-house animal facility). Anesthesia was induced by i.p. injection of an aqueous solution of 70 mg/kg ketamine hydrochloride (Ketavet, Zoetis, Ilford, UK) and 16 mg/kg xylazine (Rompun 2%, Bayer, Berlin, Germany). The animals were mounted in a stereotactic frame, the skull was exposed, and a hole was drilled with a 23G needle (1 mm rostral/2 mm lateral of bregma). One microliter of tumor cell suspension (2 × 10⁴ cells/µL in PBS) was incrementally injected over 5 min with a Hamilton syringe lowered 3 mm into the brain parenchyma. After the injection, the syringe was retracted slowly over five minutes. Orthotopically implanted GL261 cells have been described to form tumors with high-grade glioma characteristics in immunocompetent mice [27][28][29][30]. The incision was closed and the animal was allowed to recover on a heating plate. Mice were injected intramuscularly with phenoxymethylpenicillin (InfectoCilin, 5 Mega, InfectoPharm Arzneimittel und Consilium GmbH, Heppenheim, Germany) and the drinking water was enriched with tramadol.
MRI Quantification of Tumor Growth Dynamics
The MR imaging protocol has been described previously [26]. Briefly, we performed volumetric measurements of tumor mass 7-, 14-and 19-days post implantation using a 7 Tesla rodent MRI (PharmaScan 70/16 US, Bruker BioSpin MRI GmbH, Paravision 5.1, Ettlingen, Germany). A T2-weighted sequence was used showing the brain from olfactory bulb to cerebellum. Volumetric analysis was performed using ImageJ software.
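For illustration, a minimal sketch of how a slice-wise volumetric estimate of this kind can be computed; this is not the authors' ImageJ workflow, and the slice areas and thickness below are hypothetical.

```python
# Minimal sketch, not the authors' ImageJ workflow: tumor volume approximated as
# the sum of manually segmented slice areas multiplied by the slice thickness.
import numpy as np

def tumor_volume_mm3(slice_areas_mm2, slice_thickness_mm):
    """Approximate volume (mm^3) from per-slice segmented areas (mm^2)."""
    return float(np.sum(slice_areas_mm2) * slice_thickness_mm)

# Hypothetical example: five consecutive T2-weighted slices, 0.5 mm thick.
areas = [1.2, 2.8, 3.5, 2.9, 1.1]          # mm^2 per slice
print(tumor_volume_mm3(areas, 0.5))         # 5.75 mm^3
```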
Antiangiogenic Treatment: Sunitinib Therapy
Sunitinib (Pfizer, New York, NY, USA) dissolved in dimethyl sulfoxide (DMSO) was administered intraperitoneally (i.p.) at 80 mg/kg bodyweight/day for 5 or 6 consecutive days, as described previously [31]. Temodal therapy experiments in murine glioma suggest optimal efficacy on tumor growth when therapy is administered during the period of exponential growth [32]. Placebo animals were injected i.p. with a weight-adjusted volume of DMSO.
Intravital Epi-Illuminating Fluorescence Video Microscopy in a Chronic Cranial Window Model
Cranial window surgery was described previously [12,26]. Briefly, general anesthesia was induced. The animal was placed in a stereotactic frame and an approximately 5 mm diameter craniotomy in the central region of the calvaria anterior of the bregma was performed. The dura mater was removed, and the spheroid was placed subdurally onto the cortical surface. The glass window was sealed and the skin was closed with single sutures. The animal was allowed to recover for 9 days. On days 9, 12 and 14 post surgery, the newly formed, invading vasculature in the circumfluence of the tumor spheroid was imaged as described previously [26].
Intravital Epi-Illuminating Fluorescence Data Analysis
The CapImage 8.0 software (CAPIMAGE-Cyntel-Software-Engineering, Heidelberg, Germany) was used to analyse the tumors' marginal and central areas separately in 4-5 observations. With computer assistance, the microcirculatory characteristics of the newly formed tumor vessels were quantified as total vascular density (TVD, cm/cm²), functional vascular density (FVD, cm/cm²) and perfusion index (PI = FVD/TVD).
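A minimal sketch of these microcirculatory read-outs, assuming vessel lengths are traced within a field of known area; the numbers are hypothetical and this is not the CapImage implementation.

```python
# Minimal sketch of the read-outs defined above; not the CapImage implementation.

def vascular_density(vessel_length_cm, field_area_cm2):
    """Vessel length per unit observation area (cm/cm^2)."""
    return vessel_length_cm / field_area_cm2

def perfusion_index(fvd, tvd):
    """PI = FVD / TVD, the perfused fraction of the visualized vascular network."""
    return fvd / tvd if tvd > 0 else float("nan")

# Hypothetical field of 0.25 cm^2 with 60 cm of vessels traced, 45 cm of them perfused.
tvd = vascular_density(60.0, 0.25)   # total vascular density      -> 240.0 cm/cm^2
fvd = vascular_density(45.0, 0.25)   # functional vascular density -> 180.0 cm/cm^2
print(tvd, fvd, perfusion_index(fvd, tvd))   # 240.0 180.0 0.75
```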
Randomization and Statistical Evaluation
The animals were randomly allocated to efnb2 i∆EC or control. The experimenter was blinded for MRI volumetry and for intravital fluorescence video microscopy. Therapy application was not blinded (a technical limitation of the orange-colored sunitinib solution). Statistical analysis was performed with GraphPad Prism 9 (San Diego, CA, USA). For statistical testing, one-way ANOVA and two-way ANOVA with Sidak's multiple comparison test were used. Results are presented as mean and standard deviation unless stated otherwise; p < 0.05 is considered significant.
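For reference, a minimal sketch of the Sidak adjustment named above (p_adj = 1 − (1 − p)^m for m comparisons); the p-values shown are hypothetical, and this is not GraphPad's implementation.

```python
# Minimal sketch of a Sidak correction for multiple comparisons; p-values are hypothetical.

def sidak_adjust(p_values, n_comparisons=None):
    """Sidak-adjusted p-values: p_adj = 1 - (1 - p)**m for m comparisons."""
    m = n_comparisons if n_comparisons is not None else len(p_values)
    return [min(1.0, 1.0 - (1.0 - p) ** m) for p in p_values]

raw_p = [0.012, 0.004, 0.20]
print(sidak_adjust(raw_p))   # [~0.0356, ~0.0120, ~0.4880]
```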
Expression Analysis of EFNB2 and EPHB4 in Physiology
We first analyzed the expression of EFNB2 and EPHB4 in normal brain tissue. EFNB2 expression gradually declines with age, whereas EPHB4 expression increases just before birth (35-36 pwc, Figure 1a). The murine expression pattern during development is comparable, which makes the mouse an adequate model system to study ephrinB2 and EphB4 (Figure 1b). Histological staining of ephrinB2 showed strong expression in endothelial cells (red arrow) and neurons (pink arrow), as previously described (Figure 1c) [17,33]. EphB4 expression in human tissue was primarily located on glial cells (yellow arrow) and some neurons (pink arrow, Figure 1c). In accordance with previous findings, expression analysis of endothelial cells of different vascular compartments revealed high EFNB2 expression in arterial endothelial cells (aEC) and elevated expression of EPHB4 in venous endothelial cells (vEC, Figure 1d) [34]. Interestingly, metabolically active endothelial cells (EC1-3) show elevated levels of EPHB4. Based on this finding we performed a metabolomics search that identified only one metabolic pathway associated with EPHB4: the well-known protein tyrosine phosphate modifications induced after EPHB4 activation (Figure S1a). Tamoxifen-induced, endothelial-specific EFNB2 knockout in the adult mouse resulted in no changes to the vascular morphology of naïve brain tissue (Figure 1e). The area occupied by vessels, the average diameter and the circumference of the CD31-positive vessels remained similar (Figure 1e).
Expression Analysis of EFNB2 and EPHB4 in Pathology
We screened different cancer entities for the expression of EPHB4 and EFNB2. While most cancers show an upregulation of both genes, in glioma significant upregulation was found for EPHB4 but not EFNB2 (Figure 2a, detailed plotting available in Figure S1b). Histological analysis of glioma tissue shows increased global expression of EFNB2 and some EPHB4 expression in tumor cells (Figure 2b). Increased EFNB2 expression in glioma was associated with a significant reduction in patient survival, whereas EphB4 had no effect on patient survival (Figures 2c and S1c). Histological detection of ephrinB2 in glioma was very broad (Figure 2b); detailed analysis of different tumor compartments shows variable expression of ephrinB2 and EphB4. Endothelial cell-rich tumor compartments, specifically hyperplastic blood vessels and microvascular proliferative areas, show increased EFNB2 and EPHB4 expression (Figure 2d). These endothelial cell-driven EFNB2 expression changes were also apparent in a different dataset, where increased endothelial cell infiltration was positively correlated with EFNB2 expression (Figure 2e). Here, EPHB4 expression showed no significant changes depending on endothelial cell infiltration (Figure 2e). Based on the previous literature, ephrinB2 and VEGFR2 are known interaction partners and their dimerization is necessary for VEGFR2 internalization and angiogenic signaling [17]. We therefore examined the correlation of FLT1 and KDR expression (the genes encoding VEGFR1 and VEGFR2, respectively) with EFNB2 and found a positive correlation for both genes in human samples (Figure 2f).
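The association analysis referenced here (Figure 2e,f) is reported in the figure legend material as a partial Spearman's correlation adjusted for tumor purity. Below is a minimal sketch of one way such an adjusted correlation can be computed, assuming a rank-then-residualize approach; the variable names and data are illustrative only.

```python
# Minimal sketch of a partial Spearman correlation (rank-transform, regress out the
# covariate, correlate residuals). All data below are synthetic toy values.
import numpy as np
from scipy.stats import rankdata, pearsonr

def partial_spearman(x, y, covar):
    """Spearman correlation between x and y controlling for covar."""
    rx, ry, rc = (rankdata(np.asarray(v), method="average").astype(float)
                  for v in (x, y, covar))
    design = np.column_stack([np.ones_like(rc), rc])
    res_x = rx - design @ np.linalg.lstsq(design, rx, rcond=None)[0]
    res_y = ry - design @ np.linalg.lstsq(design, ry, rcond=None)[0]
    rho, p = pearsonr(res_x, res_y)   # p-value is approximate (ignores covariate df)
    return rho, p

# Hypothetical toy data: EFNB2 expression, endothelial-cell fraction, tumor purity.
rng = np.random.default_rng(0)
purity = rng.uniform(0.3, 0.9, 200)
ec_fraction = 1.0 - purity + rng.normal(0.0, 0.05, 200)
efnb2 = 2.0 * ec_fraction + rng.normal(0.0, 0.2, 200)
print(partial_spearman(efnb2, ec_fraction, purity))
```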
Changed Glioma Growth Kinetics and Sunitinib Therapy Response after EFNB2 Knockout in Endothelial Cells
We continued to investigate endothelial expression of ephrinB2 in brain pathology. To this end, we injected GL261 murine glioma cells intrastriatally in control efnb2 lox/lox and efnb2 i∆EC animals and performed T2-weighted MRI 7, 14 and 19 days after tumor cell injection in combination with antiangiogenic therapy (experimental overview Figure 3a). We observed different tumor growth kinetics in efnb2 i∆EC animals (Figure 3b). Growth was found to be linear, independent of the treatment group, in efnb2 i∆EC animals. In control animals, glioma growth was exponential in the placebo group, whereas in the sunitinib group this growth kinetic was interrupted by therapy induction (Figure 3c). A significant reduction in tumor growth was found in the sunitinib efnb2 lox/lox group.
Figure 1 caption (fragment): OL-Oligodendrocytes; EC-Endothelial cells; AC-Astrocytes; v-venous; c-capillary; a-arterial; aa-arteriolar; 1,2,3-subtypes according to [23,24]. (e) Brain endothelial cells show no change in vascular morphology after ephrinB2 knockout. (f) Automated quantification of the area occupied, diameter and circumference showed no change in vascular characteristics after induced endothelial ephrinB2 knockout (efnb2 lox/lox vs. efnb2 i∆EC , Student's two-sided unpaired t-test).
Figure 2 caption (fragment): On the other hand, EphB4 expression had no effect on the glioma infiltration behavior of endothelial cells. Tumor purity is a major confounding factor in this analysis since most cell types are negatively correlated with tumor purity; therefore, the partial Spearman's correlation was used to perform this association analysis (p > 0.05 = not significant; p < 0.05, rho < 0 = negative correlation; p < 0.05, rho > 0 = positive correlation). (f) KDR and FLT1 correlate with the expression of ephrinB2; the partial Spearman's correlation was used to perform this association analysis (p > 0.05 = not significant; p < 0.05, rho < 0 = negative correlation; p < 0.05, rho > 0 = positive correlation).
[Figure 3 legend fragments: linear growth expansion of GL261 glioma was observed after endothelial-specific ephrinB2 knockout (efnb2 i∆EC), whereas exponential growth characteristics were found in control animals (efnb2 lox/lox); the size difference between placebo efnb2 lox/lox and sunitinib efnb2 lox/lox tumors (**** p < 0.0001) and between sunitinib efnb2 lox/lox and placebo efnb2 i∆EC tumors (* p = 0.0107) was significant (two-way ANOVA, Sidak post hoc multiple comparisons test).]
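A minimal sketch of how the linear versus exponential growth characterization could be checked numerically (the tumor volumes and timepoints below are invented placeholders, not the study's measurements):

```python
import numpy as np

# Hypothetical tumor volumes (mm^3) at the MRI timepoints; placeholders only.
days = np.array([7.0, 14.0, 19.0])
volume = np.array([4.0, 18.0, 55.0])

# Linear model: V = a*t + b.  Exponential model: ln(V) = a*t + b.
lin_c = np.polyfit(days, volume, 1)
exp_c = np.polyfit(days, np.log(volume), 1)

def r2(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

r2_linear = r2(volume, np.polyval(lin_c, days))
r2_exponential = r2(np.log(volume), np.polyval(exp_c, days))
print(f"R^2 linear: {r2_linear:.3f}, R^2 exponential: {r2_exponential:.3f}")
# The better-fitting model indicates the growth regime; for the exponential
# model the doubling time is ln(2) / exp_c[0].
```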
Functional Characterization of Glioma Angiogenesis in efnb2 i∆EC Animals with Sunitinib Therapy
Nine days after frontal cortical placement of a GL261 tumor spheroid, the angiogenic phenotype was investigated using intravital epifluorescence microscopy. Similar to the intrastriatal tumor injection experiments, control and efnb2 i∆EC mice received antiangiogenic or placebo therapy. Considering the increased number of cells used for inoculation in these animals, we started the therapy 9 days post implantation. Baseline imaging was performed at day 9, and consecutive observations followed at days 12 and 14 (experimental protocol in Figure 4a). We visualized changes in vascular density, perfusion, blood velocity, blood flow and diameter (Figure 4b and Figure S2). Apart from the changes in vascular diameter, efnb2 i∆EC resembled most of the vascular phenotypes observed in the control antiangiogenic therapy group (Figure 4c-e). Moreover, sunitinib had an add-on effect in the ephrinB2 knockout animals in the total vascular density (TVD) measurements (Figure 4c). Vascular morphology (diameter) revealed interesting dynamics: placebo-treated efnb2 i∆EC animals showed a significant increase in diameter 12 days after spheroid implantation and a subsequent reduction 2 days later, compared to the relatively unchanged average diameter in efnb2 i∆EC animals and placebo control animals (Figure 4f). Sunitinib therapy in controls resembled the well-characterized resistance formation phenotype, with large vessel diameters 14 days post-therapy after an initial reduction in blood vessels 3 days post-therapy (Figure 4f). [Figure 4 legend fragments: TVD quantified in control and efnb2 i∆EC animals revealed a significant difference in placebo control tumors 12 and 14 days after implantation (*** p = 0.0005 and **** p < 0.0001); sunitinib therapy showed a significant effect in control animals at both timepoints (*** p = 0.0003 and **** p < 0.0001); sunitinib therapy in efnb2 i∆EC animals significantly reduced TVD at day 14 compared to sunitinib efnb2 lox/lox and placebo efnb2 i∆EC animals (* p = 0.036 and * p = 0.0229); reduced TVD in placebo control animals was found on days 12 and 14 after tumor implantation (*** p < 0.0001 and **** p < 0.0001); (d) functional vascular density (fVD) showed a significant therapeutic difference in control tumors after 12 days (*** p = 0.0005) and recovery after 14 days; endothelial ephrinB2 depletion (efnb2 i∆EC) reduced fVD 12 and 14 days after surgery (*** p = 0.001 and ** p = 0.0072), with an add-on effect observed under sunitinib therapy (day 12: **** p < 0.0001, day 14: **** p < 0.0001).]
Discussion
Based on TCGA and other public datasets, we identified ephrinB2 as a driver of gliomagenesis. In-depth analysis revealed that invading endothelial cells express high amounts of ephrinB2. These findings align well with the established vascular phenotype in glioma induced by VEGFA secretion and the observed vascular VEGFR2-regulating function of ephrinB2 [17]. We further strengthened this argument by correlating the expression of the two VEGFR2 subunit genes, FLT1 and KDR, with the expression of EFNB2 in glioma.
In vitro analysis of ephrinB2 in endothelial cells found binding to CD31 [35]. EphB4-positive endothelial cells segregated into small compartments when co-cultured with ephrinB2-positive cells [36]. Hence, the authors concluded that this pathway restricts the intermingling of arterial cells expressing EFNB2 with venous cells expressing EPHB4 during vasculogenesis. Previous in vivo reports identified vascular ephrinB2 as essential for embryonic development and for the modulation of pericyte recruitment [37,38]. However, the brain vasculature of the adult animal was unaffected after Tamoxifen-induced ephrinB2 knockout, as previously described [39]. We therefore continued our investigation of endothelial ephrinB2 in glioma in combination with antiangiogenic therapy. We observed that endothelial ephrinB2 knockout changed the growth behavior of gliomas and reduced susceptibility to antiangiogenic therapy through vascular morphology alterations. We hypothesize that the interference with endothelial VEGFR2 signaling induces hypoxic conditions similar to the therapeutic blockade of VEGF [40,41]. In consequence, other proangiogenic pathways are activated [42,43]. These signaling pathways are not as potent in providing nutrients, so that tumor growth cannot proceed exponentially [44]. Hence, slower, more physiological growth is observed.
Intravital microscopy studies confirm this hypothesis. The alterations in microvascular flow in efnb2 i∆EC animals recapitulate the changes observed with the antiangiogenic agent sunitinib. Sunitinib was favored over Bevacizumab in this study based on the reported lack of neutralization of murine VEGFA by Bevacizumab [45]. Sunitinib reproduced the well-reported antiangiogenic effects in murine glioma [31,46]. However, introducing this therapy in glioma spheroid-bearing efnb2 i∆EC animals had little to no additional effect, even though multiple off-target effects of sunitinib have been reported [47]. We consider two hypotheses explaining this observation: first, sunitinib exerts its therapeutic potency primarily by inhibition of VEGFR2 in this spheroid model of murine glioma; second, ephrinB2 has more diverse molecular membrane interaction partners than previously anticipated, and the knockout affects multiple downstream pathways [40,48].
Both hypotheses are subject to further investigation. In this study, we lack a molecular downstream analysis to verify the exact downstream convergence of ephrinB2 and VEGFR2 signaling. Another limitation is the lack of clinical translation through therapeutic interventions targeting ephrinB2-EphB4, a field of active research. Most ephrinB2-EphB4 therapeutics today target the EphB4 receptor [40]. Only the preclinical agent EphB4-Fc targets the extracellular domain of ephrinB2, with unknown downstream consequences in vivo. Internalization of ephrinB2-Fc/EphB4-Fc has been reported in vitro, and it is unknown whether this results in the activation of downstream effectors or in the inhibition of ephrinB2-EphB4 signaling [49-51]. Recently, the promising bimodular inhibitor BIDEN-AP, based on TNYL-RAW, has been developed, which inhibits both ephrinB2 forward and EphB4 reverse signaling [52].
However, ephrinB2-EphB4 signaling is complex: it can lead to tumor progression and tumor arrest/reduction, depending on the cancer entity, ligand expression, receptor expression, internalization status, forward-reverse signaling balance, the microenvironment the tumor grows in and many other unidentified aspects. We extensively reviewed this phenomenon in the neurooncological context, concluding that more molecular tools must be developed to address the Janus-faced nature of ephrinB2-EphB4 signaling in cancer [40].
In summary, our findings indicate that ephrinB2 and VEGFA converge on similar downstream pathways in endothelial cells in vivo. Interference with either molecule resulted in comparable pathological and vascular changes.
Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/life12050691/s1, Figure S1: (a) Metabolic pathway analysis of EphB4 tyrosine receptor kinase shows involvement in general cytosol metabolism, (b) Detailed expression analysis of ephrinB2 and EphB4 in different tumor types (red) and controls (blue), (c) Survival probability depending on EFNB2 expression obtained from the Protein Atlas Dataset; Figure S2: (a) Blood velocity was unimpaired in the different groups, (b) Blood flow was unimpaired in the different groups.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 4,481.4 | 2022-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Does Iris Change Over Time?
Iris as a biometric identifier is assumed to be stable over a period of time. However, some researchers have observed that, for a long time lapse, the genuine match score distribution shifts towards the impostor score distribution and the performance of iris recognition reduces. The main purpose of this study is to determine whether the shift in genuine scores can be attributed to aging. The experiments are performed on two publicly available iris aging databases, namely ND-Iris-Template-Aging-2008-2010 and ND-TimeLapseIris-2012, using a commercial matcher, VeriEye. While existing results are correct about the increase in false rejection over time, we observe that it is primarily due to the presence of other covariates such as blur, noise, occlusion, and pupil dilation. This claim is substantiated with a quality score comparison of the gallery and probe pairs.
Introduction
Human growth or aging from newborn to toddler to adult to elderly is a natural phenomenon. This process leads to changes in different characteristics such as height, weight, face, gait, and voice. Several of these characteristics are used as biometric identifiers. In the literature, it is well established that, over a long period of time, some biometric modalities such as face and voice can change, thereby reducing recognition performance. On the other hand, iris is considered to be one of the most accurate and stable biometric modalities [1].
Daugman mentioned that the iris is well protected from the environment and stable over time [1,2]. This is also supported by the case study of Sharbat Gula, the Afghan girl whose iris templates were matched after a time lapse of 18 years [3]. Owing to these characteristics, iris recognition is now used for authentication in several large-scale government identification projects [4,5]. However, recent research has claimed that iris recognition accuracy degrades over time [6][7][8][9][10]. Tome-Gonzalez et al. [6] studied the effect of time on the BiosecureID database with a time lapse of at most four months. The authors used Masek's iris matcher [11] to investigate the effect of aging and observed that the intra-class variability increased over time with very little change in the impostor distribution. However, the time lapse considered for this study is very short (four months), and it is not justifiable to attribute the performance reduction to aging. Baker et al. [12] analyzed aging in iris recognition for a multi-year time lapse. 6,797 iris images of 23 subjects were captured using the LG2200 iris camera. To evaluate the false non-match rate (FNMR) across time, images were collected from the same subjects first at an interval of less than 120 days and then at an interval of more than 1200 days. The images used in this study were manually screened for quality, and the performance was evaluated using the Neurotechnology VeriEye SDK [13] along with two other matchers. The authors inferred that factors such as pupil dilation, contact lenses, occlusion, and sensor aging could not account for the increase in false non-match rates. Fairhurst et al. [14] studied aging on 79 users with 632 images. They modified Masek's iris segmentation to reduce segmentation errors and improve iris recognition accuracy. The authors concluded that dilation decreases with age, thereby reducing the matching performance over time. Fenker and Bowyer [10,15,16] performed experiments with images pertaining to 322 subjects captured over a period of three years. They concluded that the false non-match rate increases with time because of template aging. Ellavarason and Rathgeb [17] re-investigated the two-year time-lapse database used by Fenker and Bowyer [8] with six different iris feature extraction algorithms. They also observed that the change in FNMR from short to long time lapse can be attributed to template aging. Sazonova et al. [18] examined the effect of elapsed time on iris recognition using 7628 images from 244 subjects acquired over a time lapse of two years at Clarkson University. The authors also considered the impact of quality factors such as local contrast, illumination, blur, and noise on the performance of iris recognition. The VeriEye SDK and a modified Masek's algorithm were used for generating match scores, and the significance of the quality factors for recognition was also analyzed. They observed that the performance of both matchers degrades with time. Recent research on aging by Czajka [19] used a dataset of 571 images collected from 58 eyes with up to eight years of time lapse, acquired from 2003 to 2011. The results obtained using three different matchers and genuine scores exhibited template aging. The author claimed that more accurate matchers are highly vulnerable to aging. Rankin et al. [9] performed another study on aging using visible spectrum images acquired from both eyes of 119 subjects.
Even for a short time difference of six months, 32 out of 156 comparisons resulted in false rejections. This performance was obtained by applying both local and non-local operators. These error rates are very high compared to other studies. In response to Rankin et al. [9], Daugman and Downing [20] pointed out that their error rates were constant at all points in time studied, namely about 20%, showing no change in recognition accuracy over time. Recently, on two time-lapse private datasets collected by law enforcement agencies and using a complex regression analysis, the National Institute of Standards and Technology (NIST) IREX report [21] suggested that population-averaged recognition metrics are stable, consistent with the absence of iris aging. It can be seen from the literature that researchers do not have a consensus on iris template aging. It is our assertion that a proper analysis is required to understand the impact of aging on iris recognition. The objective of this study is to use the publicly available iris aging databases to understand iris aging and the reasons for degradation in performance. In our experiments, it is observed that the increase in false rejection is due to poor acquisition and the presence of occlusion, noise, and blur. The quality values of the falsely rejected gallery-probe pairs further substantiate the fact that the quality of iris images taken from two different sessions differs more for these pairs than for the genuinely accepted pairs.
Materials and Methods
This research re-investigates the challenge of iris template aging [6-10,17]. The databases and algorithms used in this research are briefly explained below.
Ethics Statements
All the experiments for this study are approved by the IIIT-Delhi Ethics Board. The iris databases are obtained from the CVRL Lab, University of Notre Dame [22], which are prepared as per the UND IRB guidelines with written consent obtained from the participants.
Databases
Two publicly available iris databases are used to investigate the effect of aging on iris recognition, with time lapses of two years and four years. The ND-TimeLapseIris-2012 database [12] contains images acquired with the LG2200 iris camera located in the same studio throughout all the acquisitions. A total of 6797 images were collected from 23 subjects (46 irises) between 2004 and 2008. The age of these subjects ranges from 22 to 56 years; 16 subjects are male and 7 are female.
Commercial Matcher
Iris recognition is performed using the commercial VeriEye SDK [13], which has shown good performance in state-of-the-art evaluations by NIST [23]. VeriEye contains advanced segmentation, enrollment, and matching routines. For segmentation, VeriEye uses active shape models that accurately detect the contours of irises that are not perfect circles. The enrollment and matching routines are fast and yield very high matching accuracy.
Experimental Protocol
The experimental protocol used for each database is explained below.
1. ND-Iris-Template-Aging-2008-2010: The protocol followed for this database is the same as that provided by Fenker and Bowyer [10]. All possible genuine comparisons are provided as part of the protocol. In the experiments, short refers to images captured within the same year, whereas long refers to comparisons across years. Cross-session irises for this particular study refer to images captured over a time lapse of one or two years.
2. ND-TimeLapseIris-2012: The protocol followed for this study consists of two sets of image pairs [12]. The short time lapse set consists of image pairs with no more than 120 days of time lapse between them. The long time lapse set consists of image pairs with more than 1200 days of time lapse. An image instance can participate in multiple short and long time lapse pairs. Each image instance has several associated attributes such as date of acquisition, unit, color, glasses, and contact lens. For a genuine comparison, the units of the two iris images must match, along with the time lapse conditions mentioned above. However, in the experiments, some false acceptance cases with exceptionally high scores (close to genuine scores) were observed. On carefully analyzing these images, we observed that there are ground truth errors in the database due to incorrect ID labels. These incorrectly labeled instances belong to the ids 04870d1810 and 04888d395. The cases associated with these incorrectly labeled ids were not considered in this study.
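A minimal sketch of how short and long time-lapse genuine pairs could be assembled from acquisition metadata, assuming a simple table with eye identifiers and acquisition dates (the thresholds follow the protocol described above; the table, column names, and image identifiers are hypothetical):

```python
from itertools import combinations
import pandas as pd

# Hypothetical acquisition log: one row per iris image.
images = pd.DataFrame({
    "image_id": ["a1", "a2", "a3", "b1", "b2"],
    "eye_id":   ["S01L", "S01L", "S01L", "S02R", "S02R"],
    "date":     pd.to_datetime(["2004-01-10", "2004-03-01", "2008-05-20",
                                "2004-02-15", "2008-06-30"]),
})

short_pairs, long_pairs = [], []
for _, group in images.groupby("eye_id"):          # genuine pairs share the same eye
    for (_, a), (_, b) in combinations(group.iterrows(), 2):
        lapse_days = abs((a["date"] - b["date"]).days)
        if lapse_days <= 120:
            short_pairs.append((a["image_id"], b["image_id"]))
        elif lapse_days >= 1200:
            long_pairs.append((a["image_id"], b["image_id"]))

print("short time lapse pairs:", short_pairs)
print("long time lapse pairs: ", long_pairs)
```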
Results
If the performance degradation is caused by aging, then this should hold true for all genuine comparisons pertaining to an individual across different sessions. Therefore, three sets of experiments are performed to closely study the cause of the rejections that occur over time. A detailed description and analysis of each experiment is given below.
Experiment 1: Performance Evaluation
The first experiment is performed to compute the iris matching accuracy for both short and long time lapses. Genuine and impostor scores are obtained using the VeriEye SDK following the protocols explained earlier. Table 1 shows the genuine accept rate (GAR) at 0.001% false match rate (FMR) for both long and short time lapses on the ND-Iris-Template-Aging-2008-2010 and ND-TimeLapseIris-2012 databases. The results show that we are able to reproduce the accuracies reported by the original papers. The distributions of genuine and impostor scores are shown in Figure 1. There is no evident shift in the impostor scores, whereas the genuine scores show a shift towards the impostor scores for the long time lapse. Further, the receiver operating characteristic (ROC) curves in Figure 2 show a slight variation between long and short time lapses: the performance with the long time lapse is slightly lower than with the short time lapse.
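A small illustrative sketch of how GAR at a fixed FMR can be computed from genuine and impostor score arrays (the score arrays below are synthetic stand-ins, not VeriEye output):

```python
import numpy as np

def gar_at_fmr(genuine, impostor, target_fmr=1e-5):
    """Genuine accept rate at the operating point giving the target false match rate.
    target_fmr = 1e-5 corresponds to 0.001% FMR; higher score = better match."""
    threshold = np.quantile(impostor, 1.0 - target_fmr)
    fmr = np.mean(np.asarray(impostor) > threshold)
    gar = np.mean(np.asarray(genuine) > threshold)
    return gar, fmr, threshold

rng = np.random.default_rng(1)
genuine = rng.normal(300, 60, 5_000)     # synthetic genuine scores
impostor = rng.normal(20, 10, 200_000)   # synthetic impostor scores
gar, fmr, thr = gar_at_fmr(genuine, impostor, 1e-5)
print(f"GAR at 0.001% FMR: {gar:.4f} (threshold {thr:.1f}, achieved FMR {fmr:.2e})")
```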
McNemar's test [24] shows that these results are statistically significant at the 95% confidence level. This experiment shows that there is a reduction in the verification performance for the long time lapse. However, the cause of the shift in distributions or the decrease in genuine accept rate cannot merely be attributed to aging. Therefore, the next experiments focus on determining the cause of the performance reduction.
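For reference, a hedged sketch of a continuity-corrected McNemar test on the paired accept/reject decisions of the short and long time-lapse settings (the discordant-pair counts below are invented for illustration):

```python
from scipy.stats import chi2

# Discordant pairs: b = accepted at short lapse but rejected at long lapse,
# c = rejected at short lapse but accepted at long lapse (hypothetical counts).
b, c = 180, 95
statistic = (abs(b - c) - 1) ** 2 / (b + c)    # McNemar statistic with continuity correction
p_value = chi2.sf(statistic, df=1)
print(f"chi2 = {statistic:.2f}, p = {p_value:.4g}")   # p < 0.05 -> significant at the 95% level
```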
Experiment 2: Common Subjects Over Time
It is our hypothesis that, for a given subject, if aging exists and the false rejections can be attributed to aging, then all the iris images of this subject with the same or a longer time lapse should be rejected. With this hypothesis, we analyze the false rejection cases to understand whether the rejections occur due to aging or due to other factors. In the ND-Iris-Template-Aging-2008-2010 database, the subjects that are common over multiple years are selected. There are 34 subjects common to the 2008, 2009, and 2010 sessions. These common subjects are chosen to carefully study the cases of rejection and investigate the corresponding cases that are otherwise accepted. Table 1 reports the total number of genuine comparisons pertaining to these 34 subjects along with the number of false rejects. Here, all the experiments are performed using a threshold that produces an FMR of 0%, in order to concentrate solely on the cause of genuine rejections over a period of time. Similarly, the rejections at 0% FMR for the ND-TimeLapseIris-2012 database are also obtained (all 23 subjects are present in both short and long time lapses). The number of genuine matches and false rejections at 0% FMR is shown in Table 1. It is observed that these rejections are also due to noisy gallery or noisy probe instances. Similarly, as shown in Table 1, there are 1280 cases of false rejection for the long time lapse in the ND-TimeLapseIris-2012 database. This number is very small compared to the total number of genuine matches, i.e., 128,875. Here also, it is observed that the cases are rejected primarily due to variations in quality (the quality aspect is discussed as part of Experiment 3).
Figures 4 and 5 show cases with the gallery image captured in one session and probe images captured in a session from another year. It is observed that some probe images of the subject match whereas others from the same session and same subject do not match. Thus, it can be inferred that aging is not the cause of these rejections. The results for one-year, two-year, and four-year differences also show that the differences in the proportions of rejections are statistically non-significant.
Experiment 3: Analyzing Quality of Rejected Iris Pairs
From Experiment 2, it can be inferred that the performance reduction on the ND-Iris-Template-Aging-2008-2010 and ND-TimeLapseIris-2012 databases is not due to iris template aging. Therefore, to determine the actual cause of degradation, we analyze the image quality of the gallery and probe pairs. The quality of the iris images is assessed using the quality assessment algorithm proposed by Kalka et al. [25]. It computes quality metrics such as blur, rotation, off-angle, and occlusion to determine a single composite quality score. The quality values of the gallery and probe images are obtained for the falsely rejected and the corresponding genuinely accepted pairs of these subjects over the long time lapse. Let q be the quality of an input iris image. For a gallery and probe iris image pair i, the absolute difference is calculated as c_i = |q_gallery,i - q_probe,i|. This absolute difference is calculated for all the selected genuine accept and false reject cases, and the median quality difference q̄ = median{c_1, c_2, ..., c_i} is obtained. Table 2 (difference between the quality scores of the gallery and probe pairs, q̄) reports the median quality differences for the examined datasets. It can be observed that q̄ for falsely rejected pairs is higher than for genuinely accepted iris pairs. This observation suggests that the pairs are falsely rejected because of the increased difference in the quality of the gallery and probe image pairs. The results of these three experiments put together suggest that the false rejections on the two iris databases are mainly due to occlusion, rotation, blurring, illumination, and pupil dilation or constriction.
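A minimal sketch of the gallery-probe quality comparison described above (the quality scores are placeholders; the real values come from the Kalka et al. [25] quality assessment algorithm):

```python
import numpy as np

def median_quality_difference(q_gallery, q_probe):
    """Median of the absolute gallery-probe quality differences, c_i = |q_gallery,i - q_probe,i|."""
    c = np.abs(np.asarray(q_gallery) - np.asarray(q_probe))
    return float(np.median(c))

# Hypothetical composite quality scores for falsely rejected and genuinely accepted pairs.
fr_gallery, fr_probe = [0.82, 0.75, 0.90], [0.55, 0.48, 0.60]
ga_gallery, ga_probe = [0.80, 0.78, 0.85], [0.76, 0.74, 0.81]

print("median difference (false rejects):  ", median_quality_difference(fr_gallery, fr_probe))
print("median difference (genuine accepts):", median_quality_difference(ga_gallery, ga_probe))
```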
Discussion and Conclusion
Recent research results initiated the discussion on whether aging affects iris templates or not. While some researchers support the view that aging affects the performance, others are of the opinion that it does not have a prominent effect. Using publicly available iris template aging databases, this paper shows that the reduced performance of iris recognition may not be caused by aging but by noise and differences in the quality of gallery and probe pairs. Some of our observations are: Though, for long time lapse, the genuine score distributions demonstrate a shift towards the impostor score distributions, empirical investigation suggests that the rejections are caused by improper capture that leads to occlusion, rotation, blurring, illumination variations, and pupil dilation or constriction in iris images.
The analysis also suggests that, had aging been the cause of the rejections, it should have uniformly affected the performance. However, only a few samples with a given time difference are rejected, while other samples of the same subject with a similar time difference are accepted.
Existing literature suggests that one of the factors for template aging is pupil dilation-constriction with human growth. While there are reported results in the medical literature to support this claim, it is prevalent mainly in elderly people. In order to analyze this effect, iris images of different individuals should be collected 4-10 years apart, especially for people over 50 years of age.
It is our assertion that iris template aging is an important research problem which requires a longitudinal study, similar to face biometrics, where time lapses of 2-60 years have been studied. We believe that, to conduct a proper study on longitudinal effects, an ideal approach would be to collect a controlled iris database of individuals in different age groups over a period of several years. Such a database can help in understanding the factors that may affect iris recognition performance, such as sensor aging, interoperability, human growth (pupil dilation-constriction), and image quality. | 3,794 | 2013-11-07T00:00:00.000 | [
"Computer Science"
] |
The data on metagenomic profile of bacterial diversity changes in the different concentration of fermented romaine lettuce brine
This article describes the data on the metagenomic profile of the bacterial community and diversity in the brine of fermented romaine lettuce at two experimental brine salinities (5% and 10%), obtained by Illumina MiSeq sequencing of the V3-V4 region of the 16S rRNA gene. A total of 98,660 reads (10%) and 95,968 reads (5%) comprised 38 and 47 consensus lineages (OTUs), respectively. The predominating phyla in the 10% brine were Proteobacteria (55.89%) and Firmicutes (41.97%), while the predominating phyla in the 5% brine were Proteobacteria (63.47%) and Firmicutes (34.60%). The predominating lower taxa in the 10% brine were Haererehalobacter salaria (54.19%) and Bacillaceae (33.2%), while in the 5% brine they were Enterobacteriaceae (46.67%) and Bacillus (21.53%). Leuconostoc (6.97%) and Lactococcus (3.97%) only appeared in the 5% brine. The data presented here are the first metagenomic profile of romaine lettuce fermentation at different brine concentrations and will serve as a baseline to understand the shift of bacterial diversity associated with the change in brine concentration.
Data
This dataset describes the effect of different brine concentrations in the fermentation of lettuce, to evaluate the shift of the bacterial community during the fermentation process using metagenomic data. Salt is mainly used in fermentation as a preservative because of its capability to reduce the water activity of foods [1], so that the type and extent of microbial activities are inhibited. The data presented here were obtained by Illumina MiSeq sequencing of the V3-V4 regions of the 16S rRNA gene. Rarefaction plots (Fig. 1) revealed that the sequencing coverage was sufficient for data comparison, as all samples entered the plateau phase. The observed OTU sequences in the curve have not been combined according to their respective taxonomical assignments; therefore, the numbers of OTUs observed are higher than those presented in Table 1 (see Section 2.3 for the procedure).
After removal of chimeras and singletons, the OTUs were clustered according to a 97% degree of similarity. The OTUs which belong to the same taxa were then combined. A total of 6 phyla and 1 unassigned group were generated. A total of 38 OTUs from 98,660 reads and 47 OTUs from 95,968 reads were found in the 10% and 5% fermented brine of Romaine lettuce, respectively. As shown in Fig. 2 and Table 1, most bacteria in the 10% fermented brine belonged to Proteobacteria (13 OTUs, 55,140 reads, 55.89% of the 98,660 reads in total) and Firmicutes (15 OTUs, 41,408 reads, 41.97% of the total reads). The minority belonged to Cyanobacteria (1 OTU, 1,381 reads, 1.4% of the total reads), Actinobacteria (7 OTUs, 101 reads, 0.10% of the total reads), Planctomycetes (1 OTU, 47 reads, 0.05% of the total reads), Armatimonadetes (1 OTU, 2 reads, 0.002% of the total reads), and unassigned (581 reads, 0.59% of the total reads).
Most bacteria in the 5% fermented brine belonged to Proteobacteria (17 OTUs, 60,909 reads, 63.47% of the 95,968 reads in total) and Firmicutes (21 OTUs, 33,209 reads, 34.60% of the total reads). The minority belonged to Actinobacteria (5 OTUs, 1,021 reads, 1.06% of the total reads), Cyanobacteria (1 OTU, 308 reads, 0.323% of the total reads), Planctomycetes (2 OTUs, 11 reads, 0.01% of the total reads), Armatimonadetes (1 OTU, 5 reads, 0.01% of the total reads), and unassigned (505 reads, 0.53% of the total reads). Cyanobacteria are naturally found in all types of water [2]. The data show the shift of the bacterial community structure between the 5% and 10% brine. Romaine lettuces were fermented anaerobically in 5% and 10% brine for 4 days at room temperature in the dark. The brine served as the collected microbiome sample. Extracted bacterial genomic DNAs were used as templates to amplify the V3-V4 region of the 16S rRNA genes. An Illumina two-step PCR protocol was used to prepare the amplicon library. The paired-end sequences were generated in 2 × 300 bp format on the MiSeq.
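A small sketch of how the relative abundances quoted above can be derived from an OTU count table (the counts below are taken from the 10% brine figures in the text; the real table comes from the OTU clustering step):

```python
import pandas as pd

# Per-phylum read counts for the 10% brine sample, as reported above.
counts = pd.Series({
    "Proteobacteria": 55_140,
    "Firmicutes": 41_408,
    "Cyanobacteria": 1_381,
    "Actinobacteria": 101,
    "Planctomycetes": 47,
    "Armatimonadetes": 2,
    "unassigned": 581,
})

relative_abundance = 100 * counts / counts.sum()   # percent of total reads
print(relative_abundance.round(2).sort_values(ascending=False))
```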
Experimental features
The filtered sequence reads were analysed using a bioinformatics pipeline.
Data source location
Department of Biology, Faculty of Mathematics and Natural Sciences, Sam Ratulangi University, Manado, North Sulawesi, Indonesia. Data accessibility: Data are included in this article.
Value of the data: This is the first metagenomic profiling report on Romaine lettuce fermentation at different brine concentrations.
The data give insight into the shift of the bacterial community at different brine concentrations in Romaine lettuce fermentation.
The data can be regarded as a baseline for safe Romaine lettuce fermentation.
The top 10 lower taxa, including unassigned OTUs, are depicted in Fig. 3. The common dominating core genus in both samples was Bacillus. There were also some lower taxa common to both samples, but their abundances were very unbalanced: a taxon could be very abundant in the 5% brine but very scarce in the 10% brine, and vice versa. The data showed a shift in the lower taxa of the bacterial community between the two samples. The predominating lower taxa in the 10% brine included Haererehalobacter salaria (p: Proteobacteria; c: Gammaproteobacteria; o: Oceanospirillales; f: Halomonadaceae) (54.19% of the total reads), Bacillaceae (p: Firmicutes; c: Bacilli; o: Bacillales) (33.20% of the total reads), and Bacillus (5.8% of the total reads). In the 5% brine, the predominating lower taxa included Enterobacteriaceae (p: Proteobacteria; c: Gammaproteobacteria; o: Enterobacteriales) (46.67% of the total reads), Bacillus (21.53% of the total reads), Erwinia (8.32% of the total reads), Leuconostoc (6.98% of the total reads), Citrobacter (6.92% of the total reads), and Lactococcus (3.97% of the total reads).
Agrobacterium, Erwinia, Exiguobacterium, Lactococcus, Leuconostoc, and Pseudomonas were not detected in the 10% brine. There are some insignificant read counts of Bacillus flexus, Citrobacter, Klebsiella, and Micrococcaceae in the 10% brine; however, they can be found abundantly or quite abundantly in the 5% brine. Haererehalobacter salaria and other Bacillaceae were the most abundant in the 10% brine, while Enterobacteriaceae and Bacillus were the most abundant in the 5% brine. Alpha diversity indices are shown in Table 2. The p-values corresponding to the F-statistic of one-way ANOVA were lower than 0.05 (except for Chao), suggesting that the treatments were significantly different. All indices are higher in the 5% than in the 10% fermented Romaine lettuce brine. The Chao value was not significantly different between the two samples, which means that the estimated species richness of the two samples was similar. The diversity was estimated from the abundance of individuals belonging to each taxon. The dissimilarity (Whittaker β-diversity index) between the bacterial communities of the two samples was 0.34831. This indicates that there were differences in the composition of lower taxa between the samples, as supported by the phylogenetic tree (data not shown).
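The α- and β-diversity values reported here were computed with PAST3; a rough, illustrative re-implementation of two of the simpler quantities (Shannon H and Whittaker's β, in one common formulation) is sketched below with toy OTU count vectors, noting that the exact definitions used by PAST3 may differ slightly:

```python
import numpy as np

def shannon(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def whittaker_beta(sample_a, sample_b):
    """Whittaker's beta as gamma / mean(alpha) - 1, using presence/absence richness."""
    a = np.asarray(sample_a) > 0
    b = np.asarray(sample_b) > 0
    gamma = np.sum(a | b)                   # total number of taxa across both samples
    mean_alpha = (a.sum() + b.sum()) / 2.0  # mean per-sample richness
    return gamma / mean_alpha - 1.0

# Toy OTU count vectors for the 10% and 5% brine samples (same taxon order).
brine_10 = [55_140, 41_408, 0, 101, 47, 2, 0]
brine_05 = [60_909, 33_209, 308, 1_021, 11, 5, 3_814]

print("Shannon H (10%):", round(shannon(brine_10), 3))
print("Shannon H (5%): ", round(shannon(brine_05), 3))
print("Whittaker beta: ", round(whittaker_beta(brine_10, brine_05), 3))
```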
To the best of our knowledge, this is the first metagenomic profiling of the bacterial community structure at different concentrations of fermented Romaine lettuce brine. The data show that the growth of most lactic acid bacteria (LAB) (such as Lactococcus, Leuconostoc, and some Bacillus and Enterobacteriaceae), which are beneficial for vegetable preservation as well as health, was suppressed in the 10% brine. These data will be useful in the optimization of brine concentration for safety and quality control of vegetable fermentation.
Sample preparation
Romaine lettuce was used in this experiment. The vegetables were obtained from a local hydroponic farm. The leaves were cut from the stem and put in fermentation bottles. Ballast glasses were placed on the leaves to prevent them from floating. Brine solution of 5% or 10% was poured into the respective bottles so that the vegetables were fully immersed in the brine. The fermentation process was conducted for 4 days in a dark chamber at room temperature (28-30 °C). After 4 days, the fermented brines were used for metagenomic analysis.
DNA extraction and metagenome sequencing
The gDNA was extracted using the ZymoBIOMICS DNA Mini Kit (Zymo Research) according to the manufacturer's protocol. The hypervariable V3-V4 regions of the 16S rRNA gene were amplified using MyTaq HS Red Mix (Bioline, BIO-25044) in an Agilent SureCycler 8800 thermal cycler. The quality and quantity of the genomic DNA were evaluated by electrophoresis on a 0.8% agarose gel followed by staining with GelRed (Biotium, CA, USA) and analysed using a NanoDrop 1000 (Thermo Scientific, Wilmington, DE, USA). The amplicon library was prepared using an Illumina two-step PCR protocol. The first stage aimed to generate PCR products targeting the V3-V4 regions using Nextera-style tag sequences (overhang sequences). The second stage used sample-specific Illumina Nextera XT dual indices (Nextera XT i7 index and Nextera XT i5 index). Assessment of the quality and quantity of the final library products was conducted using a TapeStation 4200 from Agilent Technologies, Helixyte Green dsDNA quantifying reagent, and qPCR using the JetSeq library quantification Lo-Rox kit from Bioline (London, UK). Following the Illumina protocol, the paired-end sequences were generated in 2 × 300 bp format on the MiSeq.
Bioinformatics analysis
The sequence adapters of the raw sequences were removed using BBMap, and the reads were merged using BBMerge (BBTools package). The sequences were aligned and trimmed, and chimeras were removed. All reads were clustered into OTUs using UCLUST (de novo) at 97% similarity. Singletons and doubletons were removed prior to the taxonomy and diversity analyses. The samples were then rarefied to the lowest number of reads among all samples. The Greengenes database [3] was used with the RDP Classifier [4] algorithm to annotate taxonomic information for each representative sequence. Because more than one OTU can be assigned to the same species, such OTUs were combined for further analysis.
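A minimal sketch of the rarefaction step mentioned above (subsampling every sample, without replacement, down to the lowest read count); the count vectors are toy inputs and this is not the exact routine used in the pipeline:

```python
import numpy as np

def rarefy(counts, depth, rng):
    """Randomly subsample a vector of per-OTU read counts to a fixed depth."""
    counts = np.asarray(counts)
    reads = np.repeat(np.arange(len(counts)), counts)   # one entry per read, labeled by OTU index
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=len(counts))

rng = np.random.default_rng(42)
sample_a = [500, 300, 120, 50, 30]    # toy OTU counts, 1000 reads
sample_b = [420, 210, 90, 60, 20]     # toy OTU counts, 800 reads
depth = min(sum(sample_a), sum(sample_b))
print(rarefy(sample_a, depth, rng))
print(rarefy(sample_b, depth, rng))   # already at the target depth, so counts are unchanged
```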
Bacterial community structure analysis
The α- and β-diversity indices were calculated using the PAST3 software [5]. | 2,151.8 | 2019-06-25T00:00:00.000 | [
"Biology"
] |
A Driving and Control Scheme of High Power Piezoelectric Systems over a Wide Operating Range
Significant variation in impedance under a wide range of loads increases the difficulty of frequency tracking and vibration control in high-power piezoelectric systems (HPPSs). This paper proposes a driving and control scheme for HPPSs over a wide operating range. We systematically analyze the impedance characteristics and deduce the load optimization frequency. In order to provide sufficient drive capability, an inverter combined with an LC matching circuit is configured. With the aid of a transformer ratio arm bridge (TRAB) combined with a proposed pulse-based phase detector (PBPD), the scheme can control the vibration amplitude and maintain the parallel resonance status under a wide range of loads. Experiments conducted under actual operating conditions verify the feasibility of the proposed scheme over a modal resistance range from 7.40 to 500 Ω and a vibration range from 20% to 100%. Moreover, with the aid of a laser displacement sensor, the scheme is verified to have a vibration amplitude control accuracy better than 2% over a tenfold load variation. This research could be helpful for the driving and control of HPPSs operating over a wide range.
Introduction
High-power piezoelectric systems (HPPSs) are used in a wide range of applications, such as ultrasonic welding, cutting and actuators [1-4]. In high-power situations, it is crucial to effectively convert the electrical energy into mechanical vibration in the piezoelectric transducers (PTs) [5-8]. Generally, researchers excite PTs at their mechanical resonance frequency, that is, the series resonance frequency (fs), where the minimum excitation voltage is required [1,9]. However, as the vibration amplitude increases, an additional loss near fs degrades the performance of the PTs at high power [10,11]. It is attributed to the dielectric loss, which is related to the input current in HPPSs [12]. Therefore, exciting the PTs at the parallel resonance frequency (fp), where the excitation current is minimum, can achieve the optimal efficiency. For HPPSs operating at high vibration amplitude, a high excitation voltage is required at fp [12]. However, different materials and processes result in a wide range of loads in some ultrasonic machining applications, such as welding and cutting [13,14]. The load increases the required excitation voltage, leading to the risk of electrical breakdown. Therefore, avoiding an excessive rise of the excitation voltage under high load conditions becomes a challenge for HPPSs. Meanwhile, the wide range of loads has a significant impact on the impedance characteristics, leading to another challenge in providing sufficient drive capability over the operating range.
In most ultrasonic systems, piezoelectric transducers (PTs) need to be excited in the resonant mode [1,9]. Besides, different vibration amplitudes are required for different processing materials [14,15].
For the PT working at light loading conditions, the excitation voltage and current are almost in phase at f s and f p , while the impedance reaches the minimum and maximum near f s and f p , respectively [16]. Therefore, the working frequency can be tracked by a phase locking loop (PLL) or impedance extremum search [17]. Meanwhile, the vibration amplitude can be controlled through current and voltage driving modes at f s and f p , respectively [9,16]. However, as the load increases, two pairs of zero phase frequencies and the impedance extreme frequencies gradually deviate from f s and f p [17]. In order to enlarge the operating load range, schemes based on the impedance or admittance calculation are proposed, such as the maximum target impedance scheme and the admittance circle tracking scheme [1,18]. However, these schemes require complicated software operations to calculate the frequency deviation and the vibration amplitude. With the aid of a transformer ratio arm bridge (TRAB), a vibration amplitude signal is obtained online in an ultrasonic motor control scheme under different operating conditions [19]. Moreover, this signal is in phase with the excitation current at f p , so the parallel resonance detection can be achieved by detecting the phase between the vibration and the excitation current signals. However, the excitation current can be extremely discontinuous and harmonic-rich in the HPPSs over a wide operating range, leading to a new challenge for phase detection.
In order to drive the HPPSs at load optimization frequency under a wide range of loads with controllable vibration amplitudes, a driving and control scheme is proposed in this paper. We first analyze the impact of different loads on the impedance characteristics, and propose the load optimization frequency tracking mode. Second, in order to provide the sufficient drive capability, the effect of the inverter combined with an LC matching circuit is analyzed under different operating conditions. Then, the pulse-based phase detector (PBPD) is proposed to overcome the challenge of phase detection over a wide operating range. Finally, the proposed scheme is verified under actual operating conditions in terms of the frequency tracking and vibration control.
Equivalent Models
An HPPS can be characterized by electromechanical models (A and B) deduced from an electrical model, as shown in Figure 1 [12]. In the electromechanical models, the dielectric property is characterized by C0 and Rd. For model A, which is similar to the classic Butterworth-Van Dyke (BVD) model [20,21], the series R1, L1 and C1 characterize the modal damping, mass and stiffness, respectively. Under the steady state of sinusoidal excitation, model A can be converted to model B [19], whose parameters are calculated from the model A parameters and the excitation frequency. In this paper, some electrical characteristics under sinusoidal excitation are expressed as complex vectors written in bold letters, such as U1, UT and IT, where U1 is the partial voltage of the parallel RLC part, and UT and IT are the excitation voltage and current of the transducer, respectively. Here, Z1 = U1/IT is defined as the impedance of the parallel RLC part. We define θ as the phase angle of Z1, that is, the phase between U1 and IT. It can be inferred that θ = 0 at fp without the influence of R1, because the parallel RLC part resonates at fp. Moreover, comparing with the electrical model, it can be noted that the partial voltage u1 in model B corresponds to the piezoelectric voltage up in the electrical model, which is proportional to the vibration amplitude [12]. Therefore, u1 can be used for vibration and parallel resonance detection without additional calculation.
In this paper, we use a DUKANE 20 kHz 3300 W piezoelectric transducer, a 1:1.5 transducer amplitude transformer, and a ϕ70 mm plastic welding horn to construct a typical HPPS used for ultrasonic welding. The parameters of the HPPS are shown in Table 1. It should be noted that, in actual operating conditions, R 1 increases from 7.40 Ω under no load condition to 200~500 Ω under high load conditions.
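A hedged numerical sketch of the BVD-style model A impedance, used to locate fs (impedance minimum) and fp (impedance maximum); the parameter values are round illustrative numbers for a 20 kHz transducer, not the values from Table 1, and Rd is neglected:

```python
import numpy as np

# Illustrative model A parameters (not the Table 1 values).
R1, L1, C1 = 10.0, 0.5, 1.25e-10       # motional branch: ohm, H, F
C0 = 5.0e-9                            # static (dielectric) capacitance, F

f = np.linspace(18e3, 22e3, 20001)
w = 2 * np.pi * f
Z_motional = R1 + 1j * w * L1 + 1 / (1j * w * C1)
Z = 1 / (1 / Z_motional + 1j * w * C0)   # motional branch in parallel with C0

f_s = f[np.argmin(np.abs(Z))]   # series resonance: minimum impedance magnitude
f_p = f[np.argmax(np.abs(Z))]   # parallel resonance: maximum impedance magnitude
print(f"f_s ≈ {f_s:.0f} Hz, f_p ≈ {f_p:.0f} Hz")
```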
Impedance Characteristics Analysis
To analyze the influence of a wide range of loads on an HPPS, the variations in UT and IT are calculated under constant vibration. First, we set U1 to a typical value of 1700 V, so that IT can be calculated as IT = U1/Z1.
Then, UT is deduced from U1 and IT. The variations in UT and IT under different excitation frequencies are calculated and demonstrated in Figure 2a,b, respectively, in which R1 is taken as four different typical values of 7.4, 50, 100 and 200 Ω. This analysis shows that, as R1 increases, IT increases linearly while UT increases only slightly at fp. Therefore, it can be inferred that the wide range of loads leads to an equally wide variation range of IT at fp. Furthermore, the current variation range expands to over a hundred times when the vibration amplitude is controlled from 20% to 100% in our system. Moreover, we analyze the influence of Rd from the perspective of the dielectric voltage drop rate, calculated as IT·Rd/UT, as shown in Figure 3. It shows that Rd has the greatest impact at fs, especially under light load conditions. However, the dielectric voltage drop rate decreases to less than 1% near fp. Therefore, the influence of Rd on the impedance characteristics can be ignored in our scheme.
Meanwhile, it shows that is very close to the minimum near the zero phase due to the inverse relationship with cos . The analysis above suggests that slight phase difference has little influence on the impedance characteristics. Especially, an appropriate phase difference, such as 20°, can prevent the rise of under high load conditions with a negligible rise of . Therefore, we suggest the load optimization frequency to be slightly lower than with near 20° in our typical HPPS.
Electrical Architecture
The proposed scheme contains a rectifier bridge, a full-bridge inverter, an LC matching circuit and a transformer, as shown in Figure 5. The commercial power (220 V 50 Hz) is rectified into the DC power , and then inverted to the AC power in ultrasonic frequency. A series LC matching circuit is used for DC isolating and harmonic filtering. More importantly, the specific configuration of and is also related to impedance matching and vibration excitation, which are analyzed in the next sector. Since a transformer ratio arm bridge (TRAB) is easy to be intergraded with little impact on the electrical The curves of U T shows that U T increases with R 1 at f p , and decreases with a slope positively related to R 1 as θ increases. Meanwhile, it shows that I T is very close to the minimum near the zero phase due to the inverse relationship with cos θ. The analysis above suggests that slight phase difference has little influence on the impedance characteristics. Especially, an appropriate phase difference, such as 20 • , can prevent the rise of U T under high load conditions with a negligible rise of I T . Therefore, we suggest the load optimization frequency to be slightly lower than f p with θ near 20 • in our typical HPPS.
Electrical Architecture
The proposed scheme contains a rectifier bridge, a full-bridge inverter, an LC matching circuit and a transformer, as shown in Figure 5. The curves of shows that increases with at , and decreases with a slope positively related to as increases. Meanwhile, it shows that is very close to the minimum near the zero phase due to the inverse relationship with cos . The analysis above suggests that slight phase difference has little influence on the impedance characteristics. Especially, an appropriate phase difference, such as 20°, can prevent the rise of under high load conditions with a negligible rise of . Therefore, we suggest the load optimization frequency to be slightly lower than with near 20° in our typical HPPS.
Electrical Architecture
The proposed scheme contains a rectifier bridge, a full-bridge inverter, an LC matching circuit and a transformer, as shown in Figure 5. The commercial power (220 V, 50 Hz) is rectified into the DC power UDC and then inverted to AC power at ultrasonic frequency. A series LC matching circuit is used for DC isolation and harmonic filtering. More importantly, the specific configuration of Lm and Cm is also related to impedance matching and vibration excitation, which are analyzed in the next section. Since a transformer ratio arm bridge (TRAB) is easy to integrate with little impact on the electrical circuit [16], it is adopted to detect the partial voltage u1 online. A tap is drawn from the secondary side of the transformer with the coil turns satisfying n2 >> n3, and a detection capacitor Cd is connected into the circuit so that the bridge is balanced and its output is proportional to u1. Therefore, the bridge voltage, that is, the transformer tap voltage ub, provides an online measure of u1. In the proposed scheme, we configure n1 = 22, n2 = 155, n3 = 4 and Cd = 752 nF according to the analysis above.
Electrical Properties
Under the driving of the gate signals, the inverter continuously changes its switching state, as shown in Figure 6. The input current to the HPPS, iin, rises in the conduction zones and falls in the freewheeling zones; iin then becomes zero and the inverter enters the high-resistance zones. As the duty cycle d of the gate signals increases, the conduction zones become wider, leading to an increase in the vibration amplitude. However, the situation differs considerably for different d.
In order to analyze the electrical properties under the steady state near fp, the electric architecture is simplified by equivalent transformation, as shown in Figure 7, where k is the transformer ratio, and Lm, Cm and UDC are the matching inductance, matching capacitance and DC voltage after transformation, respectively. We also define uC as the sum voltage of Cm and C0.
First, we analyze the situations with a low duty cycle d. Under these conditions, i_in is always small, leading to a negligible u_C. Therefore, Equation (13) holds in the conduction zones, while Equation (14) holds in the freewheeling zones; i_in and other related waveforms are shown in Figure 6. Since only the fundamental wave can excite the modal vibration, we use the Fourier series to extract it from i_in as the excitation current i_T of the HPPS, which can be calculated as

$$ i_T = \frac{\omega_p}{\pi}\left(\int_{-\pi/\omega_p}^{\pi/\omega_p} i_{in}\cos\omega_p t\,dt\right)\cos\omega_p t + \frac{\omega_p}{\pi}\left(\int_{-\pi/\omega_p}^{\pi/\omega_p} i_{in}\sin\omega_p t\,dt\right)\sin\omega_p t. \qquad (15) $$
When we define u_1 to be in phase with sin ω_p t, i_in should also be in phase with it. Therefore, the first term in Equation (15) equals zero. On the other hand, the conduction and freewheeling zones are concentrated near the peak of sin ω_p t, so Equation (15) can be approximated as Equation (16). Combining Equations (13), (14) and (16), the excitation current can be deduced in terms of two contributions A and B, which correspond to the effects of the conduction zones and the freewheeling zones, respectively. According to U_1 = I_T R_1 at f_p, the relationship between U_1 and d can then be derived; it follows that L_m is inversely proportional to U_1 under the same load and duty cycle.
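Equation (15) is the projection of i_in onto the fundamental cosine and sine at ω_p. As a purely illustrative check (not the paper's data), the sketch below evaluates that projection numerically for a made-up conduction-pulse waveform; the drive frequency, sampling rate and current amplitude are assumed example values.

```python
import numpy as np

# Assumed example values (not from the paper)
f_p = 20e3                          # drive frequency [Hz]
w_p = 2 * np.pi * f_p
fs = 10e6                           # sampling rate [Hz]
dt = 1 / fs
t = np.arange(0, 1 / f_p, dt)       # one period of the drive signal

# Made-up input current: narrow conduction pulses near the peak of sin(w_p t)
i_in = np.where(np.sin(w_p * t) > 0.9, 2.0, 0.0)

# Fundamental component, cf. Equation (15)
a1 = (w_p / np.pi) * np.sum(i_in * np.cos(w_p * t)) * dt
b1 = (w_p / np.pi) * np.sum(i_in * np.sin(w_p * t)) * dt
i_T = a1 * np.cos(w_p * t) + b1 * np.sin(w_p * t)

# Amplitude of the excitation current; a1 is ~0 when the pulses are centred on the sine peak
print(f"I_T = {np.hypot(a1, b1):.3f} A (cos term {a1:.3f} A, sin term {b1:.3f} A)")
```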
For the conditions where the inverter is driven with a high duty cycle d, the capacitor voltage u_C is considerable, leading to Equations (21) and (22) in the conduction zones and freewheeling zones, respectively. This mechanism leads to a heaping of the current waveform, as shown in Figure 6, which greatly increases the excitation current to the HPPS. Although it is difficult to analyze the current waveform further, the behavior of the inverter can be analyzed from the perspective of the input voltage u_in under these situations. The conduction zones gradually dominate the waveform of u_in, which approaches a bidirectional pulse wave with increasing d, as shown in Figure 8. Here, it is important to configure the LC matching circuit to offset the capacitive reactance of C_0 and make the circuit purely resistive at f_p. Under this matching condition, the partial voltage u_1 equals the fundamental wave of the output voltage of the inverter, which means the scheme can excite U_1 to a maximum value of (4/π)·k·U_DC without being affected by the load.
Based on the analysis above, we configure L_m = 143 µH and C_m = 870 nF. We use MATLAB/Simulink (MathWorks, Natick, MA, USA, 2017b) to simulate the relationship among d, R_1 and U_1, as shown in Figure 9. The results show that U_1 increases smoothly with d under different loads, which verifies the drive capability of our scheme over a wide range of loads with adjustable vibration amplitude.
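Whether the configured L_m and C_m make the circuit purely resistive at f_p can also be checked numerically for given transducer parameters. In the sketch below, L_m and C_m are the values configured above, while f_p, C_0 and R_1 are assumed example values rather than the paper's; with the actual parameters, the imaginary part of the total impedance should be close to zero.

```python
import numpy as np

# Matching elements as configured above; transducer parameters are assumed examples
L_m, C_m = 143e-6, 870e-9   # [H], [F]
f_p = 20e3                  # assumed operating frequency [Hz]
C0 = 10e-9                  # assumed static capacitance [F]
R1 = 100.0                  # assumed motional resistance at f_p [ohm]

w = 2 * np.pi * f_p
Z_load = 1 / (1 / R1 + 1j * w * C0)            # motional R1 in parallel with C0
Z_match = 1j * w * L_m + 1 / (1j * w * C_m)    # series L_m and C_m
Z_total = Z_match + Z_load

print(f"Im(Z_total) at f_p = {Z_total.imag:.2f} ohm (0 means purely resistive)")
```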
Pulse Based Phase Detector
In order to maintain a constant vibration amplitude and keep operating near f_p, the amplitude of u_1 and its phase θ with respect to I_in need to be detected. Owing to the TRAB integrated in our scheme, u_1 is extracted via the signal u_b, which is strong and pure under most conditions; it can therefore be reliably digitized through the zero-crossing comparator after squelch. Meanwhile, I_in is detected by a feed-through current transformer. However, filtering and digitizing i_in is difficult because of its discontinuous, harmonic-rich waveform and its large variation range.
In this scheme, a pulse-based phase detector (PBPD) is proposed. Since the input current i_in occurs almost entirely in the conduction zones, a PWM gate signal is used instead of i_in. A classical digital phase detector based on D flip-flops is adopted to generate a phase detection signal. The timing diagram of the relevant signals and the phase detector circuit are shown in Figures 10 and 11, respectively.
In the proposed scheme, we use gate signal B rather than A to avoid the potential race condition, and define the pulse center to be 270°. Therefore, the phase θ can be calculated through the relationship between D and T, where D and T are the pulse width and the cycle of the phase signal, respectively. Affected by the variation in the current waveform under different conditions, the phase θ obtained by the PBPD has a small deviation, which is further analyzed in the experiment. This signal is also used in the detection of U_b: it triggers a T/4 peak-sampling timer on the rising edge, as shown in Figure 10.
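Whatever the exact θ–(D, T) relationship (the paper's equation is not reproduced in this excerpt), the PBPD only has to deliver the pulse width D and the cycle T of the phase signal. The sketch below shows one way to extract D and T from a digitized pulse train; the waveform and sampling rate are made-up examples.

```python
import numpy as np

def pulse_width_and_period(sig, fs):
    """Return pulse width D and period T (seconds) of a 0/1 pulse train sampled at fs."""
    edges = np.diff(sig.astype(int))
    rising = np.flatnonzero(edges == 1) + 1
    falling = np.flatnonzero(edges == -1) + 1
    falling = falling[falling > rising[0]]        # first falling edge after the first rising edge
    D = (falling[0] - rising[0]) / fs             # width of the first complete pulse
    T = (rising[1] - rising[0]) / fs              # spacing between consecutive rising edges
    return D, T

# Made-up example: 25%-wide pulses at 20 kHz, sampled at 10 MHz
fs = 10e6
t = np.arange(0, 2e-4, 1 / fs)
sig = ((t * 20e3) % 1.0) < 0.25
D, T = pulse_width_and_period(sig, fs)
print(f"D = {D*1e6:.1f} us, T = {T*1e6:.1f} us, D/T = {D/T:.2f}")
```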
Control Realization
Due to the perturbation of U_DC and the variation of the load, a vibration closed loop is needed in the proposed scheme. As d is deduced to be positively correlated with U_1, the control logic is established with U_b as the feedback parameter, U_b,target as the vibration target voltage, and K_p,d as the proportional control parameter in the vibration amplitude controller.
Moreover, due to the change of temperature and the coupling stiffness caused by the loads, a frequency closed loop needs to be executed in parallel with the vibration closed loop. Since θ is inversely related to frequency and equal to zero at f_p, the control logic is established with K_p,f as the proportional control parameter in the frequency controller.
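The two control laws themselves are the paper's equations and are not reproduced in this excerpt. The sketch below only mirrors the structure described above, a proportional amplitude loop acting on d and a proportional frequency loop driving θ to zero; the update forms, the duty-cycle limits and the variable names are assumptions.

```python
def control_step(U_b, theta, d, f, U_b_target, K_p_d, K_p_f):
    """One iteration of the two parallel proportional loops (structural sketch only)."""
    # Vibration-amplitude loop: raise the duty cycle when the feedback voltage is below target
    d_new = min(max(d + K_p_d * (U_b_target - U_b), 0.0), 0.5)

    # Frequency loop: theta is zero at f_p and decreases with frequency,
    # so a positive theta means the drive frequency should be increased
    f_new = f + K_p_f * theta
    return d_new, f_new
```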
Frequency Tracking Verification
The experimental setup is demonstrated in Figure 12. Here, the HPPS is fixed on a pneumatic thruster and pressed against a damp cloth. We apply different loads to the HPPS by adjusting the cylinder pressure of the thruster. Meanwhile, the waveforms of U_in, I_in, U_b and gate signal A are measured by a Tektronix TDS 2024B oscilloscope, and the waveforms of four extreme operating conditions are shown in Figure 13. Here, 100% controlled vibration amplitude corresponds to about 1700 V of U_1. The sampling period of the oscilloscope is 0.04 µs in the experiments, and the accuracy is 0.04 per division for each channel. The results show that the center of the pulse coincides with the peak of u_b, indicating that the frequency tracking meets the design. Meanwhile, the current waveforms are basically consistent with the analysis. The remaining difference is caused by a weak leakage current in the high-resistance zones, which may be attributed to the parasitic capacitance of the IGBT modules in the inverter.
Further, we perform a Fourier transform to extract the fundamental wave of I_in, and the actual θ is calculated and demonstrated in Figure 14. The results show that the phase difference is about 15°–25° under most operating conditions but relatively large under no-load conditions (R_1 equal to 7.40 Ω). It is inferred that the load optimization frequency is slightly lower than f_p, with θ near 20°, as discussed in Section 2.2. The experiment verifies this inference: the actual excitation peak voltage is 1.76, 1.58, 1.58 and 1.62 kV when R_1 equals 7.40, 50, 100 and 200 Ω, respectively.
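The offline evaluation of θ amounts to extracting the fundamental components of the recorded u_b and i_in and comparing their phases. The sketch below illustrates this with a single-bin DFT; the waveforms, frequency and sampling rate are made-up examples, not the measured data.

```python
import numpy as np

def fundamental_phase(x, fs, f0):
    """Phase [deg] of the f0 component of x sampled at fs, via a single-bin DFT."""
    n = np.arange(len(x))
    return np.degrees(np.angle(np.sum(x * np.exp(-2j * np.pi * f0 * n / fs))))

# Made-up example: u_b leads i_in by 20 degrees at 20 kHz
fs, f0 = 10e6, 20e3
t = np.arange(0, 5e-4, 1 / fs)                 # an integer number of periods
rng = np.random.default_rng(0)
u_b = np.sin(2 * np.pi * f0 * t)
i_in = np.sin(2 * np.pi * f0 * t - np.radians(20)) + 0.1 * rng.standard_normal(len(t))

theta = fundamental_phase(u_b, fs, f0) - fundamental_phase(i_in, fs, f0)
print(f"estimated phase difference: {theta:.1f} deg")
```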
Vibration Control Verification
In this experiment, the vibration is tracked at different amplitudes by the proposed scheme, and the actual vibration amplitude is measured by a KEYENCE LK-H008 laser displacement sensor. The measurement setup is shown in Figure 15. Each measurement is repeated five times, as shown in Figure 16. The results show a linear relationship between the controlled vibration amplitude and the actual vibration amplitude, verifying the reliability of our scheme in vibration control and its suitability for different processes.
Vibration Stability under Variable Load
In order to verify the stability of vibration control under varying load conditions, we set the target vibration amplitude to 30% and increase the load gradually using water. Meanwhile, the laser sensor measures the actual vibration amplitude. The proposed scheme also calculates and records R_1 in real time. The variations in the actual vibration amplitude and R_1 are shown in Figure 17. R_1 gradually increases by a factor of ten after startup, while the fluctuation in the actual vibration amplitude stays within 2%. This result verifies the feasibility of the scheme over a wide range of loads and demonstrates its dynamic adaptability. Although the capability of frequency tracking and vibration control is verified, HPPSs in the transient state have more complex characteristics, which should be further considered in the dynamic process.
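The real-time equation for R_1 is not reproduced in this excerpt. One plausible form, based on the earlier relation U_1 = I_T R_1 at f_p, is sketched below with placeholder readings; it is an assumption, not necessarily the paper's exact expression.

```python
def estimate_load_resistance(U_1, I_T):
    """Estimate the motional resistance from U_1 = I_T * R_1 at f_p (assumed form)."""
    return U_1 / I_T

# Placeholder readings, not measurements from the paper
print(f"R_1 = {estimate_load_resistance(1700.0, 8.5):.1f} ohm")
```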
Conclusions
This paper demonstrates that the proposed scheme is capable of frequency tracking and vibration control over a wide operating range. First, the impedance analysis indicates that the excitation current I_T varies over a wide range under different loads near f_p, while a slight deviation in the phase θ affects the impedance characteristics little. In particular, we suggest that a load optimization frequency with about a 20° phase difference can help avoid an excessive rise of the excitation voltage under high-load conditions. Second, the electrical architecture is built, and the drive capability of the scheme is verified over a wide range of loads and at different vibration amplitudes. Then, the pulse-based phase detector (PBPD) is proposed, which can obtain the phase signal over a wide operating range with acceptable precision. The experiments verify the feasibility of the PBPD and of vibration amplitude control over the resistance range from 7.40 to 500 Ω and the vibration range from 20% to 100%. Finally, the experiment with a laser displacement sensor verifies that the accuracy of the vibration control is within 2% under varying load. The proposed scheme could be very useful for HPPSs working in complex conditions, such as ultrasonic welding and cutting.
"Engineering",
"Physics"
] |
Opening science: towards an agenda of open science in academia and industry
The shift towards open innovation has substantially changed the academic and practical understanding of corporate innovation. While academic studies on open innovation are burgeoning, most research on the topic focuses on the later phases of the innovation process. So far, the impact and implications of the general tendency towards more openness in academic and industrial science at the very front-end of the innovation process have been mostly neglected. Our paper presents a conceptualization of this open science as a new research paradigm. Based on empirical data and current literature, we analyze the phenomenon and propose four perspectives of open science. Furthermore, we outline current trends and propose directions for future developments.
Introduction
For centuries, science has been based on an open process of creating and sharing knowledge. Over time, however, the quantity, quality, and speed of scientific output have changed, as has the openness of science. In the days of Galileo, scientists had to use anagrams to hide from the Inquisition. Later, scientists used letters to distribute their knowledge amongst colleagues. When the 'Philosophical Transactions' was founded in 1665, scientists began to send their insights to scientific journals. In the last century, the number of scientific journals exploded. At the same time, knowledge diffusion slowed down. In some fields the peer review process takes several years from first submission to final publication (Björk and Solomon 2013). New IT-based submission and paper tracking platforms hardly improved review times, as reviewers remain the bottleneck. Today, more and more academic institutions open up science by employing open access journals, sharing research data, or including others in the research process. Large firms like Siemens, IBM, or Tesla are also part of the open science phenomenon. Instead of patenting knowledge, they publish large parts of their research in order to participate in the scientific community. In doing so, they mark findings as state-of-the-art and thus prevent others from patenting them.
Despite recent trends, management scholars have mostly neglected the phenomenon of open science. That is surprising as there is a clear link between the activities in the field of open science and those in the field of open innovation. Science has the purpose of developing a knowledge domain by adding theoretical or empirical insights, whereas innovation has the purpose of developing and bringing to market new offerings such as products or services. The most widely used definition of open science stems from Nielsen (2011): ''Open science is the idea that scientific knowledge of all kinds should be openly shared as early as is practical in the discovery process.'' Open innovation by contrast is defined as ''the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively. [This paradigm] assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology'' (Chesbrough 2006, p. 1). Both definitions have the acceleration of a process by sharing knowledge in common. Many scientific findings are later turned into innovations. It is therefore desirable to understand the link between open science and open innovation as the definitions suggest that open science can lead to open innovation.
Current literature on open innovation predominantly takes a business-centric view. This view assumes a firm to be mainly motivated by profits. Numerous studies investigated how external ideas are utilized inside companies in order to develop new product offerings (Dahlander and Gann 2010). Additionally, scholars analyzed the possibilities of commercializing internally generated knowledge in the form of intellectual property (IP) for profit generation outside the company boundaries (Chesbrough 2003a, b). Research is understood as an enabler for arriving at new knowledge (Koen et al. 2001). Yet, the very early stages of research and science have hardly been analyzed by the current open innovation literature. In other words, we currently experience a vibrant debate within scientific theory that focuses on open science (Bartling and Friesike 2014; Jong and Slavova 2014). Yet, this debate mostly neglects its interdependency with innovation.
This article provides a conceptualization of open science based on a literature review and semi-structured interviews with CTOs, research managers, open innovation directors, open access leaders, industrial researchers, and scientists. The paper is structured as follows: In the next section (Sect. 2), we review the literature on open innovation and open science to derive similarities and differences between both concepts. In Sect. 3, we describe our methodological approach before (in Sect. 4) we analyze and discuss current trends based on empirical insights. The paper concludes by providing implications and suggestions for future research (Sect. 5).
Research streams in open innovation
The failure of large industrial research labs to drive scientific advancements towards value generation in the early 1980s manifested an anomaly that changed the rules of innovation. Shortly after its foundation in 1984, Cisco started its open R&D strategy, which ended up outcompeting the world's largest R&D center, AT&T's Bell Labs. In the Kuhnian sense (1962), this marked a paradigm shift in innovation management. Since then, the practitioner and academic communities have called for more open models of innovation (e.g., Chesbrough 2003a, b; Christensen et al. 2005).
Within the last decade of academic research, several special issues on open innovation underpinned a fundamental change in the perception of innovation (see e.g., R&D Management 2006, 2009, 2010). The existing literature can be grouped into the following research streams.

(1) Integration of external cooperation partners along the value chain. Downstream in the value chain, von Hippel's (1986) works on lead user integration highlight the virtue of user collaboration for radical innovation. Numerous studies investigated user characteristics and their impact on the degree of innovativeness, the modality of user integration, and users' motivation to collaborate (Bilgram et al. 2008; Franke et al. 2006; Luethje 2004). The phenomenon of free revealing and the fact that the user is the only external collaboration partner with use-experience make the user a very valuable partner (Nambisan and Baron 2010; von Hippel and von Krogh 2006). Upstream in the value chain, research emphasized the importance of supplier integration. The integration of suppliers into the development process at a very early stage can significantly increase innovation performance in most industries (Hagedoorn 1993).

(2) Partnering and alliances. Strong specialization made it necessary for many companies to collaborate with partner companies from the same or other industries (Hagedoorn 2002; Schildhauer 2011). In particular, the phenomena of cross-industry innovation and innovating with non-suppliers were investigated (Howells 2008; Herstatt and Kalogerakis 2005). Established engineering firms also take the role of innovation intermediaries moderating open innovation activities between collaborators (Howells 2006). This indirect opening up of the innovation process is leveraging the cross-industry innovation process, not only in traditional R&D outsourcing modes but also in strategic innovation partnering.

(3) Open innovation processes. Open innovation can be subdivided into three core processes: outside-in, inside-out, and coupled. This classification provides guidance on how to complement and extend the internal innovation process by an external periphery (Gassmann and Enkel 2004). Most large companies such as Siemens and BASF started to develop detailed firm-specific open innovation processes. In addition, some companies such as Procter & Gamble and Siemens assigned process owners with special positions and titles for open innovation within their corporations. In both corporations these directors attract considerable attention within the company.

(4) Open innovation tools. As a means to implement open innovation, numerous tools emerged; most of them support the integration of external innovation sources (West and Lakhani 2008). Crowdsourcing platforms like InnoCentive, 99design, Jovoto, Nine Sigma, or Atizo bring together solution seekers and problem solvers (Bullinger et al. 2010; Sieg et al. 2010; Dahlander et al. 2008). Thereby, they generate a virtual marketplace for innovative ideas and problem solutions. Toolkits for mass customization allow an adaptation of design and product features according to customer preferences based on an iterative creation process (Piller and Walcher 2006). Community-based innovation enables companies to use blogs and discussion forums to communicate with a mass of stakeholders outside the company.

(5) Open trade of intellectual property. The times when IP was solely used as a means to secure the firm's freedom to operate are over.
The more open approach towards IP changed its role and importance within the firm's value creation processes (Pisano 2006). The active use of IP for in-and out-licensing unfolded new business models, which are widely discussed in literature. New phenomena like patent funds, patent trolls and patent donations emerged in recent years and increasingly attracted scientific research (Reitzig et al. 2007;Ziegler et al. 2014). At the moment, there is an ongoing debate amongst policy makers in the European Union whether a financial market for IP should be created. Policy makers in favor of new modes of technology transfer as well as financial institutions interested in new product categories are predominantly driving that process. The overview on the existing research streams in the field of open innovation shows the strong application and commercialization focus of the present literature. But, detailed insights on collaboration and openness in the field of knowledge creation and science are lacking.
Perspectives of open science
In the context of academic and industrial science, the sharing and combination of information is regarded as the core process of knowledge creation (Thursby et al. 2009). As scientific problems are getting more specialized and complex at the same time, it is not surprising that collaboration in science and research expanded in various disciplines within the last decades. For example in sociology, the percentage of co-authored articles almost quintupled in the last 70 years (Hunter and Leahey 2008). Comparable trends were observed in political science (Fisher et al. 1998), physics (Braun et al. 1992), and economics (Maske et al. 2003). Studies even show that authors with a high h-index are those who collaborate widely with others, form strong alliances, and are less likely to be bonded to a certain in-group (Pike 2010; Tacke 2010). According to Merton (1973), the principle of openness has always been an integral part of the academic community. This openness is rooted in a reward system in which the first person to contribute new findings to the scientific community receives various forms of recognition in return (Stephan 1996; McCain 1991; Hagström 1965). Here, open has to be seen in contrast to the prior status quo (e.g., a publication was available only to the subscriber of a journal and only after it was published) and not in contrast to 'closed science'. In contrast, industrial scientists were perceived as being much more concerned about confidentiality as a means to secure future returns on R&D investments (Cohen et al. 2000). Recent studies however indicate that this disparity seems to diminish as cross-institutional bonds increasingly emerged (Murray 2006; Powell et al. 2005). For example, Haeussler (2011) found that for both academic and industrial scientists the likelihood of collaboration and exchange depends on the competitive value of the requested information and on the degree to which the researcher's community conforms to the 'norm of open science' (Rhoten and Powell 2007). Differences between academic and industrial research become blurred (Vallas and Kleinman 2008). Thus, academic and industrial science moved from a "binary system of public vs. proprietary science to […] arrangements which combine elements of both" (Rhoten and Powell 2007, p. 346). The convergence of academic and industrial science and the increasing importance of collaboration and openness drive the need to gain more insight into how open science is characterized.
Actors in open science include institutions such as universities and corporations as well as individual researchers. From a value chain perspective, open science includes the very front-end activities of basic science, applied science, and applied research. Despite the contextual backgrounds of academia and industry, research is driven by curiosity, reputation, and acknowledgement rather than by profit and application-oriented thinking. We differentiate four perspectives of open science:

(1) Philanthropic perspective. Doing research requires infrastructural and content-related elements whose access has been predominantly restricted. Current trends foster a democratization of science and research in the sense of distributing scientific content, tools, and infrastructures freely. Many universities started to offer public lectures or courses with the goal of bringing science and research closer to society and of marketing scientific findings. Most of the public lectures are streamed online and thus are globally available (Tacke 2010). Additionally, this trend includes the rise of open access journals that provide users with the non-restricted right to read, download, copy, distribute, print, search, or link to the full texts of articles. As most traditional journals generate revenues based on subscriptions, the majority of open access journals are funded by the authors through publication fees. Within the last years, the visibility and prominence of open access journals significantly increased due to the growing numbers and the establishment of the Directory of Open Access Journals.
(2) Reflationary perspective. Currently, we witness a trend towards making scientific results freely available during pre-publication. Knowledge is shared in a very early stage within the research process. Motives to do so are manifold. Researchers are able to reflect first thoughts, to promulgate preliminary scientific results, and to promote new ideas within the scientific community. Thereby, they signal tacit knowledge and reputation that might attract other researchers and institutions (Hicks 1995). Furthermore, they are capable of actively influencing future research directions and starting new scientific discussions. Colleagues and amateurs are invited to give feedback and to join in for collaborative knowledge creation.
External involvement diminishes problems with respect to local search bias and groupthink many closed scientific research teams suffer from. At the same time the journals and publishers have a self-interest in pre-publications: Papers which have been published before being printed will get cited more often and therefore increase the citation impact and thus attractiveness of the journal. Moreover, the memory and transparency of the World Wide Web allows tracing thoughts and knowledge creation. This minimizes the risk of lost authorship. Comments and evaluations of peers might give guidance in research phases of high uncertainty. (3) Constructivistic perspective. The opening of science and research enables new collaborative forms of knowledge creation. This knowledge creation does not only bring new knowledge into being but also new opportunities for new user models and new businesses. Crowdsourcing is one prominent example: Problem seekers pull for new scientific solutions by broadcasting problems to an unknown mass of potential problem solvers. Virtual rooms are used as an exchange platform where problem seekers and solvers can interact. Small groups form virtual exchange platforms for loose or moderated exchange with the goal of knowledge creation. Open platforms typically address several fields in a more interdisciplinary manner than the typical disciplinary mainstream journals. The integration of more than one scientific discipline under one roof fosters cross-fertilization of researchers and scientists. This interdisciplinary approach enhances technology fusions and the generation of innovative solutions (Kodama 1992). (4) Exploitative perspective. Most researchers are oriented towards the generation of novel scientific findings neglecting real life application. The active sharing and promoting of scientific knowledge enables researchers to close this gap towards application-oriented knowledge exploitation faster. In cooperation with practitioners a common shared construction of new artifacts based on the latest scientific findings is possible.
The following table provides an overview of open science initiatives with respect to the four perspectives just discussed (Table 1).
Methodology
Given the young nature of the phenomenon, our empirical research mainly relies on a qualitative exploratory research approach based on interviews, Internet research, and document analysis. This triangulated qualitative approach is an appropriate means to navigate unclear boundaries between phenomenon and context in the early stages of research. Our data generally relies on the primary source of semi-structured expert interviews and secondary source of company press releases and Internet research. Between 2008 and 2011, we conducted 38 interviews with different actors in technology intensive industry and academic research, namely CTOs, R&D managers, open innovation directors, senior industry and academic researchers, directors of research institutes, editors and referees of academic journals, and university presidents. This kind of triangulation allows us to minimize the bias of personal perspective and enhance the validity of the information.
At the outset of our interviewee sampling, we scanned our personal contacts, websites of research institutions, research databases, and the public press to identify experts in the field of open science that were most promising in revealing new insights. Our goal was to generate a heterogeneous sample that allowed us to analyze open science from various different viewpoints and validate our results. After each interview, we asked our respondents to name further colleagues who may reveal more insights applying a snowball sampling. No further interviews were conducted when we achieved theoretical saturation and the interviewees did not reveal new empirical insights.
The framework described in chapter 2.3 guided our data acquisition and data analysis. Based on the framework we developed an interview guideline, which we adapted according to new empirical insights that emerged during the research process. Thus, our field data was collected and analyzed iteratively based on reflection of scientific literature. This led to alternations between inductive and deductive procedures (Eisenhardt 1989). To combine the advantages of unstructured and semi-structured interview methods, we started with open-ended questions, followed by a structured questionnaire protocol. Besides asking formal questions regarding the institutions' motivations and barriers of opening up science, the interviewees were also strongly encouraged to provide related examples from their daily business, including current research projects. The intention of the interviews was to identify drivers, inhibitors and current trends in open science and research.
The data was primarily collected by personal face-to-face or telephone interviews, which lasted between 40 and 120 min on average. Each interview was transcribed. The transcripts were transferred into an excel tool to break the interviews down into single statements. Each statement was paraphrased and grouped into categories. The identified trends emerged out of the categories. Two researchers independently analyzed the data allowing cross comparisons that increased the validity of the results.
Academic research representatives of the following institutes were interviewed: In early 2013 we conducted a second field phase and carried out 22 further interviews with experts in the realm of open science. These interviews were mainly conducted to gain insights into how individual researchers deal with the challenges in open science (e.g., open access journals make publications widely available but a researcher's career is not based on availability of research results but rather on the ranking of publication outlets). We used these interviews to verify the present research results 2 years after the initial interviews. All interviews were transcribed verbatim and coded using the software NVivo. Results discussing individual factors concerning open science were published by Scheliga and Friesike (2014).
Emerging trends of open science
The open science paradigm has just paved the way towards a new division of tasks and a new role understanding within scientific research. New links and forms of collaboration emerged within the science community itself but also between academic research and more application-oriented institutions. The times when research institutes resembled intellectual fortresses, following the goal of Humboldt's knowledge creation as an end in itself, seem to be over in most areas. The complexity of scientific problems and the required investments (time, expertise, and materials) to solve them dramatically increased within the last decades and necessitated the breaking of new ground in external collaboration (Bozeman and Corley 2004). We identified several trends, which underpin the opening of science:

(1) For the Swiss CTI, the establishment of a knowledge distribution platform in the form of published results is very helpful in order to receive grants. In most EU-funded projects the open distribution of knowledge is already a prerequisite for funding.

(2) Role of research institutes: from ivory towers to knowledge brokers. Traditionally, there was a gap between research-driven universities and application-driven private companies. This gap is diminishing, as the distribution of tasks between academia and industry changes. The tremendous rise of technology transfer fostered by many universities and private companies further closes the gap between science and practice. For instance, the ETH Zurich and IBM jointly operate the Binnig and Rohrer Nanotechnology Center in Zurich. The center provides a common collaboration platform for researchers of both institutions. As equal collaboration partners, both institutions have the right to publish and to commercialize the jointly created IP. This dual relationship increases the pressure on both partners to find applications for the scientific findings in a timely manner and to commercialize research results. The local consolidation of many highly dedicated innovation teams proved to accelerate knowledge creation and opens up fast ways for the commercialization of current results. Additionally, mutual career paths in the ETH and IBM emerged that manifest a liaison management between both entities and create spill-over effects, especially with respect to the transfer of tacit knowledge. In recent years, the self-conception of many universities and research institutes changed. Many public institutes moved from being a provider of basic research towards more application-centric research. To enable multiplication on a global scale and to merge competences, various research institutes formed networks that aim at providing direct solutions for business problems. The Auto-ID Labs are a prominent example. They represent a leading global network of academic research laboratories in the field of networked RFID. The labs consist of seven renowned research institutes, including the MIT Lab, the ETH Zurich Lab, the Cambridge Lab, the Fudan Lab, and the Keio Lab, located on four different continents. The goal of the Auto-ID Labs is to architect the 'Internet of things' and to provide an efficient infrastructure, which facilitates new business models and applications on the basis of the RFID technology.
(3) Outsourcing research: from make to buy. The industrial trend of reducing value chain activities to focus on core competencies has also affected the relationship between private, application-oriented businesses and research institutes. Following this trend, many companies cut their expenses in corporate basic research. As a consequence, numerous firms started to outsource research activities: The elevator company Schindler works together with the Institute of Applied Mathematics at the University of Cologne. On the basis of precise requirements, Schindler outsourced the development of genetic algorithms for its latest elevator control systems. In this regard, the Institute of Applied Mathematics became a knowledge and technology supplier at the very front end of Schindler's innovation process. Daimler outsourced much of its telematics research to several research institutes and universities. ABB outsourced its research on inspection robotics for its installations to a joint venture with the ETH Zurich. SAP has set up several decentralized research labs on campuses of universities, e.g., TU Darmstadt, ETH Zurich, and St. Gallen. Novartis is more and more relying on start-up firms and research institutions to fill the technology pipeline in research and preclinical development. Additionally, the outsourcing of research activities offers SMEs new possibilities to overcome the 'liabilities of smallness' (Gassmann and Keupp 2007). Earlier, due to resource constraints, many SMEs were not able to conduct basic research on their own. Thus, outsourcing scientific problems to research institutions allows them to improve their competitive position.

(4) Financing of research: from single-source to multiple-source funding. In recent years, the increasing cost pressure on many public households led to declining budgets in many public research institutions. Formerly largely financed by public money, many universities are forced to find additional ways of financing research activities and thus progressively seek third-party financing. Various universities increased their activities in technology transfer and in IP commercialization. For example, 25 Bavarian academic institutes formed a patent exploitation network, which is coordinated by a patent bureau. Under the roof of the Fraunhofer Institute, BayernPatent is responsible for the IP commercialization. Additionally, it assists inventors with the filing of patents and thereby works closely with local patent attorneys and offices. BayernPatent covers 100% of the patent filing and maintenance costs and thus minimizes the risk for the academic institutes. Revenues are split equally between the inventor (25%), the faculty (25%), the university (25%), and BayernPatent (25%). Whereas many universities moved from public to more private funding, numerous corporations made an opposite shift. In the 1980s, roughly 80% of Siemens' Corporate Technology was financed by uncommitted corporate funds. Today, more than 70% has to be financed by Corporate Technology on its own responsibility via third-party money or business units. This trend is also reflected by several other large firms such as ABB, Daimler, and Philips, which are forced to collaborate with universities and spend seed money in basic research. This supports the universities in their research.

(5) Research culture: from closed disciplinary to open interdisciplinary thinking. For decades, science was predominantly driven by disciplinary research.
Within the scientific community, research streams were influenced by a few dedicated and topic-specific journals. A narrow and disciplinary framing of research articles increased the probability of getting accepted. Additionally, the dogma of 'publish or perish' forced researchers to keep their work secret, at least in the early stage of competition, until submitting it to a scientific journal. This development led to scientific progress but also to disciplinary silos. Within the last decades, the number of interdisciplinary journals grew constantly and new forms of Internet-based collaboration emerged. Offering new ways of publication and collaboration, this development caused a change in thinking towards more open and interdisciplinary research. New scientific cross-links between various research fields offer novel platforms for publication. With this, the entire scientific landscape is in constant flux: new research areas emerge and others vanish.

(6) Focus of research: from broad universities to specific institutes. Within the scientific landscape, the specialization of research institutes in the public as well as the private sector increased. The requirement to be more cost-efficient forced research activities to be more closely related to the core competencies (responsible for value creation and profit generation) of the executing institutes. Within the public sector, specialized research institutes were formed that attract new researchers as well as private companies. Numerous examples can be found in the sector of environmental technologies in Germany. In the private sector, companies deliberately invest in basic research in strategic fields of major importance. For example, Sulzer Innotec became a specialist for computational fluid dynamics. Later, their know-how in simulation software was applied in the development of a wide variety of products.

An open science research system does not only improve academic research but also holds tremendous potential for industrial applications. Here, we need to learn more about how this interaction can be supported and managed. At large, current scientific contributions are still fragmented and far away from presenting a holistic picture of open science. Many knowledge gaps within various fields are evident. Clear policy implications are needed to address the issue of individual incentives for scientists. The promising idea of open science comes with a multitude of possible challenges, which need to be addressed in order to make this research paradigm work. The field of open science is still at an early stage and offers a wide field for future research. We invite researchers to contribute to this fascinating area and to help answer the many remaining questions.
Limitations
Given our research design, it is helpful to point out several limitations. Firstly, as is typical for qualitative research, our study describes a present phenomenon but is unable to quantify it. We are unable to provide insights on how 'open' research actually is and how many researchers are engaged in open science. Secondly, this study provides a general overview of this rather young occurrence. We do not compare disciplines or highlight promoting and hindering factors that would explain why certain disciplines are engaging in open science more actively than others. Thirdly, we do not look at individual drivers. The idea of open science and its key argument (making research processes as transparent and open as possible) is good for society as a whole; yet it does not take into consideration the individual researcher. In many cases what is good for society is not in the best interest of the researcher himself. Career incentives and institutional policies might hinder open science. Fourthly, our data emphasizes the German-speaking research and business community. To minimize local bias and ensure global validity, we interviewed managers and researchers with an international background. Practically, we selected researchers who previously published in international journals and managers who work in a global business environment to ensure that the findings are valid beyond the German-speaking world.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
"Business",
"Engineering",
"Economics"
] |
Predictors of Academic Achievement in Blended Learning: the Case of Data Science Minor
This paper is dedicated to studying patterns of learning behavior in connection with educational achievement in a multi-year undergraduate Data Science minor specialization for non-STEM students. We focus on analyzing predictors of academic achievement in blended learning, taking into account factors related to initial mathematics knowledge, specific traits of educational programs, online and offline learning engagement, and connections with peers. Robust Linear Regression and non-parametric statistical tests reveal a significant gap in the achievement of students from different educational programs. Achievement is not related to communication on the Q&A forum, while peers do have an effect on academic success: being better than nominated friends, as well as having friends among Teaching Assistants, boosts academic achievement.
Introduction and Background
A ubiquitous proliferation of demand for data science-related competences (data literacy) poses new challenges for universities all over the world. On the one hand, high market demand makes it easy to attract students from different disciplines [1], including non-STEM disciplines [2]. On the other hand, it makes the student audience extremely heterogeneous in terms of disciplinary background and the level of mathematical preparation [3], complicating the pedagogical design.
This paper is a part of the project dedicated to studying the patterns of learning behaviour in connection with educational achievement in multi-year undergraduate data science minor specialization.
Data science is a minor specialization that undergraduate students can choose to study for two years (2nd and 3rd year of a bachelor program). The specialization unites students from different non-STEM programs and departments ranging from economics to oriental studies. Students study the basics of computer science and different methods and techniques related to data science, text mining, and social network analysis. The first cohort of students started their studies in September 2015, and the second cohort started one year later in September 2016.
Traditional face-to-face classroom settings for the data science minor are supported by a virtual learning environment (VLE), a web-based software system that includes an RStudio Server IDE and a Q&A forum. Students can access the same working environment inside and outside the class due to the web-based nature of VLE. Also, the nature of the subject and the setting inform the choice of a blended learning approach for the minor.
One of the major issues in the course is the level of skill disparity among students. Some students, especially those with previous experience in programming and IT (e.g., those who had statistics and IT-related courses), tend to grasp the new material more quickly, while other students may struggle through the unfamiliar topics. Consequently, the course is supplemented with an online exercise component of seven modules complementing the regular class-based lessons, with each module addressing different aspects of data analysis. This strategy supports students who struggle to learn programming and follows recommendations to support advanced students [4]. Each module became available to students after the corresponding topic was covered in class by the instructor and remained open until the end of the first semester with an unlimited number of attempts. The completion of more than 60% of the assignments in the additional modules accounted for 10% of the final grade.
Having such a mixed group of students with very diverse levels of math knowledge, different motivation and learning involvement levels, as well as a combination of offline and online learning components, requires the creation of a baseline model connecting different aspects of student learning behaviours with academic achievement while considering student diversity and existing social ties. In addition, the model must answer the following research question: "How are different aspects of students' learning behaviour in a blended learning data science course connected with academic achievement, considering students' background diversity and existing social ties?" This paper discusses our first results of building such a model while combining estimates of peer effects generated by offline ties of nominated friendship and metrics, capturing various aspects of learning behaviour, and partially controlling for the diversity of students' backgrounds in math.
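Purely as an illustration of what such a baseline model could look like, the sketch below fits a robust linear regression of the final grade on background, engagement and peer-related predictors. The file name, column names and the Huber norm are hypothetical choices, not the study's actual data or specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per student, made-up column names
df = pd.read_csv("students.csv")
predictors = [
    "prior_gpa",          # previous achievement
    "math_score",         # initial mathematics knowledge
    "program_dummy",      # educational program indicator
    "online_completion",  # share of completed online module tasks
    "attendance_rate",    # offline engagement
    "n_friends",          # number of nominated friends
    "ta_friend",          # has a friend among teaching assistants (0/1)
]
X = sm.add_constant(df[predictors])
y = df["final_grade"]

# Robust linear regression; the Huber norm here is an assumption
model = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(model.summary())
```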
Current Research in Learning Behaviour and Educational Achievement
Previous educational achievement (GPA) is one of the most influential predictors of a student's academic achievement on a given course [3], [4], because it may reflect both the skills a student obtained during previous courses and the intention and motivation to succeed in future courses [7]. GPA also shows students' level of adaptation to the university's academic culture. Several studies have shown that higher grades in math-related courses are a positive indicator of advanced potential even in non-math-related courses [8], [9].
A student's behaviour and academic achievement are influenced by the diversity of social groups in which he or she participates. Different types of social networks (e.g., friendship and advice) have different effects on academic success [10], and the top-performing students can form a rich club where the communication is very active and fruitful, leading to higher final academic achievement of the members [11]. Meanwhile, low-performing students are deprived of access to these networks, having fewer chances to improve academic achievement.
In addition, peer instruction is considered an important aspect of CS courses. Specifically, [12] states this practice has been successful in the case of non-STEM students in CS courses. Students who were affiliated with a group demonstrated higher final academic achievement than those who got through the course by themselves [12].
Furthermore, research shows that the effect of the number of friends on academic achievement is nonlinear. At first, the relationship is positive, but after a certain threshold the trend changes: having too many connections leads to lower achievement. Maintaining a large and diverse set of connections requires resources, such as time and attention, that might otherwise be spent studying [10], [13].
The social environment also affects students' achievement through social comparison mechanisms [14], [15]. Comparison with more successful peers increases an individual's achievement, although the comparison decreases students' self-concept [16]. However, if the gap between the student and the reference group becomes too large, it discourages the student and leads to lower achievement [14].
The third group of relevant studies comes from the rapidly evolving area of learning analytics and educational data mining [17], [18]. Specifically, current developments in the field are connected with the appearance of large amounts of data about students' learning behaviour in VLEs and learning management systems (LMS). The analysis of this data may help in finding non-trivial patterns of students' learning behaviour as well as determinants of academic achievement [19].
The blended model of learning has certain features of both online and offline learning methods. One of the notable features is the automatic grading of certain types of assignments, which is usually provided by VLEs. Moreover, VLEs allow multiple attempts for the students to successfully accomplish an assignment, leading to two distinct strategies students might employ to accomplish the assigned tasks [20]. The first strategy assumes that students are cognitively involved in the task-solving process; it can be observed as a low ratio of wrong attempts. In contrast, students might use a trial-and-error approach. The latter strategy might indicate lower course engagement and lead to worse academic achievement in the future, even though the student eventually completes the assignment [20], [21].
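To make the distinction concrete, the snippet below sketches how a per-student wrong-attempt ratio could be computed from raw submission logs; the data frame and column names (student_id, task_id, is_correct) are hypothetical placeholders rather than the VLE's actual schema.

```python
import pandas as pd

# Hypothetical submission log: one row per submission attempt.
# Assumed columns: student_id, task_id, is_correct (bool).
log = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "task_id":    ["t1", "t1", "t2", "t1", "t2"],
    "is_correct": [False, True, True, False, False],
})

# Per-student ratio of wrong attempts: wrong submissions / all submissions.
# A high ratio is read as a trial-and-error strategy, a low ratio as a more
# deliberate, cognitively involved strategy.
wrong_ratio = (
    log.assign(is_wrong=~log["is_correct"])
       .groupby("student_id")["is_wrong"]
       .mean()
       .rename("wrong_attempt_ratio")
)
print(wrong_ratio)
```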
In addition, the assessment of students' data in the form of binary outcomes (the ratio of successful vs. unsuccessful attempts), despite its simplistic nature, might also be a good predictor. In particular, Petersen claims that existing tools used to predict academic achievement based on programming errors and debugging skills demonstrate major differences in predictive power depending on course content and the programming language used [22]. Nonetheless, the type of mistakes adds to the predictive ability of the model. For example, unsuccessful students make syntactic errors (and too many attempts), preventing them from solving the problems [23].
In this regard, the completion rate of non-obligatory assignments seems to be a useful indicator of academic achievement. Meanwhile, [24] found that the completion of additional non-mandatory assignments has a positive effect on academic achievement only after adjusting the results for previous programming experience.
Another predictor specific to the offline model of learning is class attendance. Research on the effect of lecture attendance on academic achievement in CS courses has demonstrated a negative relationship [25]. This result is partially explained by the availability of course materials online and the opportunities for independent study. Home assignment completion is associated with an increased final grade; however, the effect varies considerably depending on the year observed [25].
Thus, this paper explores three main dimensions of learning behaviour and student life that are related to students' achievement:
• Previous achievement, which can be operationalized either as general abilities or as the average grade the student obtained in previous years. Higher abilities indicate higher chances of succeeding in each subsequent course. This predictor is found to be more influential than parental status, and it is believed that achievement may also reflect a student's SES.
• Social environment: a student's friends can both enhance achievement (through networks of help and advice, as well as through inclusion in the rich club) and decrease it (if the student has too many friends or if the gap between his or her abilities and the achievements of others is too large).
• Engagement in offline and online learning: the more students are involved in learning and in activities related to the study process, the higher their achievement. The amount of overall training, additional tasks performed by the student, attendance, and other proxies of the student's interest in the course should be considered.
Data and Methodology
Data (Table 1) were extracted from the VLE and contain three main parts. Students' programming activity logs show how much the student coded overall, which is a proxy for general learning engagement: the more a student is interested in the minor, the more he or she practices coding while solving the tasks and goes beyond the standard tasks given to everybody. The overall number of code lines written by each student was log transformed because of its clearly exponential distribution and wide range, from 500 to 20,000 lines of code. These programming activity logs also provide data on non-obligatory student activity, specifically how many additional tasks students tried to perform and the percentage of wrong submissions.
Q&A forum communication indicates whether and to what extent the student is involved in online activities, e.g., whether he or she uses the Q&A forum at all, and the intensity of communication, measured through the number of posts, answers, comments, and votes.
Survey data come from a name-generator questionnaire that students were asked to fill in, providing information about their friends among same-year students and among teaching assistants. This information about the friends nominated by each participant allows us to test whether friendships with peers are connected with student performance. To capture some of the social comparison effects, a variable called grade gap was created, reflecting the difference between a student's individual achievement and the average achievement of his or her friends.
Previous achievement was measured by a student's previous year GPA. The dependent variable, academic achievement, is operationalized and measured as the sum of the raw scores for the following tests: two midterms and the final exam.
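As an illustration of how two of the derived variables described above — the log-transformed number of code lines and the grade gap relative to nominated friends — might be constructed, here is a minimal sketch; the data frames, column names, and values are hypothetical stand-ins, not the study's data or pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical per-student totals from the VLE logs and first-semester scores.
students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "code_lines": [650, 4200, 18000],
    "score_sem1": [55, 72, 80],
})

# Log transform of a heavily right-skewed count variable.
students["log_code_lines"] = np.log(students["code_lines"])

# Hypothetical friendship nominations from the name-generator survey.
friends = pd.DataFrame({
    "student_id": [1, 1, 2, 3],   # nominating student
    "friend_id":  [2, 3, 3, 2],   # nominated friend
})

# Grade gap: own achievement minus the mean achievement of nominated friends.
friend_scores = friends.merge(
    students[["student_id", "score_sem1"]].rename(
        columns={"student_id": "friend_id", "score_sem1": "friend_score"}),
    on="friend_id",
)
mean_friend_score = friend_scores.groupby("student_id")["friend_score"].mean()
students["grade_gap"] = students["score_sem1"] - students["student_id"].map(mean_friend_score)
print(students[["student_id", "log_code_lines", "grade_gap"]])
```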
To investigate the predictors' effects on the outcome, a robust linear regression with backward variable selection and the Kruskal-Wallis non-parametric test were employed.
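A minimal sketch of this analysis strategy in Python follows; the formula interface, variable names, and the backward-elimination rule (drop the least significant predictor until all remaining p-values fall below 0.05) are illustrative assumptions rather than the authors' exact procedure.

```python
import statsmodels.formula.api as smf

def backward_select(df, response, predictors, alpha=0.05):
    """Fit a robust linear model (Huber M-estimator) and iteratively drop
    the predictor with the largest p-value until all are below alpha."""
    remaining = list(predictors)
    while remaining:
        formula = f"{response} ~ " + " + ".join(remaining)
        fit = smf.rlm(formula, data=df).fit()
        pvals = fit.pvalues.drop("Intercept", errors="ignore")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit
        remaining.remove(worst)
    return None

# Illustrative usage with hypothetical column names:
# final_fit = backward_select(data, "achievement",
#     ["log_code_lines", "addit_tasks", "wrong_attempts",
#      "attendance", "gpa", "grade_gap"])
# print(final_fit.summary())
```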
Overall, 194 second-year students were officially enrolled in the data science specialization, with 189 cases in our data. Due to the small number of students from some educational programs, the programs were analytically split into three groups:
• Economics, management, and logistics
• Area studies, public administration, history, political science, philology, and law
• Sociology
Students of the sociology program were separated from all other programs for the following reason. Conventionally, sociology is closer to the second block of programs; however, based on an exploratory analysis of third-year students of the data science minor, sociology students reveal traits of students from both blocks of programs. In addition, the division that singles out economics and similar programs can be interpreted as a proxy for the level of quantitative skills required by the program, because a higher level of math skills and knowledge is expected there. In contrast, the group including area studies, philology, and other social sciences and humanities programs is more weakly connected with university-level mathematics.
Analysis and Results
The aim of this work is to determine the main predictors of student academic achievement in the blended learning model. Specifically, this work focuses on the factors associated with the following three dimensions of a student's academic life:
• Previous achievement and previous math knowledge (based on the student's educational major)
• Different types of connections with peers
• Offline and online learning activities
Using robust linear regression, significant factors are identified and their effects on a student's academic success are compared, providing an overview of the factors relevant to the achievement of students in a blended learning data science minor program at a Russian university.
The robust linear regression results are presented in Table 2 where the final model is a result of a backward selection procedure.
The effect of engagement in learning (measured by code quantity metrics) is significant and positive. Specifically, a 1-point increase in the log of the lines of code written (range of the variable: 4-10) is associated with nearly a 3-point increase in academic achievement. This variable can also serve as an indicator of self-guided work. Another interesting result is the insignificant effect of communication activity on the Q&A forum in any form (QA and QA intensity). As shown in Table 1, forum communication is perceived as a proxy for cognitive engagement in the learning process, as substantial discussions require a deep understanding of both the theoretical concepts behind data analysis and the practical implementation rules. Additional data on reading the Q&A forum, unavailable for this study, could help to explain the relationship between academic achievement and passive consumption of communication.
According to the model, there is no linear connection with the student's previous achievement (GPA) while controlling for other predictors.
The percentage of non-obligatory assignments completed (addit. tasks) and the percentage of wrong submission attempts (wrong attempts) on the online platform are complementary. The first indicates the overall extra learning activity, while the second indicates the strategy used to accomplish assignments: a student may make many wrong attempts hoping to stumble on the right solution, or may invest in a deeper understanding of the task and make only one or a few submissions. The strong negative effect of wrong attempts on academic achievement corresponds to other studies of blended learning settings [20].
As in [24], our model shows a significant positive effect of non-mandatory exercises: the submission of all extra assignments is associated with a 12.66-point increase in academic achievement.
Sociology students (program -soc) show no statistically significant direct increase in academic achievement compared to students from the other social sciences and humanities programs, whereas students from educational programs with a larger mathematics component (program -ecmanlog) show approximately a 5-point increase in mean academic achievement while controlling for other predictors. Another result is the difference in the effect of attendance for the programs with a larger mathematics component. Table 2 shows that the interaction of attendance with the educational program (attendance:program interaction term) has a significantly lower effect on academic achievement among students from economics, management, and logistics departments in comparison with the social sciences and humanities programs (excluding sociology). Existing research on CS1 courses shows a negative association between lecture attendance and academic achievement [25]. In our case, one possible explanation is that the academic performance of economics and management students, whose majors prepare them more effectively to study data science, depends less on in-class participation in their learning.
Furthermore, student gender (gender) demonstrates neither a significant direct connection nor an indirect (moderated) effect.
To study the peer effects, the difference in achievement between those who stated they have no friends (no friends) and those who stated they have friends was analysed; the two subsamples were compared using the Kruskal-Wallis test. According to the results, the null hypothesis was rejected (Kruskal-Wallis chi-squared = 10.401, df = 1, p-value < 0.01, M1 = 30, M2 = 37, n1 = 22, n2 = 170), and students with access to a friendship network have higher academic achievement than their peers without nominated friends, confirming the findings in [11].
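For reference, such a two-group comparison can be carried out with scipy's Kruskal-Wallis implementation; the score arrays below are placeholders, not the study's data.

```python
from scipy.stats import kruskal

# Placeholder achievement scores for the two subsamples
# (students without vs. with nominated friends).
no_friends_scores  = [22, 28, 31, 35, 30]
has_friends_scores = [33, 37, 40, 36, 39, 42]

stat, p_value = kruskal(no_friends_scores, has_friends_scores)
print(f"Kruskal-Wallis chi-squared = {stat:.3f}, p = {p_value:.4f}")
```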
The grade gap with friends (grade gap) demonstrates a significant positive connection with academic achievement: students who got higher grades than their friends did in the first semester tend to show an additional increase in academic achievement.
Conclusion
In this research, we analysed the main predictors of academic achievement for second-year non-STEM undergraduates in a blended data science course. The factors considered were related to initial mathematics knowledge, specific traits of educational programs, online and offline learning engagement, and connections with peers.
Students from social sciences and humanities educational programs have significantly lower achievement than students from economics, management, and logistics programs; moreover, the latter demonstrated a weaker positive connection between class attendance and academic achievement.
Also, communication on the educational Q&A forum, which the literature associates with higher cognitive involvement, showed no statistically significant connection with academic achievement when controlling for other predictors, requiring a deeper exploration of this and other achievement factors. Many of the active forum users also show a high level of engagement in coding and online exercises. Thus, a form of the Matthew effect can be proposed here: students engaged in the subject become even more actively engaged, while others do not benefit from additional forms of task discussion.
In addition, connections with peers have a significant effect on a student's achievement. Being more successful than their friends leads to an additional increase in a student's academic achievement. These effects seem to hold for all students regardless of educational level and specialization. One possible explanation is that peer support, especially when peers have more expertise in the subject, boosts a student's self-confidence. Self-concept is also enhanced by positive comparison with others; specifically, being better than the frame of reference makes a student more confident in any academic domain.
This study represents our first attempt to build a baseline model of academic achievement in a blended setting of an introductory data science course and has some limitations.
First, this work mostly focuses on metrics of learning behaviour and does not consider the effects of demographic factors, although they have been shown to have only minor effects (in comparison with GPA) in university settings [6].
Second, with regard to the insignificance of the previous achievement effect, scenarios in which the relationship between previous academic success and observed academic achievement is non-linear or mediated by other variables should be further explored. While there is a significant medium correlation between previous academic success and academic achievement (r = 0.43, n = 189, p < 0.01), previous success also correlates significantly with engagement in learning, e.g., code lines (r = 0.52, n = 189, p < 0.01) and class attendance (r = 0.44, n = 189, p < 0.01).
Lastly, a crude measure of peer effects was employed, without further investigation of their mechanisms using social network analysis models of peer effects.
We plan to continue our work in this direction, including measures of motivation, self-regulation, and social network analysis-based measures of peer effects, and explore ways to support student course motivation [26], [27] via educational technology [28], [29] interventions. Ksenia Tenisheva is a senior lecturer in the Department of Sociology at National Research University Higher School of Economics, Soyuza Pechatnikov, 16, in St. Petersburg, Russia. She also works as a junior research fellow at the Sociology of Education and Science Laboratory. She has a PhD degree in Sociology from HSE University. | 4,632.6 | 2019-03-14T00:00:00.000 | [
"Education",
"Computer Science",
"Mathematics"
] |
Parton Shower and NLO-Matching uncertainties in Higgs Boson Pair Production
We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. We observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.
Introduction
In the Standard Model (SM) the production of a pair of Higgs bosons at a hadron collider proceeds dominantly through the annihilation of gluons. As there is no direct coupling between the Higgs boson and gluons this process is mediated by intermediate massive quark loops. The top-quark loop contributions dominate by far, due to the large Yukawa coupling. Bottom-quark loops only contribute at the 1 % level to the total cross section at leading order (LO) and can thus safely be neglected in most situations. The scattering amplitudes relevant for the calculation of the top-quark contributions up to next-to-next-to leading order (NNLO) are known in the approximation of an infinitely heavy top-quark (commonly referred to as the HEFT approximation) [11,14,13,26,25,12]. However, the validity of this approximation is questionable for Higgs boson pair production due to the large momentum transfer required to produce the Higgs bosons. Techniques to systematically improve upon it were extensively studied in [19,33,24,26,25,32]. The full result, which is exact in the mass of the top-quark, was known only to leading order (LO) [16,23,38] until recently. This is due to the complexity of computing the next-to-leading order (NLO) virtual corrections which feature two-loop, four-point integrals with both massive internal propagators and massive external lines. They have recently been calculated through the numerical evaluation of all relevant two-loop integrals as part of the full NLO calculation in [6,5].
At small transverse momenta $p_\perp^{HH}$ of the Higgs boson pair, the accuracy of any fixed-order calculation is spoiled by the presence of large logarithms of the form $\alpha_s^n \log^m(p_\perp^{HH}/m_{HH})$.
They can be resummed to all orders using analytical resummation techniques which have been applied to Higgs boson pair production in [18]. Alternatively, parton shower simulations can be employed. In addition to providing a reliable transverse momentum spectrum at small $p_\perp^{HH}$, they also provide results that are fully differential in the kinematics of any soft and collinear QCD radiation. Standard techniques exist for the consistent matching of NLO fixed-order calculations to parton shower simulations [20,34]. They were recently applied to Higgs boson pair production in reference [27], where the MC@NLO and POWHEG matching techniques were used to combine the fixed-order NLO calculation with the Pythia parton shower [41,40]. The results of [27] suggest that the parton shower matching can have sizeable effects not only in the region of small $p_\perp^{HH}$, but also in the region of large $p_\perp^{HH}$, where one would expect the fixed-order calculation to be reliable and the approximations inherent to parton shower simulations to break down. These effects even exceed the scale uncertainties of the fixed-order calculation.
In this publication we aim to critically assess the origin and size of the aforementioned effects and associated uncertainties. For this purpose we implemented a fully generic and process independent NLO subtraction along with the corresponding parton shower matching techniques for loop-induced processes in the Monte Carlo event generator Sherpa [21]. This allows us to perform our studies using the two different showers that are implemented in Sherpa [39,42] within the same parton shower matching framework.
This publication is structured as follows. In Section 1 we describe in detail the setup of our calculation along with a brief review of the MC@NLO matching technique and the parton showers we used for our studies. We present the results of our simulations in Section 2, focusing on the origin and size of uncertainties that are inherent to the matching technique applied. We also point out crucial differences that arise when going from the HEFT approximation to the full calculation. Our conclusions are presented in Section 3.
Fixed-Order NLO Calculation
For the virtual two-loop amplitude we utilize the result of references [6,5], retaining the full finite top-quark mass effects. This amplitude was obtained by numerically evaluating all relevant 2-loop 4-point Feynman diagrams with up to 4 scales. We adopt the input parameters of reference [5], with $G_F = 1.1663787 \times 10^{-5}\,\mathrm{GeV}^{-2}$, the top-quark mass set to $m_t = 173$ GeV, the Higgs boson mass set to $m_H = 125$ GeV, and their widths neglected. We also adopt the choice of reference [5] for the factorization and renormalization scales, $\mu_F = \mu_R = m_{HH}/2$. Perturbative uncertainties in the fixed-order part of the calculation are estimated by independently varying these scales by factors of 0.5 and 2. All studies are performed with a hadronic center-of-mass energy of $\sqrt{s} = 14$ TeV. The NLO virtual amplitude is provided in the literature in the form of an interpolation grid in two Mandelstam variables, based on a fixed number of pre-computed phase-space points [27]. We extract the finite part of the UV-renormalized virtual amplitude in the Conventional Dimensional Regularization (CDR) scheme with residual IR singularities subtracted according to the Catani and Seymour scheme [10], as required by the Sherpa event generator, using relations (2.5) and (2.6) of reference [27].
The leading order one-loop squared amplitudes for the Born process and real emission contributions are provided by OpenLoops [9]. For the evaluation of tensor and scalar one-loop integrals, we employ the Collier library [15], CutTools [37], and OneLOop [44,43].
For the regularization and numerical cancellation of infrared divergences in the real-emission part of the calculation we employ the dipole subtraction scheme of Catani and Seymour [10]. We have re-implemented this scheme within Sherpa in a fully generic and process-independent way for loop-induced processes. This implementation is qualitatively equivalent to the implementation in one of Sherpa's internal matrix element generators Amegic++ [31,22], apart from the fact that color- and spin-correlated amplitudes are to be provided externally through generic interfaces. Through a dedicated interface to OpenLoops and the aforementioned tools, NLO calculations for loop-induced SM processes are thus fully automated (given the availability of the virtual two-loop corrections) in Sherpa and will become available in a public version of the code. We have validated our implementation in Sherpa by comparing our results for the total cross section, for the differential Higgs boson pair invariant mass distributions, and for the differential single Higgs boson transverse momentum distributions to those published in reference [5].
Parton Showers
We consider two parton showers for matching to the fixed-order NLO calculation. Both algorithms are dipole-type showers in which QCD radiation is generated coherently off color dipoles spanned by pairs of pre-existing partons. Both showers are publicly available as part of the Sherpa event generator package.
Due to the dipole character of the parton showers, their splitting kernels can be used for the purpose of fixed-order NLO subtraction, thus simplifying the implementation of parton shower matching. The CS shower [39] directly uses the splitting kernels of the original Catani-Seymour subtraction scheme for parton evolution. The Dire shower [42] uses splitting kernels that are modified in such a way as to reproduce the collinear anomalous dimensions of the DGLAP equations. For NLO matching to the Dire shower, we use a modified version of the original Catani-Seymour subtraction scheme that reflects these changes in the splitting kernels [29]. The kernels of both showers approximate real emission amplitudes arbitrarily well in the limit of soft and collinear momenta. Away from the soft and collinear regions, however, they differ.
A further crucial difference between the two algorithms is the choice of evolution variable, which we generically denote by t in the following. The choice of evolution variable together with the shower starting scale $\mu_{\rm PS}$ dictates how much of the phase space away from the soft and collinear regions is available to the parton shower, since the starting scale implements the phase space constraint $t < \mu_{\rm PS}^2$ (1). For the discussion of the evolution variables we focus on the first (hardest) emission in the production of a color-neutral final state of invariant mass $Q^2 = m_{HH}^2$. We illustrate the kinematics of the first emission, producing a final-state parton with momentum $p_j$ from the collision of two incoming massless partons with momenta $p_a$ and $p_b$, in Figure 1. It is useful to consider the variables v and w, which are closely related to the standard Mandelstam variables $\hat{t} = (p_a - p_j)^2$ and $\hat{u} = (p_b - p_j)^2$. Due to momentum conservation, $\hat{s} + \hat{t} + \hat{u} = Q^2$, which implies a relation between v and w (3). In terms of v and w, the evolution variable in Dire is given by expression (4), whose functional form implies that $t_{\rm Dire} < Q^2/4$ due to equation (3). It follows that for a parton shower starting scale of $\mu_{\rm PS}^2 = Q^2/4$, Dire behaves like a "power shower" in the sense that it populates the full phase space, since $t_{\rm Dire} < Q^2/4$ and thus (1) is trivially fulfilled. In the CS shower, the evolution variable is defined such that, for a given kinematic configuration, $t_{\rm CSS}$ is typically larger than $t_{\rm Dire}$, so that for a given value of $\mu_{\rm PS}$ the emission phase space of the CS shower is more restricted than that of Dire. It is worth noting that $\mu_{\rm PS}^2 = Q^2/4$ in particular does not correspond to a "power shower" when employing the CS shower; this choice in fact severely constrains the emission phase space, since v + w can get close to 1 and thereby give rise to large values of $t_{\rm CSS}$.
NLO Parton Shower Matching
In the following we will focus on the MC@NLO matching prescription [20], using the notation of [30] with no distinction between fixed-order NLO subtraction terms $D^{(S)}$ and parton shower matching terms $D^{(A)}$, since we use the parton shower splitting kernels both for parton evolution and for infrared subtraction and keep all phase space constraints explicit. We thus denote the sum of subtraction terms as a function of the real emission phase space by $D(\phi_R)$, where the real emission phase space $\phi_R = \phi_B \times \phi_1$ can be decomposed into the Born phase space $\phi_B$ and an extra one-particle emission phase space $\phi_1$. We then define the fixed-order differential seed cross sections $\bar{B}(\phi_B)$ and $H(\phi_R)$ in equations (6) and (7) in terms of the leading order (Born) term $B(\phi_B)$, the UV-subtracted virtual corrections $V(\phi_B)$, and the real-emission contributions $R(\phi_R)$, where $t(\phi_R)$ is the map from a kinematic real emission configuration to the parton shower evolution variable t. The Heaviside function $\Theta(\mu_{\rm PS}^2 - t(\phi_R))$ in (7) implements the constraint (1). For notational convenience, we will omit the explicit $\phi_R$-dependence and write $t(\phi_R) = t$ in the following. In terms of the quantities introduced above, the fixed-order total NLO cross section is given by (8). In MC@NLO, we generate events according to (9), where $t_0$ is the infrared cutoff scale of the parton shower and the modified Sudakov form factor $\Delta(t_0, t_1)$, which gives the probability for no emission to occur between scales $t_0$ and $t_1$ in the first parton shower step, is given by (10). The first line of (9) corresponds to events that have Born kinematics at the level of the fixed-order seed event, with weight $\bar{B}$ (S-events). They either do not undergo any emission above the infrared parton shower cutoff scale $t_0$ (first term in the square bracket) or they undergo their hardest emission at some scale t between $\mu_{\rm PS}^2$ and $t_0$ (second term in the square bracket). The second line of (9) corresponds to events with real-emission kinematics at the level of the fixed-order seed event and weight H (H-events). All events are treated further by the parton shower precisely as in the leading order case, apart from the S-events that have not undergone any emission, which are kept as they are.
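In the standard MC@NLO notation of [20,30], the quantities referred to above take the following schematic form; the expressions below are a reading aid inferred from the description in this section and may differ in detail (for example, in the exact placement of the phase-space constraint) from the paper's own equations (6)-(10).

```latex
% Seed cross sections (schematic): Born + virtual + integrated subtraction
% terms for S-events, subtracted real emission for H-events.
\bar{B}(\phi_B) = B(\phi_B) + V(\phi_B)
  + \int \mathrm{d}\phi_1\, D(\phi_B,\phi_1)\,\Theta\!\big(\mu_{\mathrm{PS}}^2 - t\big),
\qquad
H(\phi_R) = R(\phi_R) - D(\phi_R)\,\Theta\!\big(\mu_{\mathrm{PS}}^2 - t(\phi_R)\big).

% Event generation: the square bracket describes S-events (no emission above
% t_0, or a hardest emission between t_0 and \mu_PS^2); the last term H-events.
\mathrm{d}\sigma = \mathrm{d}\phi_B\,\bar{B}(\phi_B)
  \left[\Delta\!\big(t_0,\mu_{\mathrm{PS}}^2\big)
  + \int_{t_0}^{\mu_{\mathrm{PS}}^2} \mathrm{d}\phi_1\,
    \frac{D(\phi_B,\phi_1)}{B(\phi_B)}\,\Delta\!\big(t,\mu_{\mathrm{PS}}^2\big)\right]
  + \mathrm{d}\phi_R\, H(\phi_R).

% Modified Sudakov form factor: no-emission probability between t_0 and t_1.
\Delta(t_0,t_1) = \exp\!\left[-\int \mathrm{d}\phi_1\,
  \frac{D(\phi_B,\phi_1)}{B(\phi_B)}\,
  \Theta\!\big(t(\phi_1)-t_0\big)\,\Theta\!\big(t_1-t(\phi_1)\big)\right].
```

With these definitions the square bracket integrates to one, so the sum of S- and H-event contributions integrates to the fixed-order NLO cross section, as stated in the following paragraph.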
Since the square bracket in (9) integrates to 1, the total cross section and any observable that is insensitive to QCD radiation are unaltered in MC@NLO compared to the fixed-order NLO result. In fact, it can be shown that an MC@NLO event sample reproduces the fixed-order NLO result to order $\alpha_S$ relative to the Born for any infrared-safe observable [20]. The parametric NLO accuracy is therefore not spoiled by the parton shower matching.
Parton Shower Matching Uncertainties
As stated in the previous section, NLO parton shower matching according to the MC@NLO method preserves the parametric accuracy of the fixed-order NLO calculation. Nonetheless, deviations from fixed-order results can be numerically significant [30]. Such differences reflect genuine parton shower matching uncertainties; they can be particularly prominent for observables that are sensitive to real-emission configurations and thereby to the interplay between parton shower emissions and fixed-order real-emission configurations. We will therefore focus on the $p_\perp^{HH}$ distribution in the following section, comparing MC@NLO-matched parton shower simulations to fixed-order results with both the Dire and the CS shower.
In order to formally compare the MC@NLO result to a fixed-order prediction for this spectrum, we first consider a generic observable O that is insensitive to kinematic Born configurations. For such an observable we need to take into account H-events and parton shower emissions off S-events. At order $\alpha_S$ relative to the Born we obtain equation (12), where the first integral corresponds to S-events in which the parton shower has generated a non-vanishing value of O, and the second integral corresponds to H-events, for which a non-vanishing value of O is implied by the real-emission kinematics of the fixed-order seed event. In the tail of the distribution, where we can neglect the Sudakov suppression and set $\Delta = 1$, we obtain equation (13) after plugging in the definition of H (a schematic form of both equations is reproduced after the two conditions below). To order $\alpha_s$ we have $\bar{B} = B$ and the first integral cancels as required by the matching conditions, thus restoring the fixed-order result. This explicitly demonstrates how variations in the parton shower contributions induced by S-events are subtracted by the MC@NLO subtraction terms D in the definition of H. Numerically, however, this cancellation can be severely spoiled, potentially leading to large deviations from the fixed-order result. For the deviations to be significant, the term on the first line of equation (13) must be similar in size to the fixed-order term on its second line. One can therefore expect large deviations from the fixed-order calculation only if both of the following conditions are met: 1) The factor $\bar{B} - B$ dressed with the parton shower splitting kernels (the integrand on the first line of (13)) is comparable in magnitude to the real-emission matrix elements in R. This depends on the size of the NLO corrections that enter $\bar{B}$ and on the splitting kernels in the phase space region of hard emissions.
2) The phase space of interest is accessible to the parton shower so that the first integral in (13) has support in that region. This depends on the choice of µ PS and on the shower (through the functional form of t(φ R )).
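For readability, here is a schematic reconstruction of the order-$\alpha_S$ expectation value referred to above as equations (12) and (13), in the same notation as before; it is inferred from the surrounding description and is not a verbatim reproduction of the paper's equations.

```latex
% Schematic of (12): S-event first emissions plus H-events, for an observable O
% that vanishes on Born kinematics.
\langle O \rangle = \int \mathrm{d}\phi_R\,
  \bar{B}(\phi_B)\,\frac{D(\phi_R)}{B(\phi_B)}\,
  \Delta\!\big(t,\mu_{\mathrm{PS}}^2\big)\,\Theta\!\big(\mu_{\mathrm{PS}}^2-t\big)\,O(\phi_R)
  + \int \mathrm{d}\phi_R\, H(\phi_R)\,O(\phi_R).

% Schematic of (13): set \Delta \to 1 in the hard tail and insert
% H = R - D\,\Theta(\mu_PS^2 - t).
\langle O \rangle \approx \int \mathrm{d}\phi_R\,
  \frac{D(\phi_R)}{B(\phi_B)}\,\big[\bar{B}(\phi_B)-B(\phi_B)\big]\,
  \Theta\!\big(\mu_{\mathrm{PS}}^2-t\big)\,O(\phi_R)
  + \int \mathrm{d}\phi_R\, R(\phi_R)\,O(\phi_R).
```

To order $\alpha_S$, $\bar{B} = B$ and the first term vanishes, which is the cancellation invoked in condition 1) above; condition 2) concerns whether the parton shower can populate the phase space where that term would otherwise contribute.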
The formally sub-leading contributions originating from the parton shower matching in the first integral of (13) are, to a large extent, controlled by the choice of $\mu_{\rm PS}$. To assess the matching uncertainties we will therefore vary this parameter by factors of 2 and 0.5. With two different parton showers at our disposal we have an additional handle on these uncertainties through the functional form of $t(\phi_R)$. The nominal choice for $\mu_{\rm PS}$ in the CS shower will be $\mu_{\rm PS} = m_{HH}/2$, in line with $\mu_R$ and $\mu_F$. As outlined above, such a choice would open up the entire emission phase space in the case of the Dire shower. Our nominal choice for the Dire shower will therefore be $\mu_{\rm PS} = m_{HH}/4$, which allows us to perform both upward and downward variations.
Based on the argument presented above, one might expect to see large parton shower contributions in the high-$p_T$ tails of other processes with large K-factors. However, it is important to note that for such effects to be visible the $\bar{B} - B$ factor must remain large relative to the real-emission matrix element also when multiplied by the parton shower splitting kernels. In single Higgs boson production through gluon fusion, for example, one might anticipate a large shower contribution in the tail of the Higgs transverse momentum spectrum due to the large NLO K-factor. However, in this case the parton shower splitting kernels underestimate the real-emission matrix elements significantly, such that the parton shower contributions in the tail are very small in an MC@NLO-matched calculation [1,30]. For the parton showers considered in this work, this holds even when taking into account the full top-quark mass dependence in the real-emission matrix elements, which reduces the size of R by more than an order of magnitude in the tail of the transverse momentum distribution.
Leading Order Results
We start our discussion with predictions obtained in the simplest setup, using leading order matrix elements for inclusive Higgs boson pair production supplemented by a parton shower. This type of simulation will be referred to as "LO+PS" in what follows. Since the transverse momentum of the Higgs boson pair is zero at leading order, any non-zero value of this observable is entirely generated by the parton shower. As a reference, we use a fixed-order prediction obtained by simulating the process p p → H H j with leading order matrix elements. Figures 2 and 3 show the results of our comparison in the HEFT approximation and in the full theory, respectively. Comparing the full SM and the HEFT approximation, we observe qualitatively different parton shower effects. In the HEFT approximation, both parton showers significantly underestimate the fixed-order spectrum in the tail of the distribution. Even if the full phase space is made available to the showers, they do not reproduce the slowly falling transverse momentum spectrum predicted by the fixed-order HEFT matrix elements. To show this for the CS shower we also display LO+PS results obtained by setting the parton shower starting scale to the hadronic center-of-mass energy, $\mu_{\rm PS} = \sqrt{s}$. In the case of the Dire shower, the full phase space is already available for $\mu_{\rm PS} = m_{HH}/2$, which corresponds to the upper edge of the uncertainty band.
In the full SM, by contrast, for large enough values of the parton shower starting scale µ PS both parton showers overestimate the fixed-order prediction. For the CS shower, this effect is restricted to smaller transverse momenta, due to the choice of evolution variable. If we lift any phase space restriction in the CS shower, by setting µ PS = √ s, we observe that in the tail of the distribution the shower overestimates the fixed-order predictions by more than an order of magnitude. The upper edge of the Dire shower uncertainty band also overestimates the fixed-order prediction, although this feature is not as pronounced as for the CS shower.
It is therefore evident that the Born matrix elements dressed with splitting kernels can strongly overestimate the real emission matrix elements in the full SM and strongly underestimate the real emission matrix elements in the HEFT. The former effect is, however, to a certain extent limited by the phase space constraint implemented through the parton shower starting scales.
Naively, the large differences between the HEFT and the full SM simulations may seem surprising. However, the high-energy behaviour of the HEFT real emission amplitudes is unphysical, because the momentum transfers vastly exceed the top-quark mass that has been integrated out in the HEFT approximation. As a result, the spectrum calculated at fixed order falls off extremely slowly in the HEFT, and the parton shower kernels thus tend to underestimate the spectrum in the tail. A similar slow fall-off can be observed in the HEFT approximation for the Higgs boson transverse momentum in single Higgs boson production [4,17,7,36,8].
NLO Results
We start the discussion of NLO-matched parton shower simulations with the results of a HEFT treatment, shown in Figure 4. As discussed in Section 2.1 (and shown in Figure 2), the combination of Born matrix elements and parton shower splitting kernels strongly undershoots the full real-emission matrix elements when employing the HEFT approximation. We therefore expect the tail of the distributions to converge to the fixed-order result both for the CS shower and the Dire shower. As shown in Figure 4, this is indeed the case. In addition to a more precise description of the tail (compared to the LO+PS type simulations) we observe a reduction of the parton shower starting scale uncertainties. The individual variations of H- and S-event contributions are of order one for some values of $p_\perp^{HH}$ but cancel to a large extent in the sum. Moving on to the discussion of results in the full SM, we remind the reader of our findings in the corresponding LO+PS type simulations. In the full SM, the parton shower splitting kernels in combination with Born matrix elements overestimate the real-emission matrix elements (see Figure 3). The parton shower effects in the tail of the $p_\perp^{HH}$ distribution are therefore large. As shown in Figure 5, this also holds at NLO. The parton-shower-matched results converge to the fixed-order result in the tail for nominal choices of $\mu_{\rm PS}$. Upward variations of $\mu_{\rm PS}$, however (indicated by the upper edge of the blue uncertainty bands), lead to parton shower effects of up to +100 % even in the tail of the distribution. As shown in the lower panels of Figure 5, the excesses in the tail are indeed generated by parton shower emissions off S-events. The extent of these effects is limited by the phase space available to the parton shower, as determined by the choice of $\mu_{\rm PS}$ and the functional form of the evolution variable. We observe that results generated using the Dire shower, particularly for larger values of $\mu_{\rm PS}$, have a different shape than those generated with the CS shower.
For the MC@NLO algorithm, by construction, the large parton shower effects in the tail should be cancelled to first order in $\alpha_S$. As outlined in Section 1.4, any mis-cancellation is due to a numerically large discrepancy between B and $\bar{B}$. We demonstrate this explicitly in Figure 6, where we show a modified Dire MC@NLO prediction with B substituted for $\bar{B}$, leading to a complete cancellation of the first integral in (13). This procedure eliminates large parts of the excess in the tail independently of $\mu_{\rm PS}$, as anticipated. Variations in S- and H-event contributions remain large, as shown in the lower panels of Figure 6, but they cancel in the sum. The procedure of replacing $\bar{B}$ with B would, of course, spoil the NLO accuracy of any inclusive observable, but it allows us here to demonstrate the origin of the discrepancy between the showered and fixed-order results in the tail of the $p_\perp^{HH}$ distribution.
In the HEFT approximation one may naively expect effects of similar size in the tail of the distributions. As demonstrated using a LO+PS simulation and as shown in Figure 2, however, the fixed-order real emission contributions completely dominate in this region. The bulk of the contributions in the tail are hence generated by the second integral of (13). As a result, the relative impact of parton shower effects in the tail remains small as shown in Figure 4.
Comparison to the Literature
In Figure 7 we compare our results for the $p_\perp^{HH}$ spectrum to the NLO parton-shower-matched results presented in reference [27]. These results were obtained with the Pythia 8 shower [41,40] interfaced to MadGraph5 aMC@NLO [3,28] and POWHEG BOX [2] for matching according to the MC@NLO method and the POWHEG method [34], respectively. In MadGraph5 aMC@NLO the nominal value of $\mu_{\rm PS}$ is set randomly in the interval $[0.1\,H_T/2,\ H_T/2]$, where $H_T$ is the sum of the transverse masses of the Higgs bosons. For the simulations based on MadGraph5 aMC@NLO and Pythia we show uncertainty bands that were obtained by varying the nominal parton shower starting scale by factors of 2 and 0.5. The POWHEG matching prescription can be recovered within the MC@NLO framework by setting the parton shower starting scale to the collider energy, $\mu_{\rm PS} = \sqrt{s}$, and by setting D = R [30]. Therefore, lacking a natural equivalent to $\mu_{\rm PS}$ in the POWHEG framework, we compare only to nominal POWHEG predictions produced with the hdamp parameter set to 250 GeV, as described in [27].
Focusing on the region $p_\perp^{HH} > 100$ GeV, we note that all MC@NLO predictions considered here are generally compatible within the uncertainty bands. However, the agreement between the nominal results of our simulations and the fixed-order result is much better in the tail. The POWHEG results exhibit a very large excess in the tail that is not covered by the uncertainty bands of our Sherpa predictions. Similar discrepancies between MC@NLO and POWHEG have been observed in the context of other processes [1,30] and can be attributed to the large phase space available to the parton shower as a result of setting $\mu_{\rm PS} = \sqrt{s}$ and to the numerically large discrepancy between $\bar{B}$ and B [35]. As described in Section 1.2, the former can be achieved in Dire by setting $\mu_{\rm PS} = m_{HH}/2$, which is represented by the upper edge of the uncertainty band around the Dire prediction. We note that the shape of this curve is in fact most similar to the POWHEG prediction in the tail of the distribution.
Comparing the different uncertainty bands themselves, we observe large differences. The shapes of the uncertainty bands obtained with MadGraph5 aMC@NLO and with the CS shower are somewhat similar, with a peak around 300 GeV, but the size of the uncertainty band around the MadGraph5 aMC@NLO result is much larger throughout. The uncertainties on the Dire prediction form a more evenly shaped band.
Differences in the region of small transverse momenta are not fully covered by the µ PS -variation bands. Although we expect these variations to be indicative of NLO-matching uncertainties, we do not expect them to cover all parton shower uncertainties. These include, but are not limited to, ambiguities in the choice of a kinematic recoil scheme and ambiguities in the choice of the renormalization scales for the strong coupling in the splitting kernels.
In Figure 8 we show a comparison to the calculation of reference [18], which employed analytic next-to-leading-log (NLL) resummation techniques instead of parton showers. We observe good agreement within the uncertainties except near the peak region and the region around $p_\perp^{HH} \approx 100$ GeV, where we find discrepancies of about 5 % that are not fully covered by our uncertainty bands. Taking into account the resummation scale uncertainties on the analytic results (not shown in Figure 8), which are of the order of 3 % in the peak region and 10 % near $p_\perp^{HH} = 100$ GeV [18], we consider the observed agreement satisfactory.
Other Observables
Having discussed the Higgs boson pair transverse momentum at length, we briefly discuss parton shower effects on a number of other observables.
Lorentz-invariant observables that depend only on the momenta of the Higgs bosons are not affected by the parton shower. The kinematics of the Higgs boson pair is only altered by emissions off dipoles spanned by two initial-state partons, and the recoil generated by such an emission affects the Higgs boson pair only through a Lorentz boost. Lorentz-invariant quantities like the Higgs boson pair invariant mass are therefore not affected by the parton shower. We checked our simulations by inspecting the Higgs boson pair invariant mass distributions and observe agreement within the statistical uncertainties of well below one percent. Figure 9 shows the leading jet transverse momentum $p_\perp^j$. As opposed to the Higgs boson pair, the parton emitted in the hardest emission off an S-event and the final-state parton in an H-event are affected strongly by secondary emissions, because the kinematics are altered by final-final and final-initial dipoles. Such emissions decorrelate the leading jet and the Higgs boson pair momenta, so parton shower effects are qualitatively different here. The effect of the parton shower is generally moderate, and the spectrum remains compatible with the fixed-order prediction within the scale uncertainties. It is worth noting that even the full $\mu_{\rm PS}$-variation bands remain within the uncertainty bands of the fixed-order calculation. This picture changes drastically when considering observables that are more sensitive to high-multiplicity final states. As an example we consider the differential $H_T$ distribution, defined as the scalar sum of jet transverse momenta, $H_T = \sum_i p_\perp^{j_i}$, where the index i labels all jets in the respective event. In a parton shower event, the total energy is typically distributed among a larger number of jets than in a fixed-order calculation. Their scalar contributions to $H_T$ are added, giving rise to larger values of $H_T$ than for the fixed-order NLO calculation. We show this effect in Figure 9. Comparing the Dire and CS shower predictions, we note that the uncertainty bands overlap but that the shower starting scale variations are much larger for the CS shower.
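As a reading aid, here is a minimal sketch of how $H_T$ would be computed per event from reconstructed jets; the event format (a plain list of jet four-momenta) is a hypothetical convenience, not tied to any particular analysis framework.

```python
import math

def h_t(jet_momenta):
    """Scalar sum of jet transverse momenta for one event.

    jet_momenta: iterable of (px, py, pz, E) tuples for all jets
    passing the analysis jet definition.
    """
    return sum(math.hypot(px, py) for px, py, _, _ in jet_momenta)

# Example: a three-jet event (momenta in GeV).
jets = [(120.0, 30.0, 250.0, 300.0),
        (-80.0, -10.0, 40.0, 95.0),
        (20.0, 25.0, -5.0, 35.0)]
print(f"H_T = {h_t(jets):.1f} GeV")
```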
In Figure 10 we show the azimuthal separation between the Higgs bosons ∆φ HH . At leading order, the momenta of the Higgs bosons are perfectly correlated due to momentum conservation. Only in events with additional radiation can one observe a non-trivial distribution of the azimuthal separation between the Higgs bosons. As shown in Figure 10, parton shower corrections to the fixed-order result are mostly covered by the fixed-order uncertainties except in the region of ∆φ HH = π which corresponds to back-to-back configurations and which is sensitive to soft QCD emissions.
Also shown in Figure 10 is the transverse momentum of a randomly chosen Higgs boson. The effect of a parton shower emission on the transverse momentum of a given Higgs boson is random, either decreasing or increasing its value. If the distribution were completely flat, any parton shower effects would therefore average out. Since the distribution is falling, the effect of increasing the transverse momenta of low-$p_\perp$ Higgs bosons is not counterbalanced by the effect of decreasing the transverse momenta of high-$p_\perp$ Higgs bosons, thus inducing a slope relative to the fixed-order result. This effect is small but clearly visible in Figure 10.
Conclusions
We have presented a study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC. We assessed these uncertainties by matching the fixedorder NLO calculation to two dipole shower algorithms in the Sherpa event generator according to the MC@NLO framework. The interplay between fixed-order real emission contributions and parton shower emissions was studied in detail through variations of the parton shower starting scale. We find large matching uncertainties that exceed the fixed-order uncertainties even in regions of phase space where the fixed-order calculation is well motivated and where parton shower matching effects are expected to be small. Our nominal predictions are in good agreement with the fixed-order result in these regions, however. A comparison to MC@NLO matched results from the literature revealed qualitative differences which are, nevertheless, compatible within the large uncertainties. We observe larger differences in a comparison to POWHEG predictions in the tail of the transverse momentum spectrum, where POWHEG overestimates the fixed-order spectrum by a factor of 2. We find reasonable agreement throughout between our results and those obtained through analytic resummation techniques. | 7,553 | 2017-11-09T00:00:00.000 | [
"Physics"
] |
Photo-Responsive Carbon Capture over Metalloporphyrin-C60 Metal-Organic Frameworks via Charge-Transfer
Great efforts have been devoted to the study of photo-responsive adsorption, but its current methodology largely depends on well-defined photochromic units and their photo-driven molecular deformation. Here, a methodology for fabricating nondeforming photo-responsive sorbents is successfully exploited. With C60-fullerene doped into metalloporphyrin metal-organic frameworks (PCN-M, M = Fe, Co, or Ni) and interacting intensively with the metalloporphyrin sites, effective charge-transfer can be achieved over the metalloporphyrin-C60 architectures once excited by light at 350 to 780 nm. The electron density distribution and the resultant adsorption activity are thus changed by the excited states, which are also stable enough to meet the timescale of microscopic adsorption equilibrium. The charge-transfer over the Co(II)-porphyrin-C60 site is proved to be more efficient than over the Fe(II)- and Ni(II)-porphyrin-C60 sites, as well as over all the bare metalloporphyrin sites, so the CO2 adsorption capacity (CAC; at 0 °C and 1 bar) over the C60-doped PCN-Co can be largely improved from 2.05 mmol g−1 in the darkness to 2.69 mmol g−1 with light, an increase of 31%, in contrast to the photo-irresponsive CAC over all C60-undoped PCN-M sorbents and the photo-induced loss of CAC over C60 alone.
Introduction
Traditional adsorption separation technologies such as pressure swing adsorption, temperature swing adsorption, and vacuum swing adsorption always require large quantities of power [1][2][3][4]. Stimuli-responsive adsorption separation promises high adsorption capability and selectivity without an additional power burden, for example guest-molecular and temperature-responsive adsorption [5][6][7]. In view of the great market prospects of solar energy, introducing optical factors into adsorption separation is attractive, and thus great efforts have been devoted to the study of photo-responsive or photo-stimulated sorbents [8][9][10][11]. The majority of current photo-responsive sorbents are designed to exploit the deformation of photochromic units [12][13][14], such as azobenzenes, diarylethenes, and spiropyrans, which are treated as guest molecules to be integrated within host materials like metal-organic frameworks (MOFs), covalent-organic frameworks (COFs), and mesoporous zeolites [15][16][17]. Once the photochromic units deform owing to photo-stimulation, e.g., the trans-to-cis isomerization of azobenzene or the open-to-closed diarylethene ring under ultraviolet (UV) irradiation, the steric effect or the polarity near the adsorption sites is changed, resulting in a photo-modulated adsorption capability of the host material [18][19][20][21][22]. However, in most cases, the photo-responsive sorbents so prepared only exhibit a decrease in adsorption capability under light, owing to the limitations of the deforming mechanism.
The photo-modulated adsorption capability does not necessarily depend on the deformation of photochromic units once we realize that the specific adsorption capability exhibited by an adsorption site is ultimately attributed to its specific electron density distribution (EDD), which is invariable only at the ground state, according to the Hohenberg-Kohn theorem [23]. Once the adsorption site is excited by photo-stimulation, the EDD can evolve, and thus a modulated adsorption capability can be expected even without molecular deformation [24,25]. However, it is difficult to dramatically alter the EDD, because the electron and hole densities largely overlap for local excitations with long lifetimes, whereas a charge-transfer (CT) excitation that can effectively alter the EDD is easily quenched and thus cannot meet the timescale of microscopic adsorption equilibrium. The areas of artificial photosynthesis and organic photovoltaics, which concern light harvesting and conversion, offer useful experience here [26][27][28]. In these areas, electron donor-acceptor systems were fabricated to attain long-lived CT states. In particular, fullerenes, as efficient π-electron acceptors owing to highly delocalized π-electrons over the 3-dimensional π-sphere, can be coupled with porphyrins as electron donors to form diverse dyads with efficient CT and energy-transfer processes [29]. For example, the CT state resulting from the excitation of a triad, constructed by linking C60 and bis(3,4,5-trimethoxyphenyl)aniline to Al(III)-porphyrin, lies energetically 1.50 eV above the ground state [30].
In this research, we extend the methodology of photoinduced CT states to photo-responsive adsorption separation for the first time. Without deformation of photochromic units, a nondeforming photo-responsive methodology that efficiently modulates the EDD of adsorption sites is realized through photoinduced CT. We employed the metalloporphyrin MOF PCN-222-M [PCN-M for short, with M = Fe(II), Co(II), or Ni(II)] and the fullerene C60 to fabricate the corresponding composite materials, code-named CPCN-M, which are then used as sorbents for the photoinduced selective adsorption of CO2 (Fig. 1). As a promising approach for carbon capture, adsorption separation has drawn much attention [31,32]. In particular, direct air capture (DAC), as the new generation of carbon capture technology, may have to mimic biological photosynthetic carbon sequestration [33,34]. For example, microalgae cultures show promise as a CO2 sink for atmospheric carbon and as a sustainable source of food and chemical feedstocks [35]. Therefore, a sorbent that exhibits enhanced CO2 adsorption capability in sunlight is appealing. Here, all the C60-undoped PCN-Ms do not exhibit noticeable changes in CO2 adsorption capability under UV-visible (UV-Vis) irradiation; meanwhile, the CO2 uptake capacity over C60 only decreases with UV-Vis irradiation. In contrast, the composite sorbents CPCN-Ms exhibit marked photo-responsiveness in terms of CO2 adsorption capability. For instance, the CO2 adsorption capacity (0 °C, 1 bar) of CPCN-Co is elevated from 2.05 mmol g−1 in the darkness to 2.69 mmol g−1 under UV-Vis irradiation, an increase of 31%. Moreover, even though the porphyrin-coordinated metals in CPCN-Ms, i.e., Fe, Co, and Ni, differ by only one electron, we show that their CT modes and the resulting macroscopic adsorption performances under UV-Vis irradiation are totally different from each other.
Materials characteristics
Unless otherwise noted, the mass ratio between C60 and PCN-M in CPCN-M is 1:8, which exhibits better adsorption capability and more obvious photo-responsiveness than other mass ratios, as discussed in the following section. The crystalline grains of as-prepared CPCN-Ms are neat and uniform, and introducing C60 does not hinder successful crystallization (Fig. S1). This is further confirmed by x-ray powder diffraction (XRPD): the XRPD patterns of CPCN-Ms, which are also unchanged by the type of porphyrin-coordinated metal, are consistent with those reported for PCN-M (Fig. S2) [36][37][38]. The successful preparation of CPCN-Ms is supported by the Fourier transform infrared (FTIR) spectra: the characteristic band of C60 at 1180 cm−1 is retained in the FTIR spectra of CPCN-Ms, in which the characteristic bands at 1006 cm−1, indicating the presence of porphin-N-metal bonds, can also be seen (Fig. S3). In the high-resolution electron microscopy (HREM) images (Fig. S4), structured C60 aggregates attached to the MOF crystal grains can be seen occasionally, and a considerable number of C60 molecules ought to be monodispersed among the crystal lattices of the MOFs, manifested as bright spots with a size of 0.7 nm at an accelerating voltage of 200 kV [39,40]. Moreover, owing to the doped C60 molecules, the observed lattice spacing (ca. 2.6 nm) of CPCN-M is stretched compared with the theoretical value (ca. 2.1 nm) of the PCN-M [200] plane, and the lattice is somewhat distorted. The crystal forms of the different CPCN-Ms are similar to each other, as are their textural properties. As shown in Fig. 2A, all the N2 adsorption-desorption isotherms of CPCN-Ms are of the typical type IV and exhibit a steep increase at low P/P0 and a further increase at P/P0 = 0.25, suggesting both microporosity and mesoporosity. The Brunauer-Emmett-Teller (BET) specific surface areas (S_BET) of CPCN-Fe, CPCN-Co, and CPCN-Ni are 2100, 2160, and 2130 m2 g−1, respectively (Table S1). Note that the S_BET of C60 is lower than 10 m2 g−1 (Fig. S5), so the high S_BET values of CPCN-Ms arise mainly from the host PCN-Ms. There are two types of pores in all CPCN-Ms, with sizes of 1.4 and 3.0 nm (Fig. 2A, inset), assigned to their triangular micro- and hexagonal meso-channels, respectively. The undifferentiated textural properties of CPCN-Ms ensure that their adsorption performances depend only on the intrinsic activity of the adsorption site, i.e., the metalloporphyrin ring, either at the ground state or at an excited state.
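For context, the BET specific surface area is typically obtained by fitting the linearized BET equation to the N2 isotherm in the low relative-pressure range; the sketch below uses synthetic isotherm points and the standard N2 cross-section, and is not the authors' data or analysis code.

```python
import numpy as np

# Synthetic N2 adsorption isotherm points in the BET range (77 K).
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # relative pressure P/P0
q_ads = np.array([14.5, 16.8, 18.4, 19.9, 21.3])    # adsorbed amount, mmol/g

# Linearized BET equation:
# (P/P0) / [q (1 - P/P0)] = 1/(q_m*C) + (C-1)/(q_m*C) * (P/P0)
y = p_rel / (q_ads * (1.0 - p_rel))
slope, intercept = np.polyfit(p_rel, y, 1)
q_m = 1.0 / (slope + intercept)                      # monolayer capacity, mmol/g

# Surface area = monolayer moles * Avogadro's number * N2 cross-section.
N_A = 6.022e23
sigma_N2 = 0.162e-18                                 # m^2 per N2 molecule
s_bet = q_m * 1e-3 * N_A * sigma_N2                  # m^2 per gram
print(f"S_BET ~ {s_bet:.0f} m2/g")
```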
Although the dispersed C60 ought to interact with the host PCN-M via van der Waals forces [41], the interaction can be shown to be intense. According to the thermogravimetric (TG) profiles, a weight loss above 650 °C is observed for all CPCN-M samples, implying an intense interaction between C60 and PCN-M (Fig. S6). Moreover, the residual weights shown in the TG profiles conform to the theoretical ash contents of CPCN-Ms. The H-nuclear magnetic resonance (H-NMR) spectra provide more convincing evidence. Taking the H-NMR of PCN-Co and CPCN-Co as an example (Fig. S7), owing to the shielding effect caused by C60, the chemical shift δ of the porphin ring-H moves to higher magnetic field compared with that of PCN-Co; correspondingly, the δ of both the proximal and the distal phenyl-H moves to lower magnetic field due to the deshielding effect. The intense interaction between C60 and the host PCN-M further influences the apparent valences of the porphyrin-coordinated metals. In Fig. 2B, obvious shake-up peaks can be observed in all x-ray photoelectron spectroscopy (XPS) spectra of CPCN-Ms, indicating that the introduced metals exist in the high-valence state, i.e., Fe(II), Co(II), or Ni(II); the bivalent states appear as deconvolved peaks under the corresponding 2p3/2 bands, such as those located at 708.7 and 709.9 eV for Fe(II), 781.7 eV for Co(II), and 852.3 eV for Ni(II), while other deconvolved peaks located at lower binding energies indicate lower apparent valences for all CPCN-Ms. For example, the peaks located at 707.7 and 706.8 eV for CPCN-Fe should arise from the porphyrin ring with its large conjugated orbitals donating electrons, and from the C60-porphyrin ring cooperatively donating electrons, to the coordinated Fe(II), respectively. The cooperative electron donation from C60 and the porphyrin ring is so intense that the coordinated Fe(II) exhibits a tendency to be reduced toward Fe(0) [42]. Such intense interaction between C60 and the porphyrin-coordinated metals is crucial for the photoinduced adsorption because the core metal is the decisive factor for CT and the formation of excited states.
Photoinduced adsorption
We have previously shown that although metalloporphyrins and their derivatives can be excited to various orders with UV or UV-Vis irradiation, de-excitation processes such as internal conversion, vibrational relaxation, and intersystem crossing must be rapid, and ultimately the most stable low-order excited states dominate the alteration of the EDD [24,25]. Without molecular deformation, an excitation-altered EDD that differs substantially from the ground-state EDD requires an effective CT as a prerequisite, and the lifetime of the low-order excited state must be long enough to meet the timescale of molecular adsorption equilibrium (~10−6 s) [43,44]. With 420-nm irradiation, phosphorescent radiation related to the low-order excited states can be detected for all CPCN-Ms (Fig. S8). The corresponding phosphorescence decay profiles show that the effective lifetimes of the excited states of CPCN-Fe, CPCN-Co, and CPCN-Ni reach 41, 44, and 35 μs, respectively (Fig. 2C). These lifetimes are long enough to meet the molecular adsorption equilibrium. In contrast, although phosphorescent radiation from C60 can also be detected, its effective lifetime is only 6 μs (Fig. S9). Moreover, in view of the poor textural properties of C60 mentioned above, it is safe to say that isolated C60 aggregates, if any, would not perturb the investigation of the adsorption capabilities of the composite sorbents at either the ground state or the excited state.
Owing to their intrinsically high S_BET values and microporosity, CPCN-Ms exhibit selective adsorption of CO2 at the ground state. As shown in Fig. 3A and Table S1, the adsorption capacities of CPCN-Fe, CPCN-Co, and CPCN-Ni for CO2 in the darkness reach 2.08, 2.05, and 2.43 mmol g−1 at 0 °C and 1 bar, whereas those for N2 are only 0.20, 0.14, and 0.13 mmol g−1, respectively. The initial selectivities of CO2 over N2 calculated with ideal adsorption solution theory reach 124, 132, and 144, respectively (Fig. S10). For the photoinduced adsorption experiments, UV-Vis light with wavelengths of 350 to 780 nm was employed to sufficiently excite the sorbents, because CPCN-Ms exhibit strong absorption over a wide UV-Vis range (Fig. S11). Under UV-Vis light, the CO2 adsorption capacities of CPCN-Fe and CPCN-Co are elevated to 2.50 and 2.69 mmol g−1 at 0 °C and 1 bar, increases of 20% and 31%, respectively, while that of CPCN-Ni decreases to 1.99 mmol g−1, a loss of 18% (Fig. 3A, D, and E and Table S1). In contrast to the markedly changed CO2 adsorption over the composites, none of the undoped materials, i.e., C60, PCN-Fe, PCN-Co, and PCN-Ni, exhibits obvious photo-responsiveness in terms of selective CO2 adsorption (Fig. 3B and C). It is no surprise that the CO2 uptake capacity of C60 at 0 °C and 1 bar is only 0.11 mmol g−1 in the darkness and decreases to 0.06 mmol g−1 under UV-Vis irradiation, because its textural properties are poor, as mentioned above, and, as discussed below, the decrease of adsorption activity over C60 at the excited state must cause CO2 desorption. In contrast to the obvious photodesorption of CO2 over C60, a noteworthy fact is that the CO2 adsorption isotherms under UV-Vis almost coincide with those in the darkness for all PCN-Ms. This means that the photo-responsiveness of pristine PCN-Ms, if any, is too weak to appreciably alter the CO2 adsorption capability. These control experiments further verify the necessity of doping C60 into PCN-Ms to generate effective photo-responsiveness. Nevertheless, the doping amount of C60 should be optimized to balance the CO2 adsorption capacity and the photo-responsiveness, and the C60/PCN-M mass ratio of 1:8 proves to be appropriate (Fig. S12). The photo-responsiveness exhibited by CPCN-Ms is strong enough to be observable even at 25 °C (Fig. S13). Among the CPCN-M sorbents, the photoinduced CO2 adsorption performance of CPCN-Co is competitive with other representative photo-responsive CO2 sorbents reported during the past decade, especially considering that the majority of the reported photo-responsive sorbents had to depend on deforming units and that UV-Vis irradiation only caused desorption over those sorbents (Table S2) [19,20,45-52]. Moreover, as the representative CPCN-M sorbent, CPCN-Co shows ideal ex situ reversibility: its CO2 adsorption capacity both at the ground state and under UV-Vis irradiation is well maintained even after five cycles (Fig. S14).
Mechanisms of photo-responsiveness
As mentioned above, owing to the C60 doping in the host PCN-Ms, all CPCN-Ms exhibit marked nondeforming photo-responsiveness in terms of CO2 adsorption capability: the photo-responsiveness of CPCN-Fe and CPCN-Co appears as a photo-gained CO2 adsorption capability, whereas that of CPCN-Ni appears as a photo-induced loss of CO2 uptake capacity. In contrast, none of the host PCN-Ms exhibits an obvious photo-modulated effect on CO2 adsorption compared with the case without UV-Vis light. The underlying mechanisms are revealed through calculations based on density functional theory (DFT) and time-dependent DFT (TDDFT). Here, tetraphenylporphyrin-M (TPP-M, M = Fe, Co, or Ni) is employed to simulate the corresponding adsorption site of PCN-M [24], and the C60-TPP-M architecture (CTPP-M), in which C60 is located along the c axis of the porphin ring, is used to simulate that of CPCN-M, following Guldi's work [41].
For non-bonded interactions such as adsorption, we have shown that the nature of the adsorption site can be indicated by the molecular surface electrostatic potential (ESP) [21,24,53]. A negative ESP area can be seen over TPP-Ms even at the ground state, contributed mainly by the conjugated electrons of the porphin-N atoms; this negative ESP interacts with the electron-deficient π₃⁴ orbitals of the CO2 molecule through a strong van der Waals force to capture it (Fig. S15). In contrast, the electron-full π₂⁴ and σ₂² orbitals of the N2 molecule are much less prone to be induced by the negative ESP, which results in the high selectivity of CO2 over N2 at the adsorption sites. Once TPP-M is excited, the original ground-state EDD can be altered such that the maximal negative ESP is enhanced, which can induce and capture the CO2 molecule more intensely. However, the enhancement of the negative ESP seems to be limited. For example, the maximal negative ESP over TPP-Co changes only from −30.0 meV at the ground state to −30.8 meV at the excited state. Such an enhancement might be just enough to offset the repulsion among the CO2 molecules that gather near the adsorption site owing to the enhanced induction force of the negative ESP. Therefore, there is no significant difference between the CO2 adsorption isotherms in the darkness and those under UV-Vis light. As for CTPP-Ms, the excited states alter the EDD and the resultant ESP over the porphin ring powerfully owing to the presence of C60 (Fig. 4A). The very different response modes of the abovementioned adsorption sites to the UV-Vis excitation should be ascribed to the core metals and to whether C60 is present. According to the electron-hole distribution analysis for all the adsorption sites at excited states [54,55], both electrons and holes are mainly contributed by the core metals (Table S3). The DFT calculations show that all TPP-Ms and CTPP-Ms adopt inner-orbital configurations, so the electronic configurations of the coordinated metals should be Fe-d[22110], Co-d[22210], and Ni-d[22220]. Because Fe and Co possess 2 and 1 unpaired d-electrons, respectively, they are prone to accept the conjugated electrons from the porphin ring once excited. In contrast, Ni is prone to donate its electrons at the excited state. This is why the core metal can participate in the EDD alteration via excitation. As for TPP-Ms, the rearranged electrons are enriched over the core metals, resulting in an enhanced negative ESP (Fig. S16). However, both electrons and holes at the excited states are mainly contributed by the core metals, and the electron and hole largely overlap for TPP-Ms; in other words, the CT state over TPP-Ms is inapparent, so the enhancement of the negative ESP over TPP-Ms is limited.
As for CTPP-Ms, the d orbitals of the core metal are coupled with the three-dimensional π-orbitals of C60 along the c-axis direction, which makes it possible to realize the CT state. As shown in Fig. 4B, the holes at the excited states contributed by Fe and Ni are coupled with those of C60, as are the electrons at the excited states contributed by Co. Although both the electrons and holes of the excited CTPP-Ms are still mainly contributed by the d-orbitals of the core metals, the symmetry of these areas is largely broken both along the c-axis direction and on the porphin ring, realizing the CT states. As a result, the variation of the dipole moment μ with respect to the ground state is much larger for CTPP-M than for the corresponding TPP-M owing to CT along the c-axis direction, and the electron/hole delocalization indexes of the excited CTPP-M are all smaller than those of the excited TPP-M, indicating more delocalized electrons/holes of CTPP-M than of TPP-M owing to CT on the porphin ring (Table S3). The CT states resulting from excitation over the CTPP-Ms differ from each other because of the different electron configurations of the core metals. The coordinated Fe ion of CTPP-Fe, with 2 unpaired d-electrons, can accept electrons not only from the porphin ring but also from C60 at the excited state; meanwhile, the electrons from the porphin ring cannot be completely donated to the core Fe owing to mismatched orbital symmetry, so the net result is that CT occurs not only between TPP-Fe and C60 but also between different areas on the porphin ring (Fig. 4B). The Co ion in CTPP-Co, with one unpaired d-electron, only induces electron donation from the porphin ring, and C60 also provides π-orbitals to jointly accept the electrons, such that the electrons enriched along the c-axis direction are beneficial to the enhancement of the negative ESP. The fully paired electrons of the Ni ion in CTPP-Ni, once excited, must be donated, and the hole coupled with C60 severely decentralizes the electrons on the porphin ring, so the negative ESP near the core Ni is weakened at the excited state. In addition, C60 itself cannot act as a CO2 adsorption site, in view of its nearly neutral ESP at either the ground or the excited state (Fig. S17), for which the electron and hole almost completely overlap in all directions on the C60 sphere (Fig. S18). In fact, the maximal negative ESP over C60 is even weakened from −2.0 meV at the ground state to −0.2 meV at the excited state (Fig. S17), clearly indicating the decrease of its adsorption activity toward CO2 at the excited state and explaining its reduced CO2 adsorption capacity under UV-Vis irradiation. Moreover, as a carbonaceous material, C60 may exhibit a heating effect under UV-Vis irradiation, which also reduces the CO2 capacity.
Discussion
The nondeforming photo-responsiveness of metalloporphyrin MOFs, i.e., PCN-222-Fe(II), PCN-222-Co(II), and PCN-222-Ni(II), is exploited in this study by doping C60 into the host MOFs to construct composite sorbents. In contrast to the current methodology for fabricating photo-responsive sorbents, which depends on the mechanical deformation of photochromic units, the nondeforming photo-responsiveness here is based on markedly changing the EDD over the metalloporphyrin adsorption sites via effective CT caused by excited states. Owing to the intense metalloporphyrin-C60 interaction and their orbital coupling, an effective CT path is built among the porphin ring, the coordinated metal, and C60. The unpaired d-electrons make the porphin-ring-coordinated Fe(II) and Co(II) electron acceptors, over which the excited electrons can be enriched to enhance their adsorption activity, whereas the porphin-ring-coordinated Ni(II), with fully paired d-electrons, acts as an electron donor instead, such that its excited electrons are decentralized and the adsorption activity is severely reduced. In addition to the effective CT that modulates the EDD of the adsorption sites, the low-order excited states of the composite sorbents possess long lifetimes, which meet the timescale of molecular adsorption equilibrium. As a result, the CO2 adsorption capabilities of all the composite sorbents are obviously changed under UV-Vis irradiation, and that of the Co(II)-coordinated one in particular is significantly increased.
Materials synthesis
All chemicals involved were commercially purchased and used as received. The materials were prepared according to the reported recipes with slight modifications [36-38]. To prepare the ligand TCPP-M, a solution of 5,10,15,20-tetrakis(4-methoxycarbonylphenyl)porphyrin (TPPCOOMe, 1.0 mmol; purity > 95%; Yanshen, Jilin) and an excess of the corresponding metal salt (FeCl2, CoCl2, or NiCl2) in 100 ml of dimethylformamide (DMF) was refluxed for 8 h. After H2O was added to the solution, the generated precipitate was filtered and washed thoroughly with H2O. The solid intermediate was dissolved in CHCl3 and then sufficiently acidified with 1 M HCl. After washing with H2O and drying over MgSO4, the organic phase was evaporated to obtain TCPP-M (M = Fe, Co, or Ni).
To prepare CPCN-M, C60 fullerene (6 mg; purity > 99.9%; Macklin, Shanghai) was ultrasonically dissolved in toluene, followed by the addition of ZrOCl2·8H2O (120 mg), TCPP-M (31 mg), and benzoic acid (1.20 g) mixed in DMF (8.0 ml). The mixture was sealed in a Teflon-lined stainless autoclave and heated at 120 °C for 24 h. After cooling to ambient temperature, the crystals were harvested by centrifugation and washed with toluene to remove free C60 particles, and then with DMF, tetrahydrofuran (THF), and CH2Cl2 successively to remove unreacted ligands and exchange the solvent molecules. After drying in vacuum, CPCN-M (M = Fe, Co, or Ni) was finally obtained. The recipe for PCN-M preparation is similar to that of CPCN-M but without the participation of the C60-toluene solution.
Characterization methods
The scanning electron microscopy (SEM) images were recorded on a Nova NanoSEM 450 microscope. The HREM images were observed on a JEM-2100F apparatus at an accelerating voltage of 200 kV. The XRPD patterns were collected on a Bruker D8 Advance diffractometer with Cu Kα radiation at 40 kV and 40 mA. The FTIR spectra were recorded on a Nicolet Nexus 470 spectrometer using KBr wafers. NMR spectra were recorded with a Bruker AVANCE II 400M apparatus. XPS was recorded on a Thermo Scientific Escalab 250Xi device with an Al Kα source. TG analyses in N2 atmosphere were performed with a TG209F1 apparatus. The UV-Vis absorption spectra, the luminescence emission spectra, and the luminescence decay profiles of the samples were recorded at ambient temperature with an FLS1000 spectrometer from Edinburgh Instruments. After degassing the materials at 100 °C for 4 h, the N2 adsorption-desorption isotherms at 77 K were measured on a Micromeritics ASAP 2020 analyzer. S_BET was calculated over the P/P0 range of 0.05 to 0.15. The total pore volumes were calculated at a relative pressure of 0.95, and nonlocal DFT (NLDFT) was used to estimate the pore size distribution.
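For readers who want to reproduce the S_BET value from an isotherm, the following is a minimal sketch of the standard BET linearization over the stated P/P0 range of 0.05 to 0.15; the isotherm values below are placeholders, not measured data, and the instrument software normally performs an equivalent fit.

```python
import numpy as np

# Placeholder N2 isotherm points in the BET range (P/P0 = 0.05-0.15)
p_rel = np.array([0.05, 0.075, 0.10, 0.125, 0.15])      # relative pressure P/P0
v_ads = np.array([442.0, 481.0, 510.0, 534.0, 557.0])   # adsorbed volume, cm3(STP) g-1

# Linearized BET equation: (P/P0)/(v*(1 - P/P0)) = 1/(vm*c) + ((c-1)/(vm*c))*(P/P0)
y = p_rel / (v_ads * (1.0 - p_rel))
slope, intercept = np.polyfit(p_rel, y, 1)

v_m = 1.0 / (slope + intercept)          # monolayer capacity, cm3(STP) g-1
N_A = 6.022e23                            # Avogadro constant, mol-1
sigma_N2 = 0.162e-18                      # N2 cross-sectional area, m2
V_molar = 22414.0                         # molar gas volume at STP, cm3 mol-1

S_BET = v_m * N_A * sigma_N2 / V_molar    # specific surface area, m2 g-1
print(f"S_BET ~ {S_BET:.0f} m2/g")
```

With the placeholder values above the fit returns a surface area on the order of 2000 m2 g−1, i.e., in the same range as the reported CPCN-M values.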
Static gas adsorption
Static adsorption experiments with CO2 (purity > 99.999%) and N2 (purity > 99.999%) over the sorbents were performed with the Micromeritics ASAP 2020 analyzer. The free space was determined using He (purity > 99.999%), under the assumption that He is not adsorbed. For the pristine (dark) tests, the adsorption isotherms of CO2 and N2 were collected in a dark environment, with the sample cells immersed in an ice-water bath to keep the temperature at 0 °C or in a thermostatic water bath at 25 °C. For the UV-Vis irradiation tests, a CEL-HXUV300 xenon lamp (optical power density: 2,000 mW cm−2; Beijing China Education AuLight Technology Co. Ltd.) was used with a VisREF optical filter to generate the exciting light at wavelengths of 350 to 780 nm. The xenon lamp was placed 20 cm away from the sample cells to provide the excitation light source, and the other operating conditions were the same as in the pristine tests.
Computational methods
The DFT and TDDFT calculations were performed with the ωB97XD functional implemented in the Gaussian 16 package; this functional has proved robust for both DFT and TDDFT calculations [56,57]. With ultrafine numerical integration grids, self-consistent field procedures of full accuracy were performed with tight convergence and without any orbital symmetry constraints. The geometry relaxation was performed with the Def2-SVP basis set. Four frontier excited states were included in the TDDFT calculations with the Tamm-Dancoff approximation in order to calculate the first excited state accurately.
Fig. 1 .
Fig. 1. The scheme of the composite CPCN-M construction and the nondeforming photo-responsiveness that enhances the CO2 adsorption capability over the metalloporphyrin-C60 site.
Fig. 3 .
Fig. 3. The static adsorption isotherms of CO2 and N2 tested under UV-Vis irradiation and in the darkness over CPCN-Ms and PCN-Ms at 0 °C. (A) Static adsorption isotherms of CO2 and N2 over CPCN-Ms, in which the green arrows indicate the variation trend of the UV-Vis CO2 adsorption isotherm with respect to that in the darkness. (B) Static adsorption isotherms of CO2 and N2 over PCN-Ms. (C) Static adsorption isotherms of CO2 and N2 over C60. (D) Difference of the CO2 uptake capacity at 1 bar obtained by subtracting the value in the darkness from that under UV-Vis. (E) Rate of change of the photo-responsive CO2 adsorption capacity at 1 bar.
As Fig. 4A exhibits, the maximal negative ESP values of CTPP-Fe and CTPP-Co are enhanced from −26.6 meV at the ground state to −31.3 meV at the excited state, and from −27.3 to −31.7 meV, respectively, whereas that of CTPP-Ni is weakened from −30.5 to −27.3 meV instead. Such obvious changes of the ESP over the adsorption sites are sufficient to produce remarkable effects at the macroscopic level, and thus the CO2 adsorption isotherms over all CPCN-Ms can be markedly changed by UV-Vis irradiation. Moreover, the different ESP changes over the CTPP-Ms explain well the experimental observation of photo-gain for CPCN-Fe and CPCN-Co but photo-loss for CPCN-Ni in terms of CO2 adsorption capability.
Fig. 4 .
Fig. 4. Differences between the EDD at the excited state (ES) and that at the ground state (GS) for CTPP-M. (A) Molecular surface ESPs with the density isovalue = 1 × 10−3 at the ground and excited states. (B) The electron-hole distribution at the excited state (green area, electron distribution; blue area, hole distribution). | 6,578.6 | 2023-10-15T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Host–pathogen coevolution promotes the evolution of general, broad-spectrum resistance and reduces foreign pathogen spillover risk
Abstract Genetic variation for disease resistance within host populations can strongly impact the spread of endemic pathogens. In plants, recent work has shown that within-population variation in resistance can also affect the transmission of foreign spillover pathogens if that resistance is general. However, most hosts also possess specific resistance mechanisms that provide strong defenses against coevolved endemic pathogens. Here we use a modeling approach to ask how antagonistic coevolution between hosts and their endemic pathogen at the specific resistance locus can affect the frequency of general resistance, and therefore a host’s vulnerability to foreign pathogens. We develop a two-locus model with variable recombination that incorporates both general resistance (effective against all pathogens) and specific resistance (effective against endemic pathogens only). With coevolution, when pathogens can evolve to evade specific resistance, we find that the regions where general resistance can evolve are greatly expanded, decreasing the risk of foreign pathogen invasion. Furthermore, coevolution greatly expands the conditions that maintain polymorphisms at both resistance loci, thereby driving greater genetic diversity within host populations. This genetic diversity often leads to positive correlations between host resistance to foreign and endemic pathogens, similar to those observed in natural populations. However, if resistance loci become linked, the resistance correlations can shift to negative. If we include a third linkage-modifying locus in our model, we find that selection often favors complete linkage. Our model demonstrates how coevolutionary dynamics with an endemic pathogen can mold the resistance structure of host populations in ways that affect its susceptibility to foreign pathogen spillovers, and that the nature of these outcomes depends on resistance costs, as well as the degree of linkage between resistance genes.
Linkage Disequilibrium Calculation
We calculated linkage disequilibrium (LD) based on the equilibrium allele frequencies (Eq S1).
Numerical Integration Results
To verify that our numerical root finding was converging to equilibria that would be attained in biologically feasible scenarios, we ran simulations using the same parameters as in the main text but running the numerical integration to time t = 10,000. To account for solutions with oscillatory equilibria (or very gradually dampened oscillations), we averaged each solution over the last 2,000 time points to obtain an approximate equilibrium. We found no major differences in outcomes using this method when compared to the methods described in the main text.
Stability Analysis Results
In our coevolutionary simulations, nearly all parameters tested produced asymptotically stable equilibria (Fig S2). Cases where equilibria had a non-negative eigenvalue corresponded either to parameters near boundaries of genotype fixation or loss, or to regions where gene-for-gene coevolution can produce highly irregular oscillatory dynamics. In either case, the non-stability of the equilibria is likely due to numerical issues, and these scenarios are rare enough not to influence our conclusions.
We also assessed the equilibrium stability for our three-locus model with the linkage modifier allele (Fig S3).We examined equilibrium stability when the modifier locus was either fixed or lost.With loss, there was a significant region where the equilibria were unstable, but the same parameters corresponded to a stable equilibrium when the linkage modifier was fixed.These parameters represent scenarios where if the modifier allele can invade, it will sweep to fixation, potentially shifting a population from positive to negative resistance transitivity.
Linkage Disequilibrium
If polymorphism was maintained at both resistance loci, we calculated the degree of linkage disequilibrium (LD) and the transitivity slope. For LD, we used Lewontin's D′ normalized LD measure, in which a value of 1 represents the highest possible level of linkage disequilibrium for the given allele frequencies. We also recorded the sign of LD, as this indicates the relative frequency of the GS genotype given the allele frequencies. To avoid numerical issues associated with low allele frequencies, we only considered LD for simulations where each allele represented at least 1 percent of the uninfected host population.
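A minimal sketch of how the signed Lewontin's D′ can be computed from haplotype and allele frequencies is shown below; the example frequencies are placeholders, not values from the model.

```python
import numpy as np

def lewontin_d_prime(p_GS, p_G, p_S):
    """Signed Lewontin's D' for a two-locus, two-allele system.

    p_GS : frequency of the GS haplotype
    p_G  : frequency of the G (general resistance) allele
    p_S  : frequency of the S (specific resistance) allele
    """
    D = p_GS - p_G * p_S                      # raw linkage disequilibrium
    if D >= 0:
        d_max = min(p_G * (1 - p_S), (1 - p_G) * p_S)
    else:
        d_max = min(p_G * p_S, (1 - p_G) * (1 - p_S))
    return D / d_max if d_max > 0 else 0.0    # normalized, sign retained

# Placeholder equilibrium frequencies for illustration only
print(lewontin_d_prime(p_GS=0.10, p_G=0.40, p_S=0.50))  # negative: GS rarer than expected
```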
We found non-zero linkage disequilibrium across the parameters where general resistance was polymorphic (Fig S2). When LD was non-zero, it was always negative, indicating a lower relative fitness of the GS and gs genotypes. Without coevolution, general resistance was either fixed or lost, so there were no parameters that led to non-zero LD.
Alternative Model Assumption Results
We considered a range of alternative models, using density-dependent transmission (Fig S4) as well as the hard-selection gene-for-gene model (Fig S3D-F), where virulence costs apply to all host genotypes. The equations for our system with density-dependent transmission are given below (Eq S3-4).
Table S1. Mating matrix example for the two-locus model showing the genotype frequencies for offspring of each possible parental pair. To calculate the total proportion of offspring genotypes, these frequencies are weighted by the frequency of each parental pair, assuming random mating. The matrix in the main text is given by the last four columns here.
Figure S1 .
Figure S1. Equilibria classifications for default parameters. Yellow colors denote asymptotically stable solutions while blue represents solutions with a non-negative eigenvalue. (A): Stability for solutions without coevolution. Here, the large area of unstable solutions corresponds to regions where introducing the virulent pathogen genotype changes the equilibrium solution. (B): Stability for solutions with coevolution as a function of and . (C): Stability for solutions with coevolution as a function of and virulence costs ( ).
Figure S2 .
Figure S2. Equilibria classifications for solutions in the three-locus model. Yellow colors denote asymptotically stable solutions while blue represents solutions with a non-negative eigenvalue. (A-B): Stability for solutions where the modifier locus is lost as a function of (A): and (B): and . Blue regions indicate parameters where the modifier locus can invade and evolve to fixation. (C-D): Stability for solutions where the modifier locus is fixed as a function of (C): and and (D): and .
Figure S3 .
Figure S3. Normalized linkage disequilibrium. Normalized linkage disequilibrium (D′) is shown as a function of (A) general and specific resistance costs and (B) general resistance and virulence costs. A D′ value of 1 represents the highest possible LD given the equilibrium allele frequencies. All other parameters are the same as in Fig 1.
Fig S4 .
Fig S4. General resistance allele frequencies in alternative models. (A-C): Density-dependent transmission, with = 0.001 throughout. (D-F): Hard-selection gene-for-gene model, where the costs of virulence apply to all host genotypes. (G-I): Model with stronger general resistance ( = 0.5). (J-L): Model with full recombination (ρ = 0.5). Unless stated otherwise, all other parameters are the same as in Fig 1.
"Environmental Science",
"Biology"
] |
Feature Selection Based on Principal Component Regression for Underwater Source Localization by Deep Learning
: Underwater source localization is an important task, especially for real-time operation. Recently, machine learning methods have been combined with supervised learning schemes. This opens new possibilities for underwater source localization. However, in many real scenarios, the number of labeled datasets is insufficient for purely supervised learning, and the training time of a deep neural network can be huge. To mitigate the problem related to the low number of labeled datasets available, we propose a two-step framework for underwater source localization based on the semi-supervised learning scheme. The first step utilizes a convolutional autoencoder to extract the latent features from the whole available dataset. The second step performs source localization via an encoder multi-layer perceptron trained on a limited labeled portion of the dataset. To reduce the training time, an interpretable feature selection (FS) method based on principal component regression is proposed, which can extract important features for underwater source localization by only introducing the source location without other prior information. The proposed approach is validated on the public dataset SWellEx-96 Event S5. The results show that the framework has appealing accuracy and robustness on the unseen data, especially when the number of data used to train gradually decreases. After FS, not only the training stage has a 95% acceleration but the performance of the framework becomes more robust on the receiver-depth selection and more accurate when the number of labeled data used to train is extremely limited.
Introduction
Underwater source localization is a relevant and challenging task in underwater acoustics. The most popular method for source localization is matched-field processing (MFP) [1], which has inspired several works [2][3][4][5]. One of the major drawbacks of the MFP method is the need to compute many "replica" acoustic fields with different environmental parameters via numerical simulations based on the acoustic propagation model. Accuracy of the results is heavily affected by the amount of prior information about the marine environment (e.g., sound speed profile, geoacoustic parameters, etc.), which unfortunately is often hard to acquire in real scenarios.
Artificial intelligence (AI), and primarily data-driven approaches based on machine learning (ML), has become pervasive in many research fields [6,7]. ML-techniques are commonly divided into supervised and unsupervised learning. The former approach relies on the availability of labeled datasets, i.e., when measurements are paired with ground truth information. The latter refers to the case when unlabeled data are available [8].
Recently, there have been several studies on underwater source localization based on ML using the supervised learning scheme [9-17]. The general approach to underwater source localization with a supervised learning scheme is to use acoustic propagation simulation models to create a huge simulation dataset covering the real scenario. This approach has two main limitations: firstly, creating such a huge simulation dataset is time consuming and requires large computer storage resources; secondly, the set of environmental parameters used to create the simulation dataset may not account for and adapt to environmental changes in a real-world scenario. The latter aspect requires a new simulation process, which may often be unrealistic.
Apparently, data-driven ML approaches rely on information extracted from available data, then the need of being able to exploit both labeled and unlabeled data is crucial in many applications, including underwater source localization. Semi-supervised learning has been proposed to face this issue in computer vision [18] and room acoustics [19,20].
Deep learning is famous for its excellent performance on many tasks; however, this comes at the cost of heavy computation. In the study of Niu et al. [15], the training time was six days for their ResNet50-1 model and three days for each of the ResNet50-2-x-D models. Each ResNet50-2-x-R model took 15 days to train.
In real scenarios, the speed of training is vital for real-time localization. To accelerate the training speed, some feature selection (FS) methods have been applied in underwater acoustics [21][22][23][24]. Feature selection aims to find the optimal feature subspace that can express the systematic structure of the raw dataset [21]. Principal component analysis (PCA) is a well-known method which can maximize the variance in each principal direction and remove the correlations among the features of the raw dataset [21,25]. Furthermore, the latent relationship between features can be interpreted by studying the correlation loading plot of PCA [26]. Principal component regression (PCR) is a PCA-based method, which can find out the significant variables for the target of regression by analyzing the absolute value of the regression coefficients [27,28].
In our study, an interpretable FS method for underwater source localization based on PCR is proposed. To make the situation closer to a real scenario, a two-step semi-supervised framework is adopted, and the data collected by a single hydrophone are used to build and train the neural network. Figure 1 shows the workflow of our approach. The raw data are first preprocessed by discrete Fourier transform and min-max scaling. To select the important features for source localization, PCR is conducted. Based on the absolute values of the PCR regression coefficients, the important features are selected. Finally, the selected features are fed into the two-step semi-supervised framework for source localization. The framework is built on the encoder of a convolutional autoencoder, which is trained in unsupervised-learning mode, and a 4-layer multi-layer perceptron (MLP), which is trained in supervised-learning mode.
The performance of our approach is assessed on the public dataset SWellEx-96 Event S5 [29].
The objectives of this paper are: • Mitigating the problem related to the low number of labeled datasets in many real scenarios. • Reducing the training time of the neural network and keeping the localization performance as much as possible.
More specifically, the contributions of our work are: • An interpretable approach of FS for underwater acoustic source localization is proposed. This approach can reveal the important features related to sources by only introducing the source location without other prior information. • By using the selected features, the training time of the neural networks is significantly reduced with a slight loss of the performance of localization. • A semi-supervised two-step framework is used for underwater source localization exploiting both unlabeled and labeled data. The performance of the framework is assessed showing appealing behavior in terms of good performance combined with simple implementation and large flexibility. The paper is organized as follows: Section 2 describes the theories of PCA and PCR, as well as the method of FS; Section 3 presents the two-step framework for underwater source localization; the public dataset SWellEx-96 Event S5, the data preprocessing, and the schemes of building the training and test datasets are given in Section 4; in Section 5, a comprehensive analysis of the localization performance between our framework based on the FS method and the control groups are described; the selected features are interpreted from both physical and data-science perspective in Section 6; and, finally, the conclusion is given in Section 7.
The Interpretable FS Method Based on PCR
The training time of a deep neural network could be huge, especially when the depth of the network is large. To reduce the training time, as well as keep the accuracy of the underwater source localization, an interpretable FS method based on PCR is introduced in this section.
Theory of PCA
PCA [28] refers to the following decomposition of a column-mean-centered data matrix X of size N × K, where N and K represent the number of samples and the number of features, respectively:

X = TPᵀ + E,

where (·)ᵀ is the matrix transpose, T is a score matrix of size N × A containing the projections of the matrix X into an A-dimensional space, P is a loading matrix of size K × A containing the projections of the features into the A-dimensional space (with PᵀP = I), and E is a residual matrix of size N × K. More specifically, the A-dimensional space is identified via the singular value decomposition (SVD) of X by selecting the first A principal components.
Denoting by X = USVᵀ the SVD of X, and by Û, Ŝ, and V̂ the matrices containing the first A columns of U, S, and V, respectively, we have T = ÛŜ and P = V̂, and X̂ = TPᵀ is called the reconstructed data matrix.
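A minimal NumPy sketch of this decomposition, following the SVD construction described above; the data used here are random placeholders.

```python
import numpy as np

def pca_svd(X, A):
    """PCA of a data matrix X (N x K) via SVD, keeping the first A components."""
    Xc = X - X.mean(axis=0)                       # column mean-centering
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :A] * s[:A]                          # scores, N x A (equivalently Xc @ Vt[:A].T)
    P = Vt[:A].T                                  # loadings, K x A
    X_hat = T @ P.T                               # rank-A reconstruction
    E = Xc - X_hat                                # residual matrix
    return T, P, E

# Example with placeholder data: N = 100 samples, K = 10 features, A = 3 PCs
rng = np.random.default_rng(0)
T, P, E = pca_svd(rng.normal(size=(100, 10)), A=3)
```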
Theory of PCR
The multiple linear regression (MLR) model is given by

y = Xθ + e,

where y is the regression target (in this paper, the source location) of size N × 1 containing N samples; X is the data matrix mentioned above; θ is the vector of regression coefficients of size K × 1; and e is the vector of unexplained residuals of y. Using ordinary least squares regression [30], the regression coefficients θ̂_MLR of size K × 1 can be estimated as

θ̂_MLR = (XᵀX)⁻¹Xᵀy.

PCR is MLR based on the first A PCs extracted from the original data matrix X. To estimate the regression parameters θ̂_PCR of size A × 1, the score matrix T is used instead of X:

θ̂_PCR = (TᵀT)⁻¹Tᵀy.
Method of FS
The aim of FS is to select a set of important variables to accelerate underwater acoustic source localization. Furthermore, since PCA and PCR are highly interpretable methods, the correlations between variables and the variables significant for the regression can be revealed by investigating the correlation loading plot and the values of the regression coefficients, respectively [28].
The method of FS has 5 steps:
1. Mean-center each column of the data matrix X.
2. Conduct SVD on the column-mean-centered data matrix X to calculate the first A PCs (A = 3 in this paper) and build the score matrix T and the loading matrix P, following Equation (2).
3. Calculate the regression coefficients θ̄ of size K × 1 for each original variable by θ̄ = P θ̂_PCR.
4. Rank the elements of θ̄ by absolute value from high to low, and set a threshold (0.02 in this paper) to select the variables whose coefficients are equal to or greater than the threshold.
5. Construct a new data matrix X̃ of size N × M based on the M selected features.
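A minimal sketch of this feature-selection procedure is given below, assuming the back-projection θ̄ = P θ̂_PCR described in step 3; the array shapes and the 0.02 threshold follow the paper, while the data are random placeholders.

```python
import numpy as np

def pcr_feature_selection(X, y, A=3, threshold=0.02):
    """Select features whose back-projected PCR coefficients exceed a threshold."""
    Xc = X - X.mean(axis=0)                              # step 1: column mean-centering
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :A] * s[:A]                                 # step 2: scores (N x A)
    P = Vt[:A].T                                         #         loadings (K x A)
    theta_pcr, *_ = np.linalg.lstsq(T, y, rcond=None)    # PCR coefficients (A,)
    theta_bar = P @ theta_pcr                            # step 3: coefficient per original variable
    selected = np.abs(theta_bar) >= threshold            # step 4: threshold on |theta_bar|
    return X[:, selected], selected                      # step 5: reduced matrix and mask

# Placeholder data: 4500 time steps x 750 frequency bins, scaled ranges in (0, 1)
rng = np.random.default_rng(0)
X = rng.random((4500, 750))
y = rng.random(4500)
X_sel, mask = pcr_feature_selection(X, y)
print(X_sel.shape, mask.sum())
```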
The Two-Step Framework for Underwater Source Localization
In many real scenarios, a whole dataset will often consist of a small portion of labeled data and a large portion of unlabeled data (purely acoustic signals). To make the experiment condition closer to real scenarios, in the following, we assume that a large-size dataset is available with most of the data being unlabeled and only a small fraction labeled.
Step-One: Training a Convolutional Autoencoder
The autoencoder is an unsupervised learning machine, which can be trained based on the unlabeled dataset [31]. The first step of the framework is to train a convolutional autoencoder (CAE) [32]. The role of the CAE is to conduct unsupervised learning since training the CAE does not need labels, which means that the whole dataset can be covered.
The structure of CAE is shown in Figure 2a, where the network consists of an encoder and a decoder. The arrows indicate the direction of the data stream. The encoder, made of 4 blocks, is used to extract the compressed features from the input data. Each block contains a convolution layer (for extracting features), a batch norm layer (for speeding up training), and a leaky-ReLU layer (for operating a non-linear transform on the data stream). Additionally, the decoder has a dual-symmetric structure as the encoder and is used to create the reconstruction of the input data from the compressed features. After creating the reconstructed input, the mean squared error (MSE) is the selected loss function to measure the difference between the input and the reconstruction.
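The following is a minimal PyTorch-style sketch of such an encoder/decoder. PyTorch itself, the use of 1D convolutions over the spectrum, and the channel counts and kernel sizes are all assumptions made for illustration; the paper's exact layer dimensions (n_Conv1D, k_Conv1D) vary with the receiver and are not reproduced here.

```python
import torch.nn as nn

def enc_block(c_in, c_out, k=7):
    # one encoder block: convolution -> batch norm -> leaky ReLU
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=k, stride=2, padding=k // 2),
        nn.BatchNorm1d(c_out),
        nn.LeakyReLU(0.2),
    )

def dec_block(c_in, c_out, k=7):
    # mirror of the encoder block using a transposed convolution
    return nn.Sequential(
        nn.ConvTranspose1d(c_in, c_out, kernel_size=k, stride=2,
                           padding=k // 2, output_padding=1),
        nn.BatchNorm1d(c_out),
        nn.LeakyReLU(0.2),
    )

class CAE(nn.Module):
    def __init__(self, channels=(1, 8, 16, 32, 64)):
        super().__init__()
        self.encoder = nn.Sequential(*[enc_block(channels[i], channels[i + 1])
                                       for i in range(4)])
        self.decoder = nn.Sequential(*[dec_block(channels[i + 1], channels[i])
                                       for i in reversed(range(4))])

    def forward(self, x):                      # x: (batch, 1, n_features)
        z = self.encoder(x)                    # compressed features
        x_hat = self.decoder(z)                # reconstruction
        return x_hat[..., : x.size(-1)]        # crop to the input length before the MSE loss
```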
It is worth noticing that, in this step, the whole dataset (both the labeled and unlabeled portions) is used to train the network, but only the purely acoustic signals are involved as described later in the paper.
Step-Two: Training the Encoder-MLP Localizer Based on the Semi-Supervised Learning Scheme
After training the CAE, the second step requires training the Encoder-MLP for localization based on the semi-supervised learning scheme. The structure of this model is shown in Figure 2b, which consists of a pre-trained encoder extracting the compressed features from input data and a 4-layer-MLP estimating the location of the acoustic source based on the compressed features. The MLP consists of four blocks, with each block containing a dense layer followed by a dropout layer (for regularization) and a non-linear transform function. The sigmoid function is an appropriate choice for the non-linear transform since, during the data preprocessing stage, the regression target, i.e., the horizontal distance between source and receiver, is scaled into the interval (0, 1).
Similarly, the arrows in Figure 2b indicate the direction of the data stream. Since the encoder has been trained, its parameters will be frozen during the training stage of the second step. After the encoder, the compressed features are fed in the MLP, which will provide the estimated source location as output. Finally, the same loss function, i.e., MSE, is used since the localization task is a regression problem.
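A sketch of the second step under the same assumptions: the pretrained encoder is frozen and a 4-layer MLP head with dropout regresses the scaled source range, with a sigmoid at the output to match the (0, 1) target. The hidden-layer widths and the intermediate ReLU activations are illustrative assumptions, while the MSE loss and the Adam learning rate of 5 × 10−5 follow the paper.

```python
import torch
import torch.nn as nn

class EncoderMLP(nn.Module):
    def __init__(self, encoder, n_dense1, hidden=(256, 64, 16)):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():        # freeze the pretrained encoder
            p.requires_grad = False
        dims = (n_dense1, *hidden, 1)
        layers = []
        for i in range(4):                         # 4 blocks: linear -> dropout -> activation
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.Dropout(0.2)]
            layers.append(nn.Sigmoid() if i == 3 else nn.ReLU())
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        z = self.encoder(x).flatten(start_dim=1)   # compressed features -> vector
        return self.mlp(z)                          # estimated (scaled) source range

# Training setup for step two (labeled subset only); n_dense1 depends on the input length
# model = EncoderMLP(cae.encoder, n_dense1=64 * 47)
# optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=5e-5)
# loss_fn = nn.MSELoss()
```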
Dataset and Preprocessing
In this section, SWellEx-96 Event S5 is introduced. The preprocessing method and the schemes for building the training and test datasets are used. Finally, to illustrate the performance of our framework and the proposed FS method, the control groups are created.
SWellEx-96 Event S5
Vertical linear array (VLA) data from SWellEx-96 Event S5 are used to illustrate the localization performance of our framework. The event was conducted near San Diego, CA, where the acoustic source started its track of all arrays and proceeded northward at a speed of 5 knots (2.5 m/s). The source had two sub-sources, a shallow one was at a depth of 9 m and a deep one at 54 m. The sampling rate of the data was 1500 Hz and the recording time of the data was 75 min. The VLA contained 21 receivers equally spaced between 94.125 m and 212.25 m. The water depth was 216.5 m. Additionally, the horizontal distance between the source and the VLA is also provided in the dataset. More detailed information of this event can be found in Reference [29].
Preprocessing and FS
In this paper, the underwater acoustic signals collected by a single receiver are transformed into the frequency domain. We calculate the spectrum, without overlap, for each 1-s slice of the signal and arrange the results in a feature matrix X with shape 4500 × 750, where each row is related to one slice. More specifically, 4500 is the total number of time steps (75 min = 4500 s) and 750 is the number of frequencies. In the matrix, each row corresponds to one single time step, and each column corresponds to one single frequency.
Besides the acoustic signals, the horizontal distance between the source and the VLA was provided in the original dataset, which can be expressed as a vector y (namely, labels) with the shape of 4500 × 1, where 4500 indicates the total number of time-steps, and 1 indicates the distance at each time-step.
For the training stability of our framework, the features X and the labels y are scaled into the interval (0, 1) by min-max scaling, i.e., each value x is mapped to x′ = (x − x_min)/(x_max − x_min). After preprocessing, the FS is conducted following the steps in Section 2 based on X and y. Note that systems with and without min-max scaling before FS have been compared, showing that pre-scaling improves the performance.
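A minimal sketch of this preprocessing is shown below, assuming a sampling rate of 1500 Hz and non-overlapping 1-s slices as stated in Section 4.1. Which 750 of the 751 one-sided FFT bins are kept, and whether min-max scaling is applied per feature, are assumptions for illustration.

```python
import numpy as np

FS = 1500                     # sampling rate (Hz); 1-s slices -> 1500 samples each

def preprocess(signal, n_slices=4500, n_freq=750):
    """Build the (n_slices x n_freq) spectral feature matrix and min-max scale it."""
    X = np.empty((n_slices, n_freq))
    for i in range(n_slices):
        frame = signal[i * FS:(i + 1) * FS]
        spec = np.abs(np.fft.rfft(frame))      # 751 one-sided bins for 1500 samples
        X[i] = spec[1:n_freq + 1]              # keep 750 bins, dropping DC (assumption)
    # min-max scaling per feature (assumption), mapping values into (0, 1)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    return X

def scale_labels(y):
    # source-receiver ranges scaled into (0, 1) for the sigmoid output
    return (y - y.min()) / (y.max() - y.min() + 1e-12)
```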
Schemes for Building the Dataset
For step one, the dataset for the CAE is expressed as the set {x̃ᵢ | i = 1, …, N}, where X̃ is the feature matrix, x̃ᵢ is a row vector of length M (the number of selected features) corresponding to the ith row of X̃, and N is the number of time steps. For step two, the dataset for the Encoder-MLP localizer is expressed as {(x̃ᵢ, yᵢ) | i = 1, …, N}, where X̃ and y are the features and labels in matrix form, and yᵢ is the ith element of the label vector y.
Schemes of Separating Training and Test Datasets
To illustrate the performance of the semi-supervised framework as the number of labeled datasets decreases, 50%, 25%, and 12.5% of the whole labeled dataset are chosen, respectively, as the training dataset of step-two.
Since source localization is a regression task, the labels in the training dataset of step two should cover the whole interval of the horizontal distance between the source and the receiver. As described above, the total number of time steps is 4500, which can be indexed by i ∈ (1, 4500). To make a fair comparison, we trained a neural network with the same structure as the framework using the purely supervised learning scheme.
Control Group for the FS Method
To show the performance of our FS method, a framework without FS is trained in the same way. The matrix containing the whole features X of size 4500 × 750 is calculated from Equation (7) and used to build the dataset following the schemes described in Sections 4.3 and 4.4.
Performance of Source Localization
In this section, the hyperparameters of the two-step framework are introduced. After that, several experiments are conducted to exam the performance of source localization. Finally, a comprehensive comparison of the localization performance is shown, which demonstrates the benefit of our approach.
Hyperparameters of the Framework
In Figure 2, the output channels (n_Conv1D) and the kernel size (k_Conv1D) of the 1D-convolutional layer, as well as the input dimension (n_Dense1) of the first dense layer, are not fixed. This is because the size of the input features varies between datasets collected by different receivers. After FS, the number of selected features M is shown in Table 1. To train the framework, the learning rates for step one and step two are 1 × 10−4 and 5 × 10−5, respectively. The optimization scheme is Adam. For each step, the number of epochs is 100 and the batch size is 5.
All the networks mentioned in this paper are trained using one NVIDIA RTX 2080Ti GPU card.
Examining the Performance When Removing Some 2D-Convolutional Layers of the Framework after FS
The function of the encoder is to compress the original dataset and create its compressed expression, which is similar to our manual FS method. To find the best structure for the framework using FS, the number of 2D-convolutional layers of the encoder, and the corresponding number of transposed 2D-convolutional layers of the decoder are gradually decreased. After re-training the modified CAE, step-two is conducted as before. The structures of CAE after removing one and two 2D-convolutional layers are shown in Tables 2 and 3, respectively. The performance will be discussed in Section 5.3.
Overall Analysis of the Localization Performance
To make a comprehensive comparison, 4 pairs of networks are tested on the data collected by all receivers and trained separately based on the data collected by receivers no. 1, no. 10, and no. 21. One pair is trained without FS, the rest are all trained with the FS method proposed by this paper. For the rest 3 pairs of networks, one has the same number of layers as the networks trained without FS; others have the structures shown in Tables 2 and 3, respectively. Additionally, each pair of networks consists of the framework trained by the semi-supervised learning scheme and the same network of step-two trained by the purely supervised learning scheme.
Comparison between the Framework and the Purely Supervised Learning Scheme after FS
After FS, and when tested on all receivers, the performance of our framework and of the purely supervised learning scheme is shown in Table 4. In the table, the first row indicates the percentage of the data used to build the training dataset. In the first column, R1 to R21 indicate receivers no. 1 to no. 21, respectively. Additionally, the mean indicates the average MSE over all receivers. The bold numbers indicate the lower MSE value in each pair of our framework and the purely supervised learning scheme, i.e., the model with the better source localization performance. When the test dataset comes from a receiver far from the receiver used for training, performance degrades dramatically. This trend becomes more obvious as the percentage of data used for training decreases. This shows the limitation of the purely supervised learning scheme: when the labeled training dataset is limited, the generalization ability of the model is poor.
2.
Performance of our framework: Compared to the purely supervised learning scheme, our framework is more robust and has much lower MSE on the data collected by those receivers which are far from the receiver used to build the training dataset, even though its performance on the data collected by the receivers near the receiver used to train is a bit poorer. This trend is more obvious when the percentage of data used to train decreases.
3.
Comparison of the different percentages used to train: When the percentage of the data used to build the training dataset decreases, the performance of both schemes becomes worse. However, the degree of performance degradation of our framework is smaller than that of the purely supervised learning scheme.
Comparison of the Mean MSE and the Training Time between the Networks with and without FS
The performance of the networks with and without FS is shown in Table 5. In the table, the residual is the difference in mean MSE between the networks, and the percentage of the residual indicates the performance improvement (positive value) or degradation (negative value) due to FS. Observing Table 5, the following phenomena can be found: 1.
When the percentage of data used to train is 50% and 25%, respectively, the performance of the framework trained on R1 and R10 has some degradation (12.82% to 17.65%). However, when the percentage of data used to train is 12.5%, the performance of the framework trained on R1 and R10 has a slight improvement (7.69%) and degradation (4%), respectively.
3.
Compared to the framework, the performance of the purely supervised learning scheme gains more improvement (14.04% to 50.47%) after FS. The performance degradation only happens when it is trained on 50% R1, 50% R10, and 25% R10. The training time of the networks is shown in Table 6. This table illustrates that the training time is reduced significantly after FS for both the framework and the purely supervised learning scheme. According to the Tables, interesting phenomena can be found: 1.
For the framework, Structure 1 attains the lowest MSE except for trained on 50% R1 and 12.5% R1.
2.
For the purely supervised learning scheme, Structure 2 attains the lowest MSE with a slight improvement compared to Structure 1 when the percentages of data used to train are 25% and 12.5%. 3.
Structure 1 shows the best performance for training time reduction. Considering both MSE and training time, the best structure after the FS is Structure 1.
Conclusions of the Performance Analysis
In Figure 3, the conclusion of the performance analysis is illustrated. The legend used in all the sub-figures is the same. The blue bar is related to the network without FS. The orange, yellow, and gray bars are related to the 'Original structure', 'Structure 1', and 'Structure 2' in Tables 7 and 8, respectively. From Figure 3, interesting phenomena can be found:
1.
When the number of labeled data is gradually decreasing, the power of the framework with the semi-supervised learning scheme is revealed.
2.
The FS method is beneficial for both the framework and the purely supervised learning scheme, which can significantly decrease the training time with a slight loss of the performance of localization.
3.
After FS, the difference in performance between different receiver-depth is not significant, which means it can increase the robustness of the receiver-depth selection.
To have an intuitive view of the performance, Figure 4 shows the localization result of our framework trained on 50% R1 after FS.
Discussion of FS
In this section, the discussion of the selected features is given, which demonstrates that the most significant portion of the original features for source localization has been selected by performing the FS. The training dataset using 50% data collected by receiver no. 21 is used in this section for illustration.
Details of the Sources in SWellEx-96 Event S5
According to the details on the website of SWellEx-96 Event S5 [29], the deep source (J-15) transmitted 5 sets of 13 tones between 49 Hz and 400 Hz. The first set of tones was projected at maximum transmitted levels of 158 dB. The second set of tones was projected with levels of 132 dB. The subsequent sets (3rd, 4th, and 5th) were each projected 4 dB down from the previous set. The shallow source transmitted only one set containing 9 tones between 109 Hz and 385 Hz. According to Du et al. [33], 500-700 Hz is related to the noise radiated by the ship towing the sources in the experiment, which is also an important contribution for source localization.
Interpretation of the Selected Features
After the FS described in Section 2.3, the matrix X̃ of size N × M containing the selected features is created. To interpret the selected features, another PCA is conducted on this matrix. To investigate the correlation structure between the features and the PCs, the correlation loading is calculated based on the method proposed by Frank Westad et al. [26].
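A minimal sketch of computing correlation loadings is shown below, using the common definition of the Pearson correlation between each original variable and each PC score; this is an illustration rather than the exact procedure of Westad et al. [26].

```python
import numpy as np

def correlation_loadings(X, T):
    """Correlation between each column (feature) of X and each column (PC score) of T."""
    Xc = X - X.mean(axis=0)
    Tc = T - T.mean(axis=0)
    num = Xc.T @ Tc                                            # (K x A) cross-products
    denom = np.outer(np.linalg.norm(Xc, axis=0), np.linalg.norm(Tc, axis=0))
    return num / denom                                         # entries in [-1, 1]

# For two plotted PCs, the sum of the squared correlation loadings gives the fraction of
# that feature's variance explained, which is what the 50% / 100% circles refer to.
```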
As shown in Figure 5, the abscissa is PC 1 and the ordinate is PC 3. There are 2 circles in the plot, in which the inner and outer ones indicate 50% and 100% explained variance, respectively. The points between the two circles are the significant features that can explain at least 50% of the variance of the data. The legend with different colors illustrates the different sets of tones and the ship noise. From the correlation loading plot in Figure 5, phenomena with physical meanings can be found: 1. Along PC 1, the frequencies related to the high transmitted signal level are in the positive half-axis. The frequencies related to lower energy levels are in the negative half-axis. In more detail: • 7 frequencies (127, 145, 198, 232, 280, 335, 385 Hz) of the shallow source and 3 frequencies (238, 338, 388 Hz) of the deep source are in the area between the two circles, which means that they are significant features. • The remaining frequencies of the shallow source and the highest transmitted level (and also the tone with 391 Hz related to the second transmitted level) of the deep source are also close to the boundary of the inner circle, which means that they still have some importance for the data. • Except for the frequencies of the highest transmitted level and 391 Hz of the second transmitted level, the remaining frequencies are closer to the origin, which means that they are less significant from the statistical perspective.
2. Along PC 3, the frequencies related to the shallow source are in the negative halfaxis (except for the tone with 391 Hz). Furthermore, the frequencies related to the ship noise are also in the negative half-axis, since the ship can be treated as a shallow noise source. The frequencies related to the deep source are in the positive half-axis.
More specifically, the numbers of selected features among the different subsets of tones and the ship noise were also counted. According to the discussion above, the frequencies related to the shallow source, the deep source, and the ship noise are selected by applying the FS; these are the most important features for source localization. The FS process does not need any prior information.
Different Roles of the FS and the Autoencoder
Autoencoders are often used as feature extractors; thus, the considered feature-selection stage might seem redundant. However, the PCR adapted for FS is a linear method that selects the most important subset of variables for the regression target (i.e., source localization in this paper). The effect is that the non-linear processing of the autoencoder becomes easier to train (i.e., the training time is significantly reduced) while approximately the same performance is maintained.
After the FS stage, the most important subset of the original features is obtained. More specifically, the roles of the FS and the autoencoder in our framework are: • The FS: selecting the most important subset of the original features, reducing the training time of the framework and providing a good starting point for it. • The autoencoder: performing the unsupervised learning that captures the information contained in the dataset.
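As a rough illustration of this idea (not the exact procedure of Section 2.3), a PCR-style feature ranking could be sketched as follows; the number of principal components, the selection size, and the ranking criterion (absolute back-projected regression coefficients) are assumptions made for this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pcr_feature_ranking(X, y, n_components=10, n_selected=50):
    """Rank the original features by their contribution to a PCR model of the target y."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)               # N x n_components PC scores
    reg = LinearRegression().fit(scores, y)     # PCR: regress the target on the PC scores
    beta = pca.components_.T @ reg.coef_        # back-project coefficients onto the M features
    ranking = np.argsort(np.abs(beta))[::-1]    # larger |coefficient| = more important feature
    return ranking[:n_selected]                 # indices of the selected features
```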
Conclusions
In this paper, we utilize a two-step semi-supervised framework for source localization to deal with the limited amount of labeled data available in many real scenarios. To accelerate the training stage of the framework for real-time operation, an FS method based on PCR is proposed.
Based on a public dataset, SWellEx-96 Event S5, the performance of our FS method and the two-step framework has been demonstrated. The results show that the framework is more robust on unseen data, especially as the amount of labeled training data decreases. After FS, the training time is significantly reduced (by an average of 95%). The localization performance degrades slightly when 50% and 25% of the data are used for training. However, when only 12.5% of the data are used for training, a condition closer to real scenarios, the FS method improves the performance of both semi-supervised learning and purely supervised learning.
It should be mentioned that the structure of the network used in this paper is only a demonstration of the performance of our framework. More complex and powerful networks can be applied within this framework, and we anticipate that the source localization performance will improve further, provided the network is trained appropriately and well.
| 6,902 | 2021-04-13T00:00:00.000 | ["Computer Science"] |
Senior High School Homeschoolers' Perceived Impact of the Flipped Classroom on ESL Learning in an Online Platform
Many scholars have conducted studies on the Flipped Classroom (FC) over the past decade. Several of these studies have shown the FC's positive effect on college students' learning in face-to-face classrooms; however, limited research on the FC has focused on high school learners or on online learning. To address this gap, this study focused on the perceived effect of the Flipped Classroom on Grade 11 homeschoolers' ESL learning in a digital classroom during the pandemic. Twenty-five Grade 11 students were taught the Reading and Writing Skills subject using the FC for three days. The researcher observed her co-researcher's Flipped Classroom execution and gave her constructive feedback to refine her FC implementation.
Introduction
The Covid-19 pandemic has transformed the educational landscape worldwide. Suddenly, in 2020, all schools shifted from traditional face-to-face learning to remote learning. This drastic change forced teachers to learn new skill sets to be able to adapt to online teaching to address students' educational needs. It was a huge leap in terms of educational technology use that several educators found themselves grappling with.
Teachers play several roles in online learning. Aside from being subject experts and facilitators of learning in online classrooms, they also serve as technicians, class advisers, and counselors.
As technicians, they provide technical support to students encountering login issues in video conferencing platforms, accessing instructional materials, and uploading their projects to the school's Learning Management System (LMS). As class advisers and counselors, they guide students suffering from learning loss, anxiety, and depression due to prolonged lockdowns.
With these multifaceted roles for more than two years of the global pandemic, educators have had to work more than 8 hours a day to prepare online instructional materials and engaging learning activities for students. "Moreover, they have to create an online learning environment that will afford students to receive feedback, interact with peers, and monitor their learning to ensure learning success in the process" (Cequeña et al., in press, p. 1).
Thus, online or e-learning has transformed how classroom instruction is conducted to pave the way for various teaching methods and approaches, such as blended learning and flipped classrooms, to thrive (Chen, 2021).
The flipped classroom is an instructional approach that aims to make students active participants in learning. Jonathan Bergmann and Aaron Sams (2012) proposed this approach, which highlights students' learning through practice and first-hand experience (as cited in Chien, 2021). In this type of classroom, students watch pre-recorded lectures and videos as pre-work before attending the live online or face-to-face class. This gives students more time to digest and understand the lesson at home, reduces teachers' discussion time, and allows more extended periods for the actual application of learning during class (Chien, 2021; Musa et al., 2021). However, only a few studies have investigated the learning outcomes of students using the flipped classroom, particularly in an online setting. Several studies have shown the effectiveness of this approach in Science and Mathematics (Ackayir & Ackayir, 2018; Musa et al., 2021), Research Methods (Sirakaya & Ozdemir, 2018), English Language classrooms (Chaqmaqchee, 2021; Hung, 2014; Quyen & Loi, 2018; Turan & Cimen, 2019), and other disciplines in tertiary education, but a dearth of research has been conducted in secondary schools (Mohammed & Daham, 2021; Wang, 2017).
Therefore, this study intends to find out Senior High School homeschool students' perception of the impact of the flipped classroom in an online platform by addressing the following research questions: 1. What is the perception of the Grade 11 students on the impact of the Flipped Model Classroom on their learning? 2. Which of the activities utilized in the Flipped Classroom did the Grade 11 students like the most?
Review of Related Literature
The Flipped Classroom (FC) Model is an instructional method of flipping a classroom in which lesson content generally studied by students in a traditional classroom through a teacher's lecture is done at home at the learners' own pace.In FC, the learners watch video lessons and read textbook chapters and other supplementary materials about the topics assigned prior to Face-to-face or synchronous sessions.At school, the teacher provides only a short review and clarification of the lesson, and students spend much of their time doing collaborative learning activities as applications of the concepts learned.In a nutshell, Flipped classroom means "school work at home and home work at school" (Flipped Learning Network, 2014, para.1).Flipped Learning allows teachers "to utilize class time to guide each student through active, practical, innovative applications of the course principles" (The Academy of Arts and Sciences,n.d.,para. 2).Moreover, through the Flipped Classroom, students can further develop their higher order thinking skills (HOTS) as they collaboratively work in answering HOTS questions.In addition, teachers can spend more time helping struggling learners understand important concepts of the lesson (Kerr, 2020).Most importantly, educators can observe students' mistakes and understand their thought processes as they work in in-class activities.This way, they can tailor their instruction and learning activities to address students' learning needs (The Benjamin Center, 2020).FLN (2014) presents the four pillars of Flipped learning derived from its acronym F-L-I-P that can guide school administrators in implementing Flipped Classroom in case they will buy into the idea of adopting Flipped Learning in their classrooms.The first pillar, Flexible Environment, allows the learners to choose the time and place to study.The second pillar, Learning Culture, focuses on a learner-centered approach, empowering the learners to explore topics in greater depth and actively engage in meaningful learning.The third pillar, Intentional Content, highlights the importance of designing instructional content, materials, strategies, and student-centered learning activities with great emphasis on students' cognitive levels and subject matter.Finally, the fourth pillar, Professional Educators, stresses the importance of regular monitoring and assessing students' learning progress by providing constant feedback on their academic outputs.Professional teachers should also reflect on their teaching performance and establish a network that can provide them with constructive feedback to further improve their classroom practices.
The implementation of the Flipped Classroom varies among educators and researchers. This study adapts Marshall's (2017) and Marshall and Buitrago's (2017) Synchronous Online Flipped Learning Approach (SOFLA) framework because it is feasible to implement for adult learners and it is aligned with the flipped learning principles in online instruction (as cited in Marshall & Kostka, 2020).
Generally, SOFLA consists of eight steps: 1. Pre-work.The learners do this at home, where they are asked to watch video lessons (which can be made interactive) and study a chapter of their textbook, PowerPoint presentations, and other supplementary materials provided by the teacher.Its purpose is for the students to study their lessons in advance to understand important concepts.2. Sign-in activity.Whether synchronous learning or face-to-face class, sign-in activity is a reinforcement activity that the learners do as they sign in, log into the virtual classroom, or come into the physical classroom.For instance, in a literature class, the learners may be asked to answer an activity to identify the meaning of the underlined figures of speech used in sentences.3. Whole group discussion.This is where the teacher guides the entire class in solidifying their understanding of the concepts learned in the pre-work to correct misconceptions.For example, they learned in the video lesson how to interpret a poem by understanding the meaning of the figures of speech used in the poem.For whole group discussion, the teacher may present an excerpt from the poem and provide the class prompts (questions) to guide them in interpreting its meaning based on the figures of speech used.The teacher processes their answers and presents the correct interpretation of the poem.
4. Breakouts. This fourth step requires the students to work in small groups and discuss the learning activity provided by the teacher as a hands-on application of the concepts learned.
One concrete example is giving each group a stanza (excerpt) from another poem to identify the figures of speech used and their meanings and then to explain how they contribute to the stanza's overall meaning. 5. Share-out. This is the group sharing of outputs, where each group assigns a representative to discuss their interpretation of the stanza based on the meaning of each figure of speech. If time allows, one or two groups may be called to provide insights or feedback about each group's presentation. The teacher may provide the structure on how to provide feedback. 6. Preview and Discover. This is different from the previous steps because it is not about the lesson for the day. It is where the teacher provides a glimpse of the upcoming lesson, whose purpose is to excite the class about the next meeting's topic and encourage them to do their pre-work activities. 7. Assignment Instructions. This step allows the teacher to provide instructions for the next out-of-class work activities, such as what tasks to do, what videos to watch, and which texts to read. 8. Reflections. This is the last step, in which the learners are encouraged to write reflections in the Zoom chat box about what they found useful and meaningful in their session. They can write their insights and key takeaways about the lesson they learned. Hwang et al. (2015) stressed that the success of the flipped classroom depends on teachers' seamless design of in-class and out-of-class activities. Knowing the proper implementation of the FC Model is significant because of its multifaceted benefits for both students and teachers. The FC provides flexible learning where students can study at their own pace; it allows students to interact with their peers about the lesson's concepts (Kerr, 2020); and it fosters independent learning with the teacher's support (Lee & Martin, 2020). With the FC, students whose personality types do not fit the traditional classroom can benefit from its flexible set-up, where they can rewind and watch video lectures again to understand important concepts well. Students' frustration level remains low with the Flipped Classroom (Du et al., 2014). For teachers, on the other hand, the FC approach allows educators to work closely with students in school and to group those who can work together. It also helps improve students' attitudes and their ability to solve open-ended problems (Du et al., 2014). However, the challenges in the implementation of the Flipped Classroom include students' internet accessibility and technical ability (Du et al., 2014; Lee & Martin, 2020; Wang, 2017), learners' self-motivation (Du et al., 2014), technical support for teachers, unclear learner responsibility, and an inability to provide immediate lesson clarification (Lee & Martin, 2020). Lee and Martin (2020) recommended that, for Flipped Classrooms to be effective, there is "a need to establish guidelines for best practices in flipped classrooms and to develop high-quality approaches to flipping without a dependence on instructional videos" (p. 2605).
The Flipped Classroom and Connectivism
The digital age has led to the emergence of new learning theories and pedagogical frameworks. The Internet has opened opportunities to further communication and collaboration with anyone, including distance learning. More than ever, student-centeredness, self-regulation, and peer collaboration have risen significantly with the use of technology in learning. These elements are thought of as contributing factors to students' active involvement and increased motivation, thereby improving learning achievement (Boyraz & Ocak, 2021).
Learning through interactions is at the core of the learning theory of connectivism. Connectivism's main goal is for the student to learn through dialogue or in a social context and to gain autonomy in the process (Boyraz & Ocak, 2021).
Connectivism as a learning theory was developed by Siemens (2005) and Downes (2007). According to Downes, Connectivism is "the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks" (Downes, 2007, para. 1). Networks can be a book, a person, or a webpage (technology/media), which are considered information sources. Learning through connectivism is the ability to connect these nodes of information and to see connections between concepts and ideas. Furthermore, learning from the Connectivism theory recognizes the importance of social interaction, diversity of ideas, and the accuracy and recency of knowledge (Siemens, 2005).
Connectivism is for the digital age as it utilizes media as a significant and enabling tool for gaining knowledge (Shrivastava, 2018). The flipped classroom follows this learning theory with its effective use of technology, allowing learners to develop autonomy in the classroom.
The Flipped Classroom and Its Perceived Impact on Academic Achievement and Learning
Several studies posited the positive effects of the Flipped Classroom Model on students' academic achievement in various disciplines, such as an Enterprise Resource Planning System course (Chien, 2021); a Teaching Principles and Methods course (Debbag & Yeldiz, 2021); the English language (Chaqmaqchee, 2021); educational technology (Alamri, 2019); biology courses (Lensen et al., 2018); Scientific Research Methods (Sirakaya & Ozdemir, 2018); a basic medical history-taking course taught in English (Jego et al., 2017); research methods (Nouri, 2016); chemistry (Baepler et al., 2014); and the English language (Hung, 2014). Conversely, Cabi (2018), in her experimental study, revealed that there was no significant difference between the academic performance in Computer 1 of the experimental group taught using the FC Model and that of the control group taught using Blended Learning. However, Cabi (2018) noted some positive aspects of the FC Model as perceived by the respondents, including coming to classes prepared and completing the assignments in class instead of doing them at home. Pre-service teachers encountered problems that included a lack of motivation, difficult and insufficient materials for content, and time constraints in learning/studying.
In a meta-analysis conducted by Akcyir and Akcyir (2018), which surveyed 71 articles regarding the advantages and challenges of the Flipped Classroom (FC), they discovered that top among the advantages are learning outcomes, which include improvement of learning performance (52%), satisfaction (18%), and engagement (14%). Some of the challenges of the Flipped Classroom for students include limited student preparation before class time (12.68%) and its being time-consuming (8%). Turan and Cimen (2019) found in their systematic review of 43 articles that the greatest challenges of using the FC from the students' perspective include the extra load for learners and technology/internet-related problems. Two common challenges for teachers are time-consuming preparations (14%) and additional workload (7%). Producing pre-recorded videos and in-class and out-of-class activities takes a lot of time and requires technical skills (Akcyir & Akcyir, 2018; The Benjamin Center, 2016).
In the area of English language teaching, some studies reviewed in this paper revealed the positive effect of the FC on students' academic performance. In their quasi-experimental study, Quyen and Loi (2018) investigated the effects of the Flipped Classroom on college students' speaking performance. Results of their study showed that the experimental group's speaking skills improved after being taught English as a Foreign Language using the Flipped Classroom Method. Similarly, Turan and Cimen (2019) noted that among the 43 articles they reviewed which investigated the effect of Flipped Classrooms on English language teaching, 18 articles revealed the effectiveness of the FC in teaching English as a foreign language. Furthermore, they also found the following top findings as regards the FC's advantages: enhancing learners' engagement, speaking skills, peer interactions, and learning achievements. Marks (2000) and Rajabalee et al. (2019) posited a direct relationship between engagement and academic achievement. They found that students who demonstrate greater psychological engagement have higher grades. However, the question remains whether engagement translates to learning.
Learning is defined differently by various scholars.However, only some basic learning definitions were taken that are relevant to the study.Learning is "a process that leads to change, which occurs as a result of experience and increases the potential for improved performance and future learning" (Ambrose et al., 2010, p. 3).Houwer and Moors (2013) stated that learning as a change in behavior caused by an experience requires regularity of stimulus or environment.For Brown et al. (2014), as cited in Academic Affairs, The University of Arizona (2022), learning means "acquiring knowledge and skills and having them readily available from memory so you can make sense of future problems and opportunities" (para.2).This view is based on the Cognitive Theory of Learning wherein the learner uses his background knowledge and skills acquired in processing new information (Rumelhart, 1980).From the constructivists' view, learning is a product of the learner's construction of knowledge as he transacts meaning with text, visual images, audio, videos, and other forms of information.
Successful learning can be determined by students' engagement, which can be classified into three types: behavioral, cognitive, and emotional engagement. These three types of engagement are intertwined. Behavioral engagement is shown when students demonstrate interest, focus their attention on a task, actively share their ideas, and ask questions. Cognitive engagement is focused on the learning task to be accomplished. For instance, as the teacher begins his lesson by showing his class a poster on literacy advocacy (a picture of poor children in a barrio being taught by a school teacher in a small hut), he may encourage his students to share their thoughts about it. Once these students interact more enthusiastically about this topic of literacy advocacy with their teacher and peers in a dynamic way, it is termed emotional involvement or emotional engagement (Csikzentmihalyi, 1988). Emotional engagement refers to the learner's feelings toward school or teachers, which may include boredom, happiness, grief, anxiety, pleasure, or unhappiness (Connell & Wellborn, 1991; Skinner & Belmont, 1993). Finally, cognitive engagement refers to the learner's effort to demonstrate his capabilities to learn new knowledge or skills. Given the same poster, the teacher may ask the students to write their interpretation of the poster after listening to each other's insights in the preliminary activity. These written outputs reflect the learners' cognitive engagement.
In the context of the study, learning refers to the learners' change of behavior through the experiences provided in class through the Flipped Classroom (FC).As Houwer and Moors (2013) stated, there should be the regularity of environment or stimulus for a change of behavior to take place through experiences (learning).In this study, regularity of environment and stimulus was done through the Flipped Classroom introduced to the learners, wherein they were asked to watch videos, read the lessons in their textbook, and the teacher's slide presentations (pre-work) prior to their synchronous classes.During synchronous sessions, important concepts were reviewed through a question and answer section to clarify misconceptions (whole group session).Then, group activities were accomplished by the students in the breakout rooms (breakouts).Group presentations of outputs came next (shareouts).Finally, the students wrote the synthesis and key takeaways in the chat box (synthesis and reflections).Perception of students' learning is measured by three types of engagement such as behavioral, cognitive, and affective/emotional engagement.
While several studies have been conducted on the effects of the FC model on students' achievement across disciplines in face-to-face classroom environments in tertiary education, limited studies have been conducted on the perceived effect of the approach on students' learning in secondary education and in online or distance learning. To address this research gap, this pilot study aimed to determine the perception of the Grade 11 homeschoolers on the impact of the Flipped Classroom on their learning in a digital environment.
Participants
The present study originally had twenty-five Grade 11 students as respondents. However, only 15 and 11 students were able to answer the Day 2 and Day 3 surveys, respectively. These Grade 11 students were homeschoolers enrolled in a Catholic homeschool in the country. However, the senior high school homeschooling set-up in the institution does not follow the conventional homeschooling program wherein students' parents serve as their teachers. These homeschoolers have online classes in all senior high school subjects except Physical Education and Health. They have a synchronous session for each subject once a week.
Instruments and Data Sets
Likert Scale Survey. The Likert scale survey consists of eight 4-point Likert scale questions that measure the students' perception of their learning of the English lessons (choosing a degree program and a university, resume writing, cover letter writing, and conducting a mock job interview) taught using the Flipped Classroom Model, and two open-ended questions. Some examples of the Likert scale questions include: "Activities done in today's lesson made me engaged and actively participated in the discussion" and "I learned the lesson well in today's session because of the activities provided by the teacher." The two open-ended questions asked the respondents about the learning activities they liked the most and their comments about the Flipped Classroom. Prior to administration, the survey was validated by a language expert and a social science researcher. The survey was also subjected to Cronbach's alpha, which generated a value of 0.87, indicating that the survey shows internal consistency and is a reliable measure of the perception of students' learning.
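For reference, Cronbach's alpha for a set of Likert items can be computed as sketched below; the response matrix and its layout are hypothetical and do not reproduce the authors' actual analysis.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (eight in this survey)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# A value around 0.87 is conventionally read as good internal consistency.
```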
Focus Group Discussion Protocol. The FGD Protocol consists of three open-ended questions asking the respondents about the learning activities they liked the most, their comments or perceptions regarding the impact of the Flipped Classroom on their learning, and whether they would recommend using the Flipped Classroom in teaching other senior high school subjects.
Procedure
Before the commencement of the study, the researchers sought the consent of the School Directress, the students, and their parents. Twenty-five Grade 11 students served as the respondents of the study. They were homeschoolers with online classes twice weekly for their core and specialized subjects as academic support. Since most senior high school subjects are highly technical and specialized, the institution designed its homeschool program to combine asynchronous and synchronous sessions twice a week. The participants of the study, the Grade 11 students, were taught the Reading and Writing Skills subject using the Flipped Classroom Model for three weeks, with one session per week. Technically, the study was done in only three sessions spanning three weeks, with one hour and 15 minutes per session. This short period of Flipped Learning implementation was necessary because the following week was already the final examinations, and the second semester would be over. The second semester, held at the height of the global pandemic, was shortened from the original sixteen weeks to fourteen weeks due to the health break declared by the school authorities to allow academic ease for both teachers' and students' physical health. The term consisted of ten weeks of conventional teaching, where the teacher lectured most of the time, providing limited time for students to interact with their teacher and classmates. This had been a common scenario in the school's online learning during the pandemic in School Year 2020-2021. This prompted the researchers to conduct a pilot study, even for a short period, to find a more effective pedagogy for helping students learn the important competencies of each subject. Hence, this study aimed to determine the students' perception of the Flipped Classroom in relation to their learning of important concepts and the group activities provided. During the three sessions, the researcher observed the Flipped Classroom execution of her fellow researcher and jotted down comments on the implementation and the students' performance. After each class, she met with the teacher, her co-researcher, and gave constructive feedback to refine the FC implementation in her online classes.
The FC Model adapted Marshall's (2017) and Marshall and Buitrago's (2017) SOFLA in teaching the Reading and Writing Skills subject, but the eight stages were reduced to five. The FC method utilized in this study consists of these five stages: (1) Pre-work done at home - independent study of lesson content such as online videos, PowerPoint presentations, textbook readings, other supplementary materials, and learning prompts uploaded to Google Classroom a few days before the online class (out-of-class work); (2) Whole Group Application - review of the concepts learned from videos and readings to clarify misconceptions (in-class work); (3) Breakouts - collaborative activities (group work) as an application of the concepts learned (in-class work); (4) Share-outs - sharing of group outputs; and (5) Synthesis and Reflections - summary of the important concepts learned and reflections or key takeaways from the lessons as well as from their interactions with their teacher and peers.
On Day 1 of the FC's implementation, the researchers purposely did not administer the survey since the FC's execution was still to be refined. On Days 2 and 3, after the session, the researcher-made survey described above was administered to evaluate the students' perceived impact of the Flipped Model on their learning. The teacher posted the link to the survey in the Zoom chat box, and the students were given two to three minutes to answer. However, only 15 students answered the survey on Day 2, and only 11 students responded to the same survey on Day 3. This low response rate was due to some students leaving the online class earlier than the rest, since they had to log in to their next synchronous class via Google Meet or Zoom. A focus group discussion was also conducted on the afternoon of Day 3 for the same purpose. However, only two students participated, since most of them were busy doing their final projects, having only one week to complete their school requirements.
Data Analyses
The data gathered were then subjected to statistical analysis. Responses to the open-ended and FGD questions were coded, and categories were tallied. The weighted mean was used to quantitatively measure the students' perception of the impact of the Flipped Classroom on their learning.
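A minimal sketch of the weighted-mean computation for a single 4-point Likert item is shown below; the response counts are hypothetical and used only to illustrate the calculation.

```python
# Weighted mean of one 4-point Likert item: sum(scale value x frequency) / total responses.
scale_values = [4, 3, 2, 1]     # Strongly Agree, Agree, Disagree, Strongly Disagree
counts = [12, 2, 1, 0]          # hypothetical number of respondents choosing each option

weighted_mean = sum(v * c for v, c in zip(scale_values, counts)) / sum(counts)
print(round(weighted_mean, 2))  # 3.73 for these hypothetical counts
```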
Results and Discussion
Table 1 shows the learning indicators as determined by the three types of engagement (behavioral, cognitive, and emotional) on Day 2 and Day 3 of implementing the FC Model in the online teaching of the Reading and Writing Skills subject. The findings indicate that among the three types of learning indicators, behavioral engagement ranked first (Day 2, M = 3.90; Day 3, M = 3.75). Most participants perceived that they were actively engaged in the learning activities provided by their teacher. The studies of Turan and Cimen (2019) and Akcyir and Akcyir (2018) corroborate this finding that the FC results in students' engagement. Once students are actively engaged in class activities, learning takes place. Marks (2000) and Rajabalee et al. (2019) found a direct link between engagement and academic achievement. Secondly, the respondents also reported that they learned the lessons well because of the group activities provided over the two days (M = 3.73 and M = 3.70) and because of the materials, namely the videos (Day 2, M = 3.53; Day 3, M = 3.70) and other supplementary readings (Day 2, M = 3.73; Day 3, M = 3.60), uploaded to their Google Classroom, which the students studied prior to attending their English class (cognitive engagement) (Connell & Wellborn, 1991; Skinner & Belmont, 1993). Kerr (2020) and Marshall and Buitrago (2017) recognized the importance of collaboration among students in the Flipped Classroom as they engage in meaning-making. Students' learning of concepts is made permanent through the meaningful and authentic activities they engage in via the Flipped Classroom.
Finally, it can be noted that there was a significant increase in the mean of students' responses when asked if they liked the structure of the Flipped Classroom (emotional engagement, item 8), from M = 3.33 on Day 2 to M = 3.70 on Day 3. This implies that the students tend to like the new teaching method as they become familiar with it. Generally, any method introduced to students for the first time may not be welcomed well at first. However, as they get familiar with it and get used to the process, they can readily adjust and appreciate it once they understand its structure and process of implementation. Next, their level of satisfaction or enjoyment (emotional involvement) with the activities provided in the Day 2 session was higher than in the Day 3 session. However, in both sessions, the students rated the activities higher than the average (Day 2, M = 3.80; Day 3, M = 3.70). This shows that they were satisfied and emotionally engaged with the learning activities provided. Hence, it may be deduced that although the implementation of the Flipped Classroom was done in a short period of time, the participants perceived its positive impact on their learning, as shown by their high level of engagement behaviorally, cognitively, and emotionally.
The Grade 11 students liked mock job interviews and group work the most (Figure 1). This shows that the students appreciated simulations of real-life situations and collaborative activities because they learned well through interacting with peers. Basically, this is the core principle of the Flipped Model, that is, to provide ample time for students' group activities and engagement (Akcyir and Akcyir, 2018; Du et al., 2014; Kerr, 2020; Marshall & Buitrago, 2017) as an application of the concepts learned in the subject. That is the logic behind flipping the classroom: making the students study content at home through video lessons and readings provided by the teacher and doing hands-on activities in school via online learning. Table 2 shows the learning activities that the homeschoolers liked the most and their reasons for liking them. Most students said that they liked the mock job interview the most because it provided them with hands-on, real-world experience. This simulation activity gives them an insight into how a young professional should behave or present himself/herself before a prospective employer and answer interview questions to gain employment. Other students said they liked group work the most because it was fun collaborating with peers.
Finally, some students also listed cover letter writing as an enjoyable and challenging activity. When asked what other comments they could provide to the researcher about the Flipped Classroom, some suggested providing more group activities.
Observation
On the first day of the Flipped Classroom execution, the teacher never lectured; instead, she asked questions to clarify misconceptions and to gauge the students' understanding of the assigned lessons: applying for college admissions and resume writing. Sample questions include: What are the major considerations in choosing a degree program and a university? How do you write an impressive resume? Afterward, the students were asked to perform a group activity and write a resume based on the given information about a showbiz personality. The students enjoyed their time writing a resume in the breakout rooms. Finally, they were asked to write their learnings for the day's lesson in the Zoom chat box since there was no time for group presentations.
On Day 2, the teacher reviewed with the class, through questioning, the format of a business letter, the contents of a cover or application letter, and the effective way of writing an impressive cover letter. Next, breakout rooms were opened for the group activity, where four members collaboratively rewrote the incomplete cover letter provided to make it more impressive. They were given twenty minutes to finish the activity. Group presentations of outputs followed. Finally, students were asked to write down their learnings. With these varied activities, maximum engagement among students was evident.
During the third day of the Flipped Classroom implementation observation, the researcher was assigned to observe one group of students doing a mock job interview. During the 15-minute breakout session, two pairs of students took turns interviewing each other. The first pair portrayed the roles of an interviewer and an applicant, with the interviewer asking two or three questions to the applicant (e.g., Describe yourself; What are your strengths and weaknesses? Why should we hire you? Where do you see yourself five years from now?). The same set of questions was asked. Then, they switched roles. During the mock job interview, the students actively answered the interview questions. They were spontaneous in their responses. They enjoyed listening to their group mates' answers.
On Day 3, during the Focus Group Discussion with two students held in the afternoon after their online class in their last subject, the participants gave responses similar to those of the group for the two open-ended questions included in the survey. They found the group activities engaging and enjoyed working with peers because they could accomplish the work despite the limited time. When asked if they would recommend using the Flipped Classroom in other subjects, they said that it could be used in Language and Social Sciences but not in Math and Science subjects in Senior High School due to the complexity of topics in both disciplines. They also pointed out that the videos that teachers make or choose from online resources should be short but informative and engaging. Finally, they commented that the major challenge teachers would encounter in utilizing the Flipped Classroom was students' self-motivation to study their lessons by doing pre-work (e.g., watching the prescribed video/s and reading the lessons in their textbook) before coming to their online classes (Du et al., 2014).
Conclusion
This pilot study on the perceived impact of the Flipped Classroom Model on Grade 11 homeschoolers' ESL learning has two major findings. First, the Grade 11 students perceived the positive impact of the Flipped Classroom on their learning of language skills (cognitive engagement) through engaging in group activities and discussions during the online sessions (behavioral and emotional engagement) and through the materials provided before the session (cognitive engagement) (Connell & Wellborn, 1991; Skinner & Belmont, 1993).
Second, the respondents indicated that they liked the mock job interviews and group work provided during the three-day implementation of the Flipped Classroom Model. They stated that they enjoyed group work because it was fun collaborating with peers in accomplishing the task within the limited time. As Kerr (2020) and Marshall and Buitrago (2017) posited, students' learning is facilitated through collaborative work. Hence, providing engaging activities aligned with the target competencies will help students learn the knowledge and skills well.
The Flipped Classroom provides students adequate time to apply concepts and skills through engaging, authentic group activities and to process the way they learn (metacognition), which is essential in making learning permanent. Given the positive impact of the Flipped Classroom on students' learning and engagement, it is recommended that educators across disciplines utilize the FC method in their instructional delivery. However, as Hwang et al. (2015) state, the success of the implementation of the FC method lies heavily on the teachers' instructional design. Educators should therefore carefully design the pre-work materials, such as videos and supplementary readings for students to study at home, as well as the in-class collaborative learning activities that students will work on during the synchronous sessions, to ensure a seamless progression of concepts from the pre-work to the synthesis phase of the lesson and to maximize students' learning.
However, educators should take note of the challenges that students may encounter in learning through the Flipped Classroom. These include internet accessibility, technical ability, workload (Turan & Cimen, 2019), and self-motivation (Cabi, 2018; Du et al., 2014). Teachers should conduct a needs assessment of their students in these four areas to address these needs and maximize learning. Implementing Flipped Learning requires knowledge of content, pedagogy, and technology (Turan & Cimen, 2019) in designing lessons, learning activities, and videos so that Flipped Learning can be executed seamlessly in online learning environments. Thus, teachers' training on the Flipped Classroom is necessary to ensure smooth execution of this method in online learning. School administrators should also be aware of their teachers' physical, technical, and pedagogical needs prior to shifting from conventional teaching methods to the Flipped Classroom.
Since the study is based on students' perception of the impact of the FC Method on their learning and the data obtained were from a small sample size, follow-up research is recommended using an experimental method to compare the effect of the Flipped Classroom with that of the conventional method in online learning.
Limitations of the Study
Since this is a pilot study, as researchers, we have noted some of its limitations, including its small sample size and short duration. This pilot study was conducted over only three sessions, and the data were obtained from a small sample. Despite these limitations, the respondents of the study, the Grade 11 students, demonstrated a positive perception of the Flipped Classroom because of the engaging, collaborative activities provided by the teacher. To make the findings more robust, a follow-up study using an experimental design or action research with a larger sample of students over a semester or two is recommended to gather sufficient data from which conclusions can be drawn.
(Reading Education) from UP Diliman and Master of Arts in Literature from Ateneo De Manila University. She is currently the Academic Department Head of Catholic Filipino Academy Homeschool (CFAH). Prior to her stint at CFAH, she had been a language educator, teaching English and research courses at De La Salle University, University of Santo Tomas, National University, and Siena College Taytay. As a writer, she co-authored high school English textbooks and a collection of poems. As a researcher, she has published a number of research articles on ESL writing, metacognition, and reading comprehension in reputable journals. She has also presented these research papers at international conferences in San Antonio, Texas (USA), Hawaii, Rome, Thailand, Malaysia, Korea, Singapore, and Hong Kong. (email: maria.cequena@cfa.edu.ph) Niña Svetlana Mendoza is currently working as the Humanities and Social Sciences program track adviser in the Senior High School Department of the Catholic Filipino Academy Homeschool. She is a licensed professional teacher and has earned her Master of Arts in Reading Education from the University of the Philippines-Diliman. For the past decade, she has participated as a volunteer tutor in organizations that provide literacy programs to underprivileged children. She has recently served as a volunteer reading tutor to children living in Manila North Cemetery with ATD Fourth World Philippines.
Declaration of Possible Conflict of Interest
This is to confirm that we have no conflict of interest in the possible publication of this research article in MJSELT.
Figure 1. Learning activities that the Grade 11 students liked the most: mock job interviews and group work.
Table 1. Perceived Impact of the Flipped Model on the Grade 11 Students' Learning
Table 2. Learning Activities that the Grade 11 Homeschoolers Liked the Most
| 8,947.4 | 2022-12-30T00:00:00.000 | ["Education", "Computer Science"] |
Nanofocused synchrotron X-ray absorption studies of the intracellular redox state of an organometallic complex in cancer cells †
Synchrotron nanoprobe X-ray absorption (XAS) studies of a potent organo-osmium arene anticancer complex in ovarian cancer cells at subcellular resolution allow detection and quantification of both Os II and Os III species, which are distributed heterogeneously in different areas of the cells. We used hard X-rays (over 5-10 keV) focused to a 70 × 100 nm² spot size at the nano-analysis beamline ID16B 14 (ESRF) to record XAS spectra on selected areas of A2780 ovarian carcinoma cells treated for 24 h with 1 mM of 1. Both 500 nm thick sections of Epon-embedded cells [TS] and whole cells cryo-fixed and then dehydrated [CFD] were studied.
The rich chemistry of transition metals allows them to undergo a wide variety of reactions inside cells. 1 Novel metallodrugs can be designed which reach diverse cellular targets and have unusual effects on biochemical pathways. 2 They offer a new strategy to help overcome tumour resistance to chemotherapy, a current clinical need. 3 However, elucidation of the effects of metallodrugs on biomolecular interactions in cells is a major challenge. 4 The organometallic osmium half-sandwich, 'piano-stool' complex [(η6-p-cym)Os(Azpy-NMe2)I]+ (p-cym = p-cymene, Azpy-NMe2 = 2-(p-((dimethylamino)phenylazo)pyridine)) [1] (Fig. 1a) is a promising anticancer drug candidate with interesting anticancer activity in vitro and in vivo. 5 The osmium complex penetrates readily into the core of tumours, 6 has a different mechanism of action from cisplatin and is capable of overcoming platinum resistance. 5 Complex 1 is a relatively inert prodrug that is activated by hydrolysis of the Os-I bond in a reducing environment inside cells, 7 and rapidly generates ROS through mitochondrial pathways. 8 In vitro experiments have shown that the reaction between 1 and GSH generates reactive analogues, including the hydroxido and chlorido complexes 1-OH and 1-Cl, respectively, which can bind to cysteine residues and generate OH radicals in the presence of hydrogen peroxide (Scheme S1, ESI †). 7 However, the nature of the derivatives of 1 involved in the cellular activity of the drug after intracellular activation is not understood. Here we use X-ray absorption spectroscopy (XAS) to investigate the chemical behaviour of 1 in cancer cells.
Synchrotron radiation is particularly suitable for exploring the chemical behaviour of metals in biological samples. 9 XAS is a powerful technique that can be used to obtain valuable information about the chemical state, and the electronic and structural properties of metals. This is achieved by using an X-ray beam of variable energy to probe the binding energy of electrons in specific electronic shells of the element (X-ray Absorption Near Edge Structure; XANES), or the elastic scattering processes between the photo-electrons generated by the incident beam and other atoms in the vicinity of the metal centre (Extended X-Ray Absorption Fine Structure, EXAFS) (Fig. S1, ESI †). The study of bulk cell populations (e.g. cell pellets or tissue samples) 10 and single cells (by using microprobe beamlines) 11 with these techniques has provided information on the oxidation state and speciation of several metallodrugs. Furthermore, state-of-the-art nanofocused synchrotron radiation allows imaging and spectroscopic techniques to be combined in studies of metal systems with subcellular resolution. However, although nano-XAS has allowed the chemical characterisation of nanomaterials with nanometre resolution, 12 sensitivity limits of the technique usually hamper its application for studies of metals inside cells.
To record meaningful XAS spectra using nanofocused synchrotron radiation, a metallodrug needs to be (1) present in high concentration inside cells (difficult to achieve when using biologically-relevant doses), or (2) concentrated within small specific areas of the cell. Recent X-ray fluorescence (XRF) maps of cells treated with 1 show a marked concentration of osmium within small organelles (probably mitochondria; supported by ICP-MS analysis of mitochondrial fractions extracted from the same cells). 13 Hence, we have investigated nano-XAS spectra of the Os L-edge in cells treated with 1 to provide new insight into its intracellular speciation.
We used hard X-rays (over 5-10 keV) focused to a 70 × 100 nm² spot size at the nano-analysis beamline ID16B 14 (ESRF) to record XAS spectra on selected areas of A2780 ovarian carcinoma cells treated for 24 h with 1 mM of 1. Both 500 nm thick sections of Epon-embedded cells [TS] and whole cells cryo-fixed and then dehydrated [CFD] were studied.
This comparison was made to rule out the possibility that sample preparation for sections might affect the chemical form of 1 in cells. Initially, XRF maps were collected to detect areas with high concentrations of Os. As observed previously, 13 peaks corresponding to emissions resulting from excitation of the Os-L, Zn-K and Cu-K edges were observed for samples treated with 1 (Fig. S1 and S2, ESI †), independently of the sample fixation protocol used. XRF elemental maps confirmed that Os was highly concentrated in small areas within the cytoplasm of cells, and not in the nuclei, the latter regions being where high concentrations of Zn were found (Fig. 1 and Fig. S3, S4, ESI †).
Nano-XAS Os L3-edge spectra (probing 2p3/2 electrons; Fig. S1, ESI †) were recorded at ambient temperature for areas where XRF maps showed a high density of Os (red boxes in Fig. 1b, c and Fig. S3, S4, ESI †). The spectra displayed well-defined XANES and short EXAFS regions where some fine structure could be observed (Fig. 2a and Fig. S5, ESI †). XAS spectra were recorded for a series of Os II standards that might represent species found inside cells, based on the in vitro activation experiments (1, 1-OH, 1-Cl and 1-SG; Scheme S1 and Fig. S6, ESI †), 7 as well as Os 0 to Os IV standards to probe the oxidation state of intracellular Os species. Where possible, the S/N ratios were improved by averaging multiple scans obtained on each area (Table 1). Small drifts in the position of the beam were observed between individual acquisitions. This was compensated for by realigning the beam before each scan, but ultimately implied that the spatial resolution of the XAS measurements was slightly worse than the 70 × 100 nm² spot size. Data analysis is complicated for Os L-edges due to the large atomic size and atomic number of Os. 15 Notably, the range of energy of the incident X-rays used for obtaining the XAS spectra and the range of wavenumber k (Å⁻¹) are related, and define the resolution for the measurement of interatomic distances between the metal and its coordination sphere. The longer these ranges are, the higher the precision of the calculated bond lengths. We were able to collect meaningful EXAFS only for wavenumbers up to k = 8 Å⁻¹ (Fig. S5c and d, ESI †), which proved to be too small to provide highly accurate information on the coordination sphere of the complex. Through XANES analysis it was also impossible to differentiate between 1 and its derivatives (i.e., 1-OH, 1-Cl, etc.; Fig. S6, ESI †), and therefore the speciation of 1 inside cells could not be fully determined.
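For context, the standard EXAFS relations below (general textbook expressions, not specific to this study) link the photoelectron wavenumber to the energy above the absorption edge and the usable k-range to the real-space resolution of the fit; with Δk ≈ 8 Å⁻¹ the nominal resolution ΔR is only about 0.2 Å.

$$ k = \frac{\sqrt{2 m_e (E - E_0)}}{\hbar} \approx 0.5123\,\sqrt{E - E_0\,[\mathrm{eV}]}\ \text{\AA}^{-1}, \qquad \Delta R \approx \frac{\pi}{2\,\Delta k} $$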
Nonetheless, XANES spectra of Os standards with oxidation states between 0 and +IV were significantly different (Fig. 2b and Fig. S7, ESI †). The small differences between the XANES of 1 and its analogues (Fig. S6, ESI †) suggest that the changes observed for the Os 0 to Os IV standards should be related mainly to the oxidation state of the metal centres, and not to possible contributions from their coordination spheres. Interestingly, the spectra obtained from areas within cells appeared to be intermediate between Os II and Os III by comparison with the standards (Fig. 2c and Fig. S8-S16, ESI †). Moreover, Linear Combination Fitting (LCF, using the Os II and Os III standards) revealed high spatial heterogeneity, with different contributions of Os II and Os III oxidation states to the multiple spectra recorded (even when the areas studied were found within the same cell; Fig. 1, Table 1 and Fig. S4, ESI †). Between 0-30% Os II and 70-100% Os III species were found in the different areas analysed (Table 1). However, the overall ratio of Os II /Os III species found in each sample was constant and independent of the fixation method used (ca. 10% Os II and 90% Os III; Fig. 2d). Unfortunately, continuous irradiation of a solid sample of 1 for over 30 min using a microfocused beam with a flux density similar to the one used in our studies suggested that beam damage could cause some of the oxidation observed (Fig. S17, ESI †). The XRF elemental mapping involved irradiation of the samples for much shorter times (dwell time 1 s), avoiding unwanted oxidation. A higher number of scans improved the S/N ratios, but, in CFD samples, the irradiation time also appeared to correlate with the observation of increasing quantities of Os III in the areas studied (as calculated by LCF from standards; Table 1 and Fig. S18, ESI †). For sectioned cells (TS), in contrast, the relative quantities of Os II and Os III species were related to the spatial localisation of the cellular area studied, and not to the number of scans obtained (Fig. 3, Table 1 and Fig. S18, S19, ESI †). It was not possible to assess this on CFD samples since we analysed only one area per cell. Hence we cannot rule out the possibility that some beam damage might have occurred when multiple scans were obtained from individual areas of the cells (to improve the S/N ratio of XANES spectra). Nevertheless, this beam damage is not responsible for the differences observed in the speciation of Os, which is related to cellular localisation.
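A minimal sketch of such a two-component linear combination fit is given below; the spectrum arrays, the common energy grid, and the use of non-negative least squares are assumptions about the general LCF procedure, not the exact analysis pipeline used in this study.

```python
import numpy as np
from scipy.optimize import nnls

def lcf_two_standards(mu_sample, mu_os2, mu_os3):
    """Fit a normalised XANES spectrum as a non-negative combination of Os(II) and
    Os(III) standard spectra recorded on the same energy grid."""
    A = np.column_stack([mu_os2, mu_os3])   # design matrix built from the two standards
    weights, residual = nnls(A, mu_sample)  # non-negative least-squares fit
    fractions = weights / weights.sum()     # normalise to fractional speciation
    return fractions, residual              # e.g. array([0.10, 0.90]) for 10% Os(II), 90% Os(III)
```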
Overall, these studies show that meaningful nano-XAS spectra can be obtained from ovarian cancer cells treated with biologically-relevant concentrations of an Os-based candidate anticancer drug. Although no information on the coordination sphere of the complex could be obtained, a heterogeneous distribution of Os II and Os III species within the treated A2780 cells was observed. Such spatial variations in the speciation of the drug are interesting, and might be due to the presence of: (1) different degrees of oxidation of 1 within the areas studied, or (2) an intracellular redox cycle as part of the mechanism of action of 1. 16 A similar redox cycle involving Os II /Os III species could explain the ROS increase observed when cells are treated with 1. However, additional experiments are needed to fully understand the mechanism responsible for the intracellular generation of Os III species, and its importance in the anticancer mechanism of action of 1.
Fig. 2 (partial caption): (Os II) and OsCl3 (Os III) standards, and spectra collected in positions CFD-1 and TS-5 (green and red dotted lines show maxima of secondary peaks for the Os II and Os III standards); and (d) average ratio of Os II and Os III species found in CFD and TS samples. Scan: 10.82-11.12 keV; 1 eV step; 3 s accumulation; 70 × 100 nm² beam size.
Conflicts of interest
There are no conflicts to declare.
Fig. 3 (partial caption): The ellipses indicate areas with 100% Os III (purple) or 20% Os II and 80% Os III species (yellow).
| 2,618 | 2019-06-13T00:00:00.000 | ["Physics", "Chemistry"] |
The Virtual Common Space of a Network for the Scholarship of Teaching and Learning (SoTL) in an Academic Department
Traditionally, undergraduate curriculum committees, consisting of appointed faculty and student representatives, have served as the sole departmental vehicle for investigating, discussing and promoting the scholarship of teaching and learning (SoTL) within an academic department. However, with the universal demand for greater accountability on all aspects of evidence-based teaching and on the totality of student learning and career outcomes, some academic departments have encouraged the formation of additional organizations to support their SoTL mandate. In the Department of Human
Introduction
Over the past decade, there has been an increasing call for change in the way universities view and enact their teaching and research mandates. Within the teaching domain, voices at the local university [1] and at the national/international [2] levels have eloquently articulated the need for a more accountable, scholarly and learner-centered approach to undergraduate education. Within the research domain, the major Canadian research funding agencies, also known as the Tri-council agencies (SSHRC, NSERC, CIHR) have made it clear that grantees are required to go beyond the discovery of knowledge and produce significant, accountable, scholarly activity in knowledge transfer and translation (i.e., mobilization/exchange of knowledge, also known as KTT). The "slow explosion" of change is underway at Canadian universities [3,4] and it is affecting the professional lives of all members of the university community.
These new dimensions of scholarship follow on the seminal work of Boyer [5], which called for an infusion of scholarship into all aspects of the professional lives of faculty at universities. While the proposed changes are highly desirable for the education of undergraduate and graduate students and for society at large, faculty (in the natural sciences disciplines in particular) are generally not trained in graduate school or post-doctoral activities to be competent in the scholarship of teaching and learning or of knowledge transfer and translation [6][7][8]. With pressures mounting for rapid change in universities, how can existing faculty, staff and students facilitate and guide that change? How can they create a multi-dimensional common space for learning at the grassroots level (in order to foster and support professional development in these new areas of scholarship) and have the institutional infrastructure and policies needed to encourage and reward this activity [9]?
Traditionally, undergraduate curriculum committees, consisting of appointed faculty and student representatives, have served as the sole departmental vehicle for investigating, discussing and promoting the scholarship of teaching and learning (SoTL) within an academic department. However, with the universal demand for greater accountability on all aspects of evidence-based teaching and on the totality of student learning and career outcomes, some academic departments have encouraged the formation of additional organizations to support their SoTL mandate. In the Department of Human Health and Nutritional Sciences at the University of Guelph (UoG), the approach taken was to combine the interests of the faculty who had a previous interest in the SoTL and the scholarship of knowledge transfer and translation (hereafter to be referred to as SoKTT) in the health sciences with those who had a developing interest in these areas. These faculty members would then form the foundation of a "network" which has been called the K*T3net. The name of the network was formed from K*, which is used increasingly in the academic community as an overarching term which encompasses several related domains which include, but are not limited to, knowledge brokerage, knowledge management, knowledge transfer and translation [10]. The second part of the network name (T3) represents knowledge transfer, translation and teaching.
The central common space of the network is a virtual space within an LMS platform already in use by the university for course websites (specifically, the LMS in use at the UoG is Desire2Learn (D2L, Kitchener, ON, CAN)). The K*T3net virtual space (hereafter referred to as the K*T3net site) is accessed by all faculty members in the network and by a growing number of staff and senior PhD students in the department. The features and potential uses of the K*T3net site will be discussed in this paper.
The K*T3net
The purposes and functions of the K*T3net are defined in the Founding Statement, which is made available to all members through the homepage of the K*T3net site (Figure 1). B. The K*T3net is designed to link the Department activities in this broad domain to similar groups currently forming in other departments/colleges on the University of Guelph and University of Guelph-Humber campuses and to national and international organizations that support and promote these aspects of the collective academic endeavour.
C. The K*T3net is designed to foster excellence in these areas of research, scholarship and creative expression in the health sciences with emphasis on nutritional and nutraceutical sciences and human kinesiology.
All faculty, staff and graduate students within the department who were involved or interested in SoTL or SoKTT were invited to join K*T3net and new members are welcome at any time. The departmental chair, associate chair, graduate curriculum committee chair and undergraduate curriculum committee chair are members, in part to ensure that any consensus views from within the network activities and discussions can be readily made available to department members who choose not to participate in the network. Interested academics from outside the department and/or outside of the University of Guelph, who share expertise in teaching nutrition and nutraceutical science (NANS), human kinesiology, or a closely related field, have also been added as members of K*T3net. Currently, the membership of K*T3net includes faculty at the University of Guelph-Humber, York University and the University of Waterloo School of Pharmacy.
The Centre for Open Learning and Educational Support (OpenEd) is the central organization of teaching support at UoG. It provides assistance with educational development and has a broad focus on SoTL. They are available to assist all faculty and graduate students, regardless of their research background or other research practices, with the development of SoTL projects as well as the application of best practices in the classroom. The Office of Research at the University of Guelph is the central organization of knowledge translation and transfer (KTT) support for academic departments that do not have traditional extension divisions. It offers some assistance in SoKTT, particularly to holders of tri-council and Ontario Ministry of Agriculture, Food and Rural Affairs (OMAFRA) grants; however, the main focus of this office is to provide KTT support. This office, along with OpenEd, provides a great deal of general support in SoKTT and SoTL; however, an ongoing issue for many faculty members is that they are often unable to attain discipline-specific support, and this can be a substantial barrier, especially for those who are new to this area of research. It can be extremely helpful, especially to a researcher new to the areas of SoTL and SoKTT, to have access to individuals with a similar background who also have an understanding of how to approach a SoTL/SoKTT project in the field. This is especially important in the identification of potential collaborators, and the design of projects surrounding specific courses or subjects. For example, if a member were interested in examining the potential uses of virtual physiology laboratories in the classroom, this may be out of the realm of expertise for an educational developer in OpenEd whose background is in the social sciences. A goal of the K*T3net is to provide discipline-specific, local support to its members and the department, while still functioning within the overarching umbrellas of OpenEd and the Office of Research. The relationship of the K*T3net to central organizations which support SoTL and SoKTT is shown in Figure 2.
The Virtual Common Space
An LMS site was chosen as the common space of our departmental network. Desire2Learn (D2L) is an LMS used by over 650 institutions worldwide [11]. It is typically used as an online management system for individual courses (i.e., a course website). D2L was already in use as the LMS for courses at UoG; however, the use of an LMS site as the common space of a network of this nature is a novel application of the software system. The perceived benefits of choosing an LMS site as the common space for the K*T3net are:
- Ease and speed of set-up (total estimated set-up time 10 person-hours)
- No start-up or maintenance costs, no space requirements
- Local professional support for the software application
- System sustainability (main use of the software is for UoG classes)
- Faculty, staff and student familiarity with the system
- Large and flexible-format storage capacity for multi-year use
- System security for information storage and discussions
- Ability to give users varying types of access (read-only, etc.)
- Ability to invite users who do not use D2L at their home institution
While many of the same benefits could be achieved through an open public forum, such as a wiki or Facebook, an LMS offers several advantages which these public sites cannot. The added security of a private site like D2L, into which all members must be invited by an administrator of the site, protects personal information and discussion topics from being accessed by outside persons. This exclusivity helps ensure that members of the network feel comfortable sharing information related to their teaching and research. While public forums do increase the potential of other like-minded individuals finding the site and becoming involved, this is still outweighed by the many above-mentioned benefits of an LMS.
Features of K*T3net Virtual Space
When members first open the homepage of the K*T3net, they have simple access to all components of the virtual space. Some components are accessible through links on a navigation bar (i.e., discussion board, member list); however, it was the goal of the founding members to make the majority of the content (including journal articles, conference proceedings, videos, and any other interesting information related to SoTL and SoKTT, see Section 2.3.1.) available directly on the homepage (Figure 3). The founding statement of K*T3net is posted centrally on the homepage, and it includes the aims as well as a link to the initial presentation proposing the network. The presentation provides a more detailed description of all that the network encompasses and is a useful tool for new members to learn of the network's origins and objectives. The new universal learning outcomes proposed by the University of Guelph (which include an enhanced focus throughout the institution on evidence-based teaching and learner-centredness) are also posted in this section of the homepage as they are an important resource which faculty and students may wish to refer to often, and they also serve as the basis for the objectives of K*T3net. Although there is a link in the navigation bar to the Content section of the site, this was also linked directly to the homepage so members can easily access new postings (Figure 3), as this was considered to be one of the most important components of the site. The details of the Content section are discussed in further detail below (see Content).
The final components of the homepage are the widgets for online news feeds regarding SoTL and SoKTT. We have used these widgets previously in undergraduate courses to provide news updates in relevant areas to biology students. The widgets are direct links to news feeds from education websites and are immediately updated when a new story is posted to the host site. At the time of publication there were two widgets on the K*T3net homepage: (1) The Chronicle of Higher Education, and (2) University Affairs. These widgets were added to keep K*T3net members aware of breaking news in higher education and occasionally provide interesting topics for discussion within the network.
Content/Data Storage
The Content section is subdivided into different topics, such as SoTL, KTT, and quotes, to make posting and retrieving information as simple as possible. All members are able to post content and this can include journal articles, conference proceedings or any other information that they find useful or interesting. As the site grows, the Content is further subdivided to allow more areas of interest to be added and easily accessed by members. For the more integrative topics, such as the Quote Garden, simple word documents are set up chronologically and members can open the document, add their new information, and repost, after which all members will see the updated version of the document when they enter the site. This was thought to be a more user-friendly method than adding a separate document for each quote since members can open the document and read a month's worth of information quickly instead of opening several separate documents.
Also included in the Content section is a single page document known as the Research Cloud. This document provides an organized subset of lists of the peer-reviewed journals currently in existence in several different areas of SoKTT and SoTL, including those with a very broad scope as well as those that are specific to science and health-related disciplines. This is meant to provide an initial picture of the information available, as well as where one could publish, for those members who are relatively new to this area of research.
Discussion Board
The Discussion Board is accessible through a link on the main navigation bar from the homepage. It is divided into Forums, each of which is focused on a specific area of discussion. Within each Forum, Topics are created and these represent each separate discussion that is taking place. Within a Discussion Topic members can post a new comment or reply to someone else's and these appear as either separate (new thought/comment) or combined (reply to a comment) threads (Figure 4). Importantly, a feature of D2L (and many other LMS platforms) allows members to comment anonymously, meaning that no other member can see who made that specific comment. We felt this was an important component of K*T3net as many of the members are supervisors, students or colleagues to several other members, and it was assumed that certain topics of discussion may arise where members may not feel comfortable attaching their identity to a comment. Thus the option of posting anonymously was determined to be invaluable to encourage an open environment where all members felt safe and comfortable sharing their thoughts. Under the "Classlist" section of the K*T3net site is the roster of members, along with their e-mail addresses and their "access level" on the site. Under the "Blog" section of each individual listed on the "Classlist", each member is being asked to place a short biographic sketch, followed by an indication of examples of completed projects and a list of ongoing and planning-stage projects in SoTL and/or SoKTT. This is designed to help the members of the K*T3net become more comfortable in reporting and promoting their activities in these domains; faculty traditionally do not report all of their activities in SoTL and SoKTT on their faculty activity report for tenure and promotion. Listing ongoing and planning stage projects will also encourage and indirectly facilitate collaboration between network members.
Level of Access
The traditional use of an LMS is as a course website; therefore the levels of access are organized by different levels of the teaching team as well as students and guests. All members of K*T3net were given teaching assistant-level access, meaning they are able to post and retrieve material in the Content section, create Discussion Board topics and comment on all Discussions. Two of the founding members were given instructor-level access, allowing them to alter the physical design of the site and add new members. They are also able to remove Discussion Board postings if necessary. This extra level of control was assigned to only two members as a means of maintaining the organization of the site as well as security. In the fall of 2012, MSc students and senior undergraduate students in a SoTL/SoKTT course will be given student level access (can post to the Discussion, but cannot create new Forums or Topics) to the site.
The virtual common space of the K*T3net has been operational for approximately six months at the time of submission. A number of developments over this time period suggest that the site is serving successfully as a local "teaching commons" [12] for SoTL and SoKTT, that is, a community of individuals with similar goals who can come to this virtual common space to share ideas and discuss related topics. As a result of the growing interest in SoTL and SoKTT in the department of Human Health and Nutritional Science, a new undergraduate course has been developed which will allow senior undergraduate students to conduct independent scholarship in these areas. The course, called "Teaching, Learning and Knowledge Transfer", will be an elective, upper year course and all students in the course will be invited to become members of K*T3net and have read-only access (student level) to the K*T3net site. Another novel use of the site is that students will be able to continue as members of the K*T3net, and have access to the site, for an extended period of time after completing the course. A graduate course is also being developed, as there is a rapidly expanding group of graduate students in the department who are involved in SoTL and SoKTT research.
Future Direction and Conclusions
The K*T3net has taken on several other functions since its inception. It has a central role in departmental projects on learning to "Bloom" (evaluate according to Bloom's taxonomy) examinations and other evaluative tools in the undergraduate classes of the department [13,14]. It is sponsoring a departmental effort to define "creativity" [15,16] in the fields of nutritional and nutraceutical sciences and of human kinesiology. Outcomes of both of these activities will be posted and stored on the K*T3net site. An announcement has been posted on the site stating that network members are willing to help evaluate research/scholarship in SoTL and SoKTT, including pre-reviewing manuscripts and grants that will be submitted. Members can also help review the scholarship quality of submissions in SoTL and SoKTT during tenure and promotion deliberations.
Faculty members and students who are interested in applying best practices in the classroom but who are not interested in taking on research projects in SoTL and SoKTT can also gain a great deal of knowledge and assistance through the K*T3net site. Members post many articles on discipline-specific teaching practices and some discussion threads have addressed specific examples of the application of new techniques in the classroom.
In addition to helping disseminate best practices within the department for teaching and outreach purposes, the K*T3net will also help guide collaborative research projects in SoTL and SoKTT. Joint applications for research projects are currently being prepared by network members, and it is a goal of the founding members to help connect members who are interested in similar areas of research in SoTL and SoKTT. Through the constant provision of new information and the involvement of members in discussions, new ideas and hypotheses are generated in this virtual common space, and potential collaborators are often identified through discussion postings or other posted content. It appears the K*T3net can provide the basic information and a virtual environment for faculty, staff and students to move forward in research and scholarship as they strive to become "expert teachers" [17] in the classroom and beyond.
The K*T3net provides support to its faculty and student members in the areas of SoTL and SoKTT. It has improved knowledge translation of research in the areas of SoTL and SoKTT within the department and beyond, and has helped foster collaborations in these research areas. The virtual common space of the K*T3net has been very useful in providing a secure and convenient online meeting space for all members.
Figure 1 .
Figure 1. Founding Statement of the K*T3net as it appears on the K*T3net homepage.
Figure 2 .
Figure 2. The relationship of K*T3net to central organizations which provide support in SoTL research within the University of Guelph. Blank shapes represent colleges and departments that are not involved in K*T3net. MCB, Molecular and Cellular Biology; HHNS, Human Health and Nutritional Sciences; IB, Integrative Biology; CBS, College of Biological Sciences.
Figure 4 .
Figure 4. Organization page of the discussion board (A), and an example of a discussion board thread (B) on the K*T3net virtual space. | 4,596.2 | 2013-05-15T00:00:00.000 | [
"Education",
"Computer Science"
] |
State of Charge Estimation of Li-Ion Battery Based on Adaptive Sliding Mode Observer
As the main power source of new energy electric vehicles, the accurate estimation of the State of Charge (SOC) of Li-ion batteries is of great significance for accurately estimating the vehicle's driving range, prolonging the battery life, and ensuring the maximum efficiency of the whole battery pack. In this paper, the ternary Li-ion battery is taken as the research object, and the Dual Polarization (DP) equivalent circuit model with temperature-varying parameters is established. The parameters of the Li-ion battery model at ambient temperature are identified by the forgetting factor least square method. Based on the state space equation of power battery SOC, an adaptive Sliding Mode Observer is used to study the estimation of the State of Charge of the power battery. The SOC estimation results are fully verified at low temperature (0 °C), normal temperature (25 °C), and high temperature (50 °C). The simulation results of the Urban Dynamometer Driving Schedule (UDDS) show that the SOC error estimated at low temperature and high temperature is within 2%, and the SOC error estimated at normal temperature is less than 1%. The algorithm has the advantages of accurate estimation, fast convergence, and strong robustness.
Introduction
Li-ion batteries have gradually become the main power source of new energy electric vehicles due to their high energy density, long cycle life, low self-discharge rate [1], and good safety [2], and they determine the cruising range of the vehicle. State of Charge (SOC) characterizes the remaining battery capacity, which is the core content of Battery Management Systems (BMSs) and an important indicator to assess the current status of batteries; high-precision SOC estimation is a must for power battery pack control strategies [3,4]. However, the SOC as a state quantity cannot be measured directly [5] and is affected by many factors. Therefore, it must be estimated approximately by measuring some other physical quantities such as voltage, current, etc. [6], and using a mathematical model or algorithm [7]. Accurate estimation of SOC is an important prerequisite for multiple battery control strategies [8]. It is important for accurately estimating vehicle mileage, prolonging battery life, preventing single batteries from overcharging or overloading, ensuring the maximum efficiency of the entire battery pack [9], and improving the economy of batteries [10].
Currently, the main methods of battery SOC estimation include the current integration method [11], the open-circuit voltage method [12], machine learning algorithms [13], the Kalman Filter algorithm [14], and so on. The estimation accuracy of the current integration method depends greatly on the sampling frequency and the accuracy of the current sensor hardware. The open-circuit voltage method requires a long period of static time (up to several hours) to ensure that the port voltage of the battery is exactly the open-circuit voltage of the battery [15]. It is difficult to apply to real-time estimation, and the estimation method is open-loop estimation, which has a low accuracy. The machine learning algorithms include the Artificial Neural Network (ANN) [16], Fuzzy Logic Control (FLC) [17], and the Support Vector Machine (SVM), among others.
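As a rough illustration of the current integration (coulomb counting) idea mentioned above, the sketch below simply accumulates the measured current over time; the function and variable names are hypothetical and the snippet is not taken from the paper.

```python
def coulomb_count(soc0, currents, dt, capacity_ah, eta=1.0):
    """Current-integration (coulomb counting) SOC estimate.

    soc0        : initial SOC (0-1); errors here propagate, hence open-loop
    currents    : iterable of measured currents in amperes, positive on discharge
    dt          : sampling interval in seconds
    capacity_ah : usable capacity in ampere-hours
    eta         : coulombic efficiency (ideally 1)
    """
    capacity_as = capacity_ah * 3600.0           # convert Ah to ampere-seconds
    soc = soc0
    for i in currents:
        soc -= eta * i * dt / capacity_as        # SOC drops as charge is drawn
    return soc

# Example: a 2.6 Ah cell discharged at 1.3 A for one hour loses ~50% SOC
print(coulomb_count(1.0, [1.3] * 3600, 1.0, 2.6))
```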
Second-Order Equivalent Circuit Model of Battery
The second-order RC equivalent circuit model can accurately describe the dynamic [25] and static characteristics [26] of the battery, with low complexity [27,28] and easy engineering implementation [29,30]. Considering the influence of different ambient temperatures on the SOC of batteries, an improved model of parameters changing with temperature, the DP equivalent circuit model with temperature changing parameters, is established. The DP equivalent circuit model is shown in Figure 1. In Figure 1, UOC is the OCV and R0 (Tamb) is the ohmic internal resistance; R1 (Tamb) and R2 (Tamb) are the electrochemical polarization resistance and the concentration polarization resistance, respectively; C1 (Tamb) and C2 (Tamb) are the electrochemical polarization capacitance and the concentration polarization capacitance, respectively. According to Kirchhoff's Law, the calculation formula for the terminal voltage of the DP equivalent circuit model is:
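The terminal-voltage equation itself is not reproduced in this extract. The standard form implied by the DP circuit of Figure 1 (a hedged reconstruction using the quantities defined above, with I the load current, and not necessarily the paper's exact Equation (1)) is:

```latex
\begin{aligned}
U_t &= U_{OC} - U_1 - U_2 - I\,R_0(T_{amb}),\\
\dot{U}_1 &= -\frac{U_1}{R_1(T_{amb})\,C_1(T_{amb})} + \frac{I}{C_1(T_{amb})},\\
\dot{U}_2 &= -\frac{U_2}{R_2(T_{amb})\,C_2(T_{amb})} + \frac{I}{C_2(T_{amb})},
\end{aligned}
```

where U1 and U2 are the voltages across the two RC pairs and Ut is the terminal voltage.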
Parameter Identification of the Battery Model
The 18,650 ternary lithium-ion battery with a nominal capacity of 2600 mAh and a nominal voltage of 3.7 V was used in the experiment. The HPPC discharge experiment method was used to obtain the relationship between the open circuit voltage of the battery and SOC. The basic method is to discharge 5% of the battery's power every 1.5 h after the battery is fully charged. After the voltage is stabilized, the open circuit voltage corresponding to the current SOC is obtained. Finally, the corresponding data of OCV and SOC are fitted to obtain the relationship expression. The experiment is divided into the following six steps:
(a) Put seven batteries in the same healthy state in the incubator at −10, 0, 10, 25, 30, 40, and 50 °C for 2 h;
(b) Use the power battery performance test platform to discharge the single battery with 0.2 C current to the cut-off voltage of 2.5 V;
(c) After the battery is left for 2 h, charge the battery in the way of constant current first and then constant voltage according to the charging standard. When the battery is charged to 4.2 V, it is in the fully charged state by default, and the SOC is recorded as 1;
(d) Set the thermostat to −10, 0, 10, 25, 30, 40, and 50 °C, and let the battery stand for 2 h;
(e) In the constant temperature box, discharge the battery with 0.2 C current. After discharging to 5% of the standard capacity, let the battery stand for 2 h, and record the voltage at this time as the open circuit voltage;
(f) Repeat step (e) until the cut-off voltage is 2.5 V.
The IT 8500 discharge meter and UK-150G thermostat are used to monitor the discharge current and terminal voltage of the battery in real time.
OCV Curve Fitting
At −10, 0, 10, 20, 25, 30, 40, and 50 °C, the fully charged battery was continuously discharged at 0.2 C between the equal interval points of SOC. For every 5% decrease in SOC, the battery open circuit voltage was recorded after standing for 1.5 h. The OCV-SOC relationship curve at different temperatures is shown in Figure 2. As shown in Figure 2, through the above experimental process, the open-circuit voltage data collected at various temperatures are fitted to obtain a three-dimensional space model diagram. Figure 2 clearly shows the relationship between the three. It can be seen from the changes in the three parameters in the figure that under the same temperature, the larger the SOC value, the larger the open circuit voltage. Additionally, the increasing trend of open circuit voltage is more obvious, and its range is 3~4.2 V. When SOC < 20% and T > 10°C, at the same SOC, the open circuit voltage shows a decreasing trend with the increase in temperature, and the decreasing trend is obvious when the SOC approaches 0. When SOC > 20% and T < 10°C, the open circuit voltage changes at the same SOC. Therefore, to build a more accurate battery model, it is necessary to consider the effect of temperature on the open circuit voltage.
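The paper does not state which fitting function was used for the OCV-SOC relationship expression; purely as an illustration, a low-order polynomial fit of hypothetical (SOC, OCV) pairs at one temperature node could look like this:

```python
import numpy as np

# Hypothetical (SOC, OCV) pairs for one temperature node, for illustration only
soc = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 1.00])
ocv = np.array([3.30, 3.55, 3.68, 3.82, 3.98, 4.18])   # volts

coeffs = np.polyfit(soc, ocv, deg=3)   # cubic OCV(SOC) model (order is an assumption)
ocv_model = np.poly1d(coeffs)

print(ocv_model(0.5))                  # predicted OCV at 50% SOC
```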
Component Parameter Identification
Under the ambient temperature of [−10, 0, 10, 20, 25, 30, 40, 50], the Forgetting Factor Least Square (FFLS) is used to identify the parameters of the Li-ion battery model fused with ambient temperature. The calculation process of battery model parameter identification based on the forgetting factor least square method is as follows.
The system transfer function (2) is obtained by Laplace transform of the terminal voltage Formula (1). The inverse bilinear rule is then applied to the transfer function of Equation (3). The sampling error is defined as e(k), and, introducing the forgetting factor λ, the recursive formula of FFLS is obtained. R0, R1, R2, C1, and C2 can be obtained by comprehensively comparing the corresponding coefficients in steps 1 and 7, and the identification results are shown in Table 1.
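For readers unfamiliar with the recursion, a generic forgetting-factor recursive least squares update is sketched below; the regression vector construction and the forgetting factor value are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ffls_update(theta, P, phi, y, lam=0.98):
    """One step of forgetting-factor recursive least squares.

    theta : current parameter estimate, shape (n,)
    P     : current covariance matrix, shape (n, n)
    phi   : regression vector built from past voltages/currents, shape (n,)
    y     : measured output (terminal-voltage-related quantity) at step k
    lam   : forgetting factor, 0 < lam <= 1
    """
    e = y - phi @ theta                          # a priori prediction error
    K = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + K * e                        # parameter update
    P = (P - np.outer(K, phi) @ P) / lam         # covariance update with forgetting
    return theta, P
```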
Model Validation
To verify the accuracy of the model, this paper selected the UDDS (Urban Dynamometer Driving Schedule) working condition for verification, and the working current is shown in Figure 3. The terminal voltage error curve at the normal atmospheric temperature of 25 °C is shown in Figure 4. Figures 5 and 6 show the terminal voltage error curves of UDDS operating at low (<25 °C) and high (>25 °C) temperatures.
To verify the accuracy of the model, this paper selected the UDDS (Urban dynameter Driving Schedule) working condition to verify, and the working current is shown in Fig Compared with the working current waveform of UDDS, the terminal voltage error at the small current section (current ≤ 1A) is mostly maintained at ±100 mV, and the volt age error at the large current section (current > 1A) can reach 150-200 mV. It can be seen from Figure 5 that the terminal voltage error at −10 °C fluctuated greatly, and the termina voltage error of 0 °C was greatly improved, which can maintain ±50 mV in the small cur rent section, and the error range is 50-100 mV in the large current section. As shown in Figure 6, when the ambient temperature is more than 25 °C, the fluctuation range o Compared with the working current waveform of UDDS, the terminal voltage erro at the small current section (current ≤ 1A) is mostly maintained at ±100 mV, and the volt age error at the large current section (current > 1A) can reach 150-200 mV. It can be seen from Figure 5 that the terminal voltage error at −10 °C fluctuated greatly, and the termina voltage error of 0 °C was greatly improved, which can maintain ±50 mV in the small cur rent section, and the error range is 50-100 mV in the large current section. As shown in Figure 6, when the ambient temperature is more than 25 °C, the fluctuation range o The model input was defined as the working current and temperature. Figure 3 is the working current diagram in this test environment. The output is the terminal voltage value estimated by the model. The terminal voltage value can describe the polarization phenomenon of the battery and conform to the voltage characteristics of the battery. Therefore, the difference between the output terminal voltage of the model and the real terminal voltage can evaluate the accuracy of the power battery model. The UDDS working condition experiment was carried out on the power battery at different temperature nodes, and the following conclusions were drawn.
Compared with the working current waveform of UDDS, the terminal voltage error at the small current section (current ≤ 1 A) is mostly maintained at ±100 mV, and the voltage error at the large current section (current > 1 A) can reach 150-200 mV. It can be seen from Figure 5 that the terminal voltage error at −10 °C fluctuated greatly, and the terminal voltage error at 0 °C was greatly improved, which can maintain ±50 mV in the small current section, and the error range is 50-100 mV in the large current section. As shown in Figure 6, when the ambient temperature is more than 25 °C, the fluctuation range of the battery model error can be basically kept within ±20 mV, and even the maximum error of terminal voltage in the large current section is about 50 mV. It can be seen that the overall error of the battery model is small and the DP equivalent circuit model has good adaptability.
SOC Estimation Based on Adaptive Sliding Mode Observer
The adaptive Sliding Mode Observer [31] can estimate the SOC of the lithium-ion battery in electric vehicles. Whether the initial value of SOC is known or not, this method can estimate the SOC with high accuracy and less computation only by using the measured current and voltage values. It can overcome the nonlinearity, external interference, and measurement noise of the battery model, which makes it suitable for complex operating conditions. To adapt to the application of the adaptive Sliding Mode Observer, the state space equation of the DP equivalent circuit model is established. The voltages u1 and u2 across R1 and R2 are chosen as the two state quantities of the system, so x1 = u1, x2 = u2, and x3 = SOC; the input u is the current, which is positive when discharging and negative when charging. By writing the corresponding relationship between voltage and current according to the circuit principle, the state-space model of the cell can be obtained as shown in Equation (11). In the formula, η is the discharge efficiency (ideally 1), and f(x3) is the relationship between the open circuit voltage and SOC of the battery.
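Equation (11) is not reproduced in this extract; with the state variables defined above, a standard DP-model state-space form (a hedged reconstruction, in which Q_n denotes the nominal capacity, a symbol not defined in the extract) reads:

```latex
\begin{aligned}
\dot{x}_1 &= -\frac{x_1}{R_1 C_1} + \frac{u}{C_1}, \qquad
\dot{x}_2 = -\frac{x_2}{R_2 C_2} + \frac{u}{C_2}, \qquad
\dot{x}_3 = -\frac{\eta\, u}{Q_n},\\
y &= f(x_3) - x_1 - x_2 - R_0\, u .
\end{aligned}
```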
According to the state space equation of the DP equivalent circuit model, an adaptive Sliding Mode Observer is designed. Suppose that x̂_i (i = 1, 2, 3) are the states of the estimated system based on the SMO and ŷ is the output of the estimated system. The structure of the designed estimator is given in Equation (12), where e_y = y − ŷ is the systematic error of the SOC estimation, and L = [l_1, l_2, l_3]^T and P = [ρ_1, ρ_2, ρ_3]^T are, respectively, the Luenberger feedback gain and the sliding-mode variable structure feedback gain. The class symbolic function sgn_a(e_y) = e_y/(|e_y| + λ) is substituted into Equation (12), and the observer gain matrix is defined accordingly. According to Equations (11) and (12), the observer equations can then be written out. To analyze the convergence of the observer, the state estimation error is defined for each state (i = 1, 2, 3), and the estimation error system is formed. According to Lagrange's mean value theorem, the output error equation is written, a Lyapunov function is chosen, and a sufficient condition for the stability of the observer is derived; Equation (19) is obtained from Equations (16) and (18). The sufficient condition for the stability of the above formula is that the matrix H is positive definite. Let m_1 = 1/(R_1 C_1) and m_2 = 1/(R_2 C_2); the sufficient condition for the convergence of the observer then follows.
Since the SOC varies from 0 to 1, it can be determined that f′_oc(ξ) > 0 always holds, so f_oc(x) is a monotonically increasing function and its derivative is bounded, that is, 0 < f′_oc(0) ≤ f′_oc(1).
Set f′_oc(ξ) to its boundary value and scale the third inequality in Equation (21) to get Equation (22). In this way, the sufficient conditions for the state gain matrix parameters to converge are obtained. Combined with the identified model parameters, the parameter function expressions m_1 = 1/(R_1(T)C_1(T)) and m_2 = 1/(R_2(T)C_2(T)) can be derived. By substituting these parameter forms into Equations (20) and (21), the range of s_i is obtained from the calculation of l_i and ρ_i. Set l_1 = 0.28 and ρ_1 = 0.03 as initial values to satisfy the first condition in Equation (21). With λ = 0.1, the range of s_1 can be obtained. Substituting the range of s_1 into the second inequality of Equation (21), the range of s_2 follows; l_2 > 0.062 can be solved and l_2 = 0.8 can be selected. From 0 < ρ_2 < 0.083, l_3 > −1.12, and 0 < ρ_3 < 0.086, the values ρ_2 = 0.21, l_3 = −0.186, and ρ_3 = 0.032 can be chosen. The results of the feedback gains are shown in Table 2. Each sampling point is adjusted automatically to obtain the gain under the condition of the model parameter value by setting the observer gain parameters adaptively, so that the state feedback gain of the SMO satisfies all of the above inequalities. Therefore, the design of an adaptive Sliding Mode Observer for SOC estimation is completed. The SOC estimation process based on the adaptive SMO is shown in Figure 7.
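A minimal discrete-time sketch of the observer loop described above is given below. It is only an illustration of the structure (Luenberger correction plus a smoothed sliding-mode term), not the authors' implementation: the gain vectors, the ocv callable, and the parameter names are assumptions.

```python
import numpy as np

def sgn_a(e, lam=0.1):
    """Smoothed sign function e / (|e| + lambda), used in place of sgn(e)."""
    return e / (abs(e) + lam)

def smo_step(x_hat, i_k, v_meas, p, dt=1.0):
    """One discrete step of a sliding-mode SOC observer (illustrative only).

    x_hat  : current state estimate [u1_hat, u2_hat, soc_hat]
    i_k    : measured current (positive on discharge)
    v_meas : measured terminal voltage
    p      : dict with R0, R1, R2, C1, C2, Qn (ampere-seconds), eta,
             ocv (callable SOC -> open-circuit voltage),
             L (Luenberger gains, shape (3,)), P (sliding-mode gains, shape (3,))
    """
    u1, u2, soc = x_hat
    v_hat = p["ocv"](soc) - u1 - u2 - p["R0"] * i_k      # predicted terminal voltage
    e_y = v_meas - v_hat                                  # output error drives correction

    du1 = -u1 / (p["R1"] * p["C1"]) + i_k / p["C1"]       # DP-model dynamics
    du2 = -u2 / (p["R2"] * p["C2"]) + i_k / p["C2"]
    dsoc = -p["eta"] * i_k / p["Qn"]

    corr = p["L"] * e_y + p["P"] * sgn_a(e_y)             # Luenberger + sliding-mode terms
    return np.array([u1 + dt * du1, u2 + dt * du2, soc + dt * dsoc]) + dt * corr
```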
Experiments and Result Analysis
To verify the SOC estimation algorithm based on adaptive SMO, the experiments of the constant current discharge condition and UDDS condition under three different ambient temperatures of low temperature (0 °C), normal temperature (25 °C), and high temperature (50 °C) are designed. The initial value of the SOC under the constant current pulse discharge condition and UDDS condition is set to 0.8 in the simulation, while the real SOC starting point of the two conditions is 1. In the discharge process, the SOC measured by the IT8500 discharge instrument is taken as the measured value and compared with the battery SOC estimated value obtained by the estimation algorithm in this paper to verify the effectiveness of the algorithm.
Discharge Experiment Verification and Analysis at Low Temperature
The input ambient temperature is 0 °C, and the initial SOC value is set to 0.8 in the constant current pulse discharge condition and UDDS condition in the SOC estimation using the adaptive Sliding Mode Observer, and the SOC starting point in the discharge experiment to obtain the real value is 1. The SOC estimation results are shown in Figures 8-11.
It can be seen from the graph that the real and estimated SOC values converge in a trapezoidal shape under the condition of constant current pulse discharge. The UDDS working condition is the process of continuously discharging the battery, which is in dynamic change. Although the initial SOC values set in the estimation strategy are different, the estimated value with a large initial error can still converge to the real value in 127 s under the UDDS condition, and the overall deviation is kept within 2%.
Simulation results show that even if the initial SOC set in the estimation strategy is different, it can still converge the estimated value with a larger initial error to the real value in a short time period and keep the overall error within 2%.
Discharge Experiment Verification and Analysis at Normal Temperature
The input ambient temperature is 25 °C, and the initial SOC value is set to 0.8 in the constant current pulse discharge condition and UDDS condition in the SOC estimation using the adaptive Sliding Mode Observer, and the SOC starting point in the discharge experiment to obtain the real value is 1. The SOC estimation results are shown in Figures 12-15.
It can be seen from the simulation results that the SOC estimation strategy at room temperature has faster convergence speed and higher accuracy, and the overall error is within 1%. Compared with the SOC estimation result at low temperature, the error of the SOC simulation results at room temperature is smaller. The main reason is that the internal chemical reaction of the battery at room temperature is in a stable state, so the estimated value at room temperature is very close to the real value, and the error between the two is smaller.
Discharge Experiment Verification and Analysis at High Temperature
The input ambient temperature is 50 °C and the initial SOC value is set to 0.8 in the constant current pulse discharge condition and UDDS condition in the SOC estimation using the adaptive Sliding Mode Observer, while the SOC starting point in the discharge experiment to obtain the real value is 1. The real SOC starting point of the two working conditions is 1, and the SOC estimation results are shown in Figures 16-19.
It can be seen from the simulation results that the SOC estimation strategy proposed in this paper has good adaptability at high temperature, the overall error is within 2%, and the estimated value can still converge to the real value in 146 s.
The estimation error results of three different temperatures (low temperature 0 °C, room temperature 25 °C, and high temperature 50 °C) under constant current discharge and UDDS conditions are shown in Table 3. The adaptive Sliding Mode Observer SOC estimation presented in this paper converges quickly at different temperatures, and the convergence times are all less than 200 s. When the initial SOC value is uncertain or even has a large deviation from the actual value, the adaptive Sliding Mode Observer can make the estimated value converge to the actual value stably, and the estimation effect is good under different initial charging states. The estimation method based on the adaptive Sliding Mode Observer has strong robustness and tracking ability for the state variables and is suitable for constant-current and complex road conditions.
Conclusions
In this paper, the second-order DP equivalent circuit model of a lithium-ion battery was established, and the parameters of the DP model were identified using a discharge experiment and the least square method with a forgetting factor. A SOC estimation algorithm based on an adaptive Sliding Mode Observer was proposed and verified by discharge experiments at different ambient temperatures. The experimental results show that the SOC estimation error of the algorithm is less than 2% at low and high temperatures, and the convergence times are 127 and 146 s, respectively, under UDDS conditions. The SOC estimation error is less than 1% at normal temperature, and the convergence time is 181 s. The algorithm has high accuracy and robustness and meets the requirements of engineering practice.
"Engineering"
] |
A Novel Integrated Prognosis & Diagnosis System for Lung Cancer Disease Detection using Soft Computing Techniques
Nowadays, lung cancer is one of the leading causes of mortality worldwide among men and women. Although there are many treatment options, such as surgery, radiotherapy, and chemotherapy, the five-year survival rate for patients is quite low. However, the survival rate may go up to 54% if lung cancer is identified at an early stage. Therefore, early detection of lung cancer is vital to decrease lung cancer mortality. Medical experts are continuously trying to find the best solution for the early prediction and diagnosis of lung cancer disease. In this research work, an attempt has been made to design and develop a novel integrated soft computing predictive system to handle various types of patients' clinical data to diagnose lung cancer disease. Here, data mining techniques are used to handle the numeric and textual data, image processing techniques are used to handle CT scan images, neural networks are used to train on the lung cancer patient images, and a fuzzy inference mechanism is used to predict the lung cancer stages. This integrated approach results in detection of lung cancer disease with prognosis, with the expert system suggesting a diagnosis for lung cancer disease. Even in cases of small-sized nodules (3-10 mm), the proposed system is able to determine the nodule type with 96% accuracy.
INTRODUCTION
The novel model suggested in this paper is to diagnose lung cancer disease using the techniques of image segmentation and decision making. A Fuzzy Inference System (FIS) is developed and the result is tested by using an artificial neural network. The methodology proposed for lung nodule detection consists of the major components shown in Figure 1. Figure 1 consists of five major components: Knowledge Base, Inference Engine, Working Memory, Database and a User Interface. The knowledge base consists of a set of facts and rules. Here, facts are true statements already tested by the system and rules are written by the user. An input query given at the user interface can be solved by using the rules and facts in the knowledge base.
Working Memory
Working memory consists of the data recently processed by the system. This memory is similar to a cache memory which contains information about the frequently used rules and facts. This is also a very useful block in the prediction system.
Database
In most prediction system designs, a separate database is maintained apart from the existing knowledge base to store the history of old records and other valuable information. In this predictive system design, the database is exclusively used to store the old patient records of lung cancer disease, such as CT images, test and diagnostic reports, etc.
User Interface
User interface is the key component in the system through which user can interact with the predictive system. Here symptoms and patient test reports are given as input to the system and diagnosis and advice given by the expert are taken as output.
Inference Engine
The inference engine is used to infer the data from the knowledge base. The inference engine for the proposed integrated system contains four major independent components: the Numeric Data Component, the Image Data Component, the Imprecise Data Component, and the Training Data Component.
METHODOLOGY
In this novel predictive system design, the inference mechanism methodology is shown in Figure 2. The inference engine of the predictive system contains four major components, each of which handles a different type of feature of the lung cancer patient records. The components used in this methodology are explained hereunder.
Imprecise Data Component
This component is used to handle the imprecise data of the lung cancer patient records. The details are given in Table 1. These features are handled by fuzzy inferencing techniques to determine the stage of the cancer, i.e., one of the four stages.
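The membership functions and rule base of this component are not given in the paper, so the fragment below is only a toy illustration of the kind of Mamdani-style rule evaluation such a fuzzy staging component performs; the symptom scale, membership shapes, and stage mapping are all assumptions.

```python
def ramp_high(x, lo=5.0, hi=10.0):
    """Degree to which a 0-10 symptom score counts as 'high' (0 below lo, 1 above hi)."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def stage_from_symptoms(pain, weight_loss):
    """Toy rule: IF pain is high AND weight loss is high THEN the stage is advanced."""
    firing = min(ramp_high(pain), ramp_high(weight_loss))    # fuzzy AND = min
    return 1 + round(3 * firing)                             # map firing strength to stages 1-4

print(stage_from_symptoms(8, 9))   # prints 3 for these example severities
```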
Imagery Data Component
This component handles the imagery data of the lung cancer patient records; the details are given in Table 1. These features are processed with image processing techniques to detect tumors and their sizes, thereby determining whether a patient has cancer or not.
Text/ Imagery Data Neural Network Component
This component handles the text/imagery data in the input feature set given in Table 1. These features are processed by neural networks to detect whether the patient has cancer or not.
SYSTEM DESIGN
The proposed system handles numeric/categorical/imagery features supplied by users to determine lung cancer malignancy as well as the necessary diagnosis if the patient has lung cancer. The detailed steps of the proposed methodology are mentioned hereunder.
Step 1: User logs into the system with user name and password.
Step 2: The user enters the lung cancer patient record feature set, which contains a combination of textual, imprecise, and imagery features. The feature set of a patient record is given in Table 1.
Step 3: The given feature set is segregated into three feature subsets, which are supplied to the respective components as shown in Figure 2 (a minimal sketch of this segregation step is given after the list of steps).
The feature subsets are:
a) The textual feature set contains the numeric/categorical features of the given feature set, as mentioned in Table 1.
b) The imprecise feature set contains the fuzzy features of the given feature set, as given in Table 1.
c) The imagery feature set contains the CT scan image related data, as mentioned in Table 1.
Step 4: Each component evaluates received feature subsets and generates results based on the inference mechanism discussed above.
Step 5: Decision aggregation is performed by the system, which then suggests the necessary diagnosis/treatment if the patient is found to have lung cancer.
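As an illustration of Step 3, the sketch below shows how a patient record might be segregated into the three feature subsets before being routed to the components. This is a minimal Python sketch under assumed feature names; the actual feature set (F1..F28), names, and component interfaces of Table 1 and Figure 2 are not reproduced here.

```python
# Hypothetical sketch of Step 3: segregating one patient record into the three
# feature subsets. Feature names are illustrative assumptions, not the actual
# Table 1 feature set F1..F28.

TEXTUAL_KEYS = {"age", "gender", "smoking_years"}          # numeric/categorical features (assumed)
IMPRECISE_KEYS = {"pain_in_chest_or_bone", "weight_loss"}  # fuzzy-valued features (assumed)
IMAGERY_KEYS = {"ct_scan_path"}                            # CT scan image data (assumed)

def segregate(record: dict) -> dict:
    """Split one patient record into textual, imprecise, and imagery subsets."""
    return {
        "textual":   {k: v for k, v in record.items() if k in TEXTUAL_KEYS},
        "imprecise": {k: v for k, v in record.items() if k in IMPRECISE_KEYS},
        "imagery":   {k: v for k, v in record.items() if k in IMAGERY_KEYS},
    }

record = {"age": 61, "gender": "male", "smoking_years": 30,
          "pain_in_chest_or_bone": 8, "weight_loss": 12, "ct_scan_path": "scan_001.dcm"}
subsets = segregate(record)   # each subset is then passed to its component (Figure 2)
```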
IMPLEMENTATION
The methodology of this research work is implemented as described in this section.
A trace of the implementation steps is given below.
Step 1: The user logs into the system with user name = xxxx and password = ****.
Step 2: The user inputs the lung cancer patient record, which is a combination of numeric and categorical data. The patient record contains the features F1, F2, ….. F28, as given in Table 1.
Step 3: These features are handled by individual data components in the inference engine.
Step 3(a): The numerical data in the patient records described in Table 1 is handled by the Numeric/Categorical Data Component, which uses data mining concepts.
Input :
If (Age is Medium) and (Gender = Male) and (Hemoptysis is yes) and (pain in chest or bone is high) and (shortness of breath is yes) and (unexpected weight loss is high)
Output: lung cancer = yes
The categorical feature set of a lung cancer patient record, described in Table 2, is also handled by the Numeric/Categorical Data Component using data mining concepts.
For example :
Input :
If (Family History is yes) and (Exposure to Harmful Chemicals is high) and (pain in chest or bone is medium) and (shortness of breath is yes) and (unexpected weight loss is medium)
Output: lung cancer = yes
Step 3(b): The imprecise data features of the patient records described in Table 1 are handled by the Imprecise Data Component using fuzzy inference. For example:
Input :
If (Family History is yes) and (Hemoptysis is yes) and (pain in chest or bone is high) and (Wheezing with sound is yes) and (Dysphagia is yes) and (unexpected weight loss is high)
Output: lung cancer = yes and Stage = 3.
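The fuzzy rule above can be sketched as a Mamdani-style rule evaluation. The membership functions, value scales, and the mapping of rule firing strength to a stage below are illustrative assumptions, not the actual rule base or membership definitions of the proposed FIS.

```python
# Minimal Mamdani-style evaluation of one fuzzy rule (illustrative assumptions only).

def shoulder_up(x, a, b):
    """Rising shoulder membership: 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def rule_stage3(pain, weight_loss_pct, family_history, hemoptysis):
    """IF family history is yes AND hemoptysis is yes AND pain is high
       AND unexpected weight loss is high THEN lung cancer = yes, stage = 3."""
    pain_high = shoulder_up(pain, 5, 8)              # pain scored 0-10 (assumed scale)
    loss_high = shoulder_up(weight_loss_pct, 5, 10)  # % body weight lost (assumed scale)
    crisp = 1.0 if (family_history and hemoptysis) else 0.0
    return min(pain_high, loss_high, crisp)          # firing strength of the rule

strength = rule_stage3(pain=9, weight_loss_pct=12, family_history=True, hemoptysis=True)
print(f"firing strength for 'stage = 3': {strength:.2f}")   # -> 1.00 for this input
```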
Step 3(c): The imagery data described in Table 4 is handled by the Imagery Data Component using image processing techniques.
Input: Image data
Output: Image Data
Input:
If (Exposure to Harmful Chemicals is low) and (Hemoptysis is low) and (pain in chest or bone is low)
See Fig. 4(a) and 4(b).
Step 4: Decision aggregation is performed by the system based on the results obtained from the individual components.
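A simple way to realize this aggregation is a weighted combination of the component outputs. The weights, the 0-1 confidence convention, and the decision threshold below are assumptions for illustration; the paper does not specify its aggregation rule.

```python
# Hypothetical weighted decision aggregation over the three component outputs.

def aggregate(conf_numeric, conf_fuzzy, conf_image,
              weights=(0.3, 0.3, 0.4), threshold=0.5):
    """Combine per-component confidences (0-1) that the patient has lung cancer."""
    score = (weights[0] * conf_numeric
             + weights[1] * conf_fuzzy
             + weights[2] * conf_image)
    verdict = "lung cancer = yes" if score >= threshold else "lung cancer = no"
    return verdict, score

print(aggregate(0.8, 0.7, 0.9))   # -> ('lung cancer = yes', 0.81)
```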
Step 5: Finally, the system suggests the necessary diagnosis/treatment. Figure 5 shows a typical treatment suggested by the system.
CONCLUSIONS
A novel integrated predictive system has been developed using soft computing techniques for the prognosis and diagnosis of lung cancer. It helps both patients and doctors detect the stage of lung cancer using all types of datasets, including imagery, categorical, and numerical data. The results of the system are also more accurate and promising when compared with other lung cancer analysis systems.
FUTURE ENHANCEMENTS
This research work can be developed into a smart-device application. In that case, a graphical user interface (GUI) is required so that the user can interact with the system easily. Smart devices give users the ability to access the system anywhere, which makes the diagnostic process faster and easier. | 1,989.4 | 2020-12-02T00:00:00.000 | [ "Medicine", "Computer Science" ] |
GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data
Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios, a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and analyses of real datasets demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
INTRODUCTION
High-throughput sequencing experiments such as RNA-Seq and microbiome sequencing are now routinely employed to interrogate the biological systems at the genome scale (Wang, Gerstein & Snyder, 2009). After processing of the raw sequence reads, the sequencing data usually presents as a count table of detected features. The complex processes involved in the sequencing causes the sequencing depth (library size) to vary across samples, sometimes ranging several orders of magnitude. Normalization, which aims to correct or reduce the bias introduced by variable library sizes, is an essential preprocessing step before any downstream statistical analyses for high-throughput sequencing experiments (Dillies et al., 2013;Li et al., 2015). Normalization is especially critical when the library size is a confounding factor that correlates with the variable of interest. An inappropriate normalization method may either reduce statistical power with calculation. We are thus left with a very small number of common OTUs to calculate the size factor. As the OTU data become more sparse, RLE becomes less stable. For datasets like COMBO data, where there are no common OTUs, RLE fails. For TMM, a reference sample has to be selected before the size factor calculation. Reliance on a reference sample restricts the size factor calculation to a specific OTU set that the reference sample harbors (77-433 OTUs for COMBO data). Therefore, both RLE and TMM use only a small fraction of the data available in the OTU data and are not optimal from an information perspective.
One popular strategy to circumvent the zero-inflation problem is to add a pseudocount (Mandal et al., 2015). This practice has a Bayesian explanation and implicitly assumes that all the zeros are due to under-sampling (McMurdie & Holmes, 2014). However, this assumption may not be appropriate due to the large extent of structural zeros due to physical absence. Moreover, the choice of the pseudo-count is very arbitrary and it has been shown that the clustering results can be highly dependent upon the choice (Costea et al., 2014). Recently, a new normalization method cumulative sum scaling (CSS) has been developed for microbiome sequencing data (Paulson et al., 2013). In CSS, raw counts are divided by the cumulative sum of counts, up to a percentile determined using a data-driven approach. The percentile is aimed to capture the relatively invariant count distribution for a dataset. However, the determination of the percentiles could fail for microbiome datasets that have high count variability. Therefore, a more robust method to address the zero-inflated sequencing data is still needed.
Here we propose a novel inter-sample normalization method geometric mean of pairwise ratios (GMPR), developed specifically for zero-inflated sequencing data such as microbiome sequencing data. By comprehensive tests on simulated and real datasets, we show that GMPR outperforms the other competing methods for zero-inflated count data.
GMPR normalization details
Our method extends the idea of RLE normalization for RNA-Seq data and relies on the same assumption that there is a large invariant part in the count data. Assume we have a count table of OTUs from 16S rDNA targeted microbiome sequencing, and denote by $c_{ki}$ the count of the $k$th OTU ($k = 1, \ldots, q$) in the $i$th ($i = 1, \ldots, n$) sample. The RLE method calculates the size factor $s_i$, which estimates the (relative) library size of a given sample, in two steps.
Step 1: Calculate the geometric mean for each OTU across samples, $g_k = \left(\prod_{j=1}^{n} c_{kj}\right)^{1/n}$.
Step 2: For a given sample $i$, take the median ratio of its counts to these geometric means, $s_i = \mathrm{median}_k\, (c_{ki} / g_k)$.
Since the geometric mean is not well defined for features with 0s, features with 0s are usually excluded from the size factor calculation. However, for zero-inflated data such as microbiome sequencing data, as the sample size increases, the probability that a feature has no 0s becomes smaller. It is not uncommon that a large dataset does not share any common taxa, and in such cases RLE fails. As an alternative, adding a pseudo-count such as 1 or 0.5 to the original counts has been suggested to eliminate 0s (Mandal et al., 2015). Since the majority of the counts may be 0s for microbiome data, adding even a small pseudo-count could have a dramatic effect on the geometric means of most OTUs.
To circumvent the problem, GMPR reverses the order of the two steps of RLE. The first step is to calculate $r_{ij}$, the median count ratio of nonzero counts between samples $i$ and $j$: $r_{ij} = \mathrm{median}_{k \in \Omega_{ij}}\, (c_{ki} / c_{kj})$, where $\Omega_{ij}$ is the set of OTUs with nonzero counts in both samples. The second step is to calculate the size factor $s_i$ for a given sample $i$ as the geometric mean of these pairwise ratios, $s_i = \left(\prod_{j \ne i} r_{ij}\right)^{1/(n-1)}$, $i = 1, \ldots, n$.
Figure 1 illustrates the procedure of GMPR. The basic strategy of GMPR is to conduct the pairwise comparison first and then combine the pairwise results to obtain the final estimate. Although only a small number of OTUs (or none) are shared across all samples due to severe zero-inflation, every pair of samples usually shares many OTUs. For example, 83 OTUs are shared on average between COMBO sample pairs. Thus, for each pairwise comparison, we focus on the common OTUs observed in both samples to obtain a reliable inference of the abundance ratio between them. We then synthesize the pairwise abundance ratios using a geometric mean to obtain the size factor. Based on this pairwise strategy, we utilize far more information than RLE and TMM, both of which are restricted to a small subset of OTUs. It should be noted that GMPR is a general method, which could in principle be applied to any type of sequencing data.
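A minimal NumPy sketch of the two GMPR steps described above is given below, assuming `counts` is an OTU-by-sample (q x n) matrix of raw counts. It is an illustrative reimplementation for exposition, not the authors' released R implementation.

```python
import numpy as np

def gmpr_size_factors(counts: np.ndarray) -> np.ndarray:
    """GMPR size factors for an OTU-by-sample count matrix (q x n)."""
    q, n = counts.shape
    size_factors = np.zeros(n)
    for i in range(n):
        log_ratios = []
        for j in range(n):
            if i == j:
                continue
            shared = (counts[:, i] > 0) & (counts[:, j] > 0)  # OTUs observed in both samples
            if not shared.any():
                continue  # no shared OTUs for this pair: skip it
            r_ij = np.median(counts[shared, i] / counts[shared, j])  # median ratio over shared OTUs
            log_ratios.append(np.log(r_ij))
        size_factors[i] = np.exp(np.mean(log_ratios))  # geometric mean of the pairwise ratios
    return size_factors

# Usage: normalized = counts / gmpr_size_factors(counts)
```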
Simulation studies to evaluate the performance of GMPR normalization
We study the performance of GMPR using simulated OTU datasets. Specifically, we study the robustness of GMPR to differential and outlier OTUs, and the effect on the performance of differential abundance analysis (DAA) of OTU data. We compare GMPR to competing normalization methods including CSS, RLE, RLE with pseudo-count 1 (RLE+), TMM, TMM with pseudo-count 1 (TMM+) and TSS. The details of calculating the size factors using each normalization method are described in Table 1. The size factors from different normalization methods are further divided by the median so that they are on the same scale.
Robustness to differential and outlier OTUs
We first use a perturbation-based simulation approach to evaluate the performance of normalization methods, focusing on their robustness to differentially abundant OTUs and sample-specific outlier OTUs. The idea is that we first simulate the counts from a common probabilistic distribution so that the total count is a proxy of the "true" library size. Next, we perturb the counts in different ways, apply different normalization methods to the perturbed counts, and evaluate the performance based on the correlation between the estimated size factor and the "true" library size. Specifically, we generate zero-inflated count data based on a Dirichlet-multinomial model with known library sizes (Chen & Li, 2013). The mean and dispersion parameters of the Dirichlet-multinomial distribution are estimated from the COMBO dataset after filtering out rare OTUs with less than 10 reads and discarding samples with less than 1,000 reads (n = 98, q = 625) (Wu et al., 2011). The library sizes are also drawn from those of the COMBO data. To investigate the effect of sparsity (the number of zeros), OTU counts are simulated with different zero percentages (~60%, 70% and 80%) by adjusting the dispersion parameter. A varying percentage of OTUs (0%, 1%, 2%, 4%, 8%, 16%, 32%, 64%) are perturbed in each set of simulations, with varying strength of perturbation. The counts $c_{ki}$ of perturbed OTUs are changed to $\sqrt{c_{ki}}$ or $c_{ki}^2$ for strong perturbation and to $0.25\,c_{ki}$ or $4\,c_{ki}$ for moderate perturbation.
We employ two perturbation approaches where we decrease/increase the abundances of a "fixed" or "random" set of OTUs. As shown in Fig. 2, in the "fixed" perturbation approach, the same set of OTUs are decreased/increased in the same direction for all samples, reflecting differentially abundant OTUs under a certain condition such as disease state. In the "random" perturbation approach, each sample has a random set of OTUs perturbed with a random direction, mimicking the sample-specific outliers.
Finally, size factors for all methods are estimated and the Pearson's correlation between the estimated and "true" library sizes is calculated. The simulation is repeated 25 times and the mean estimate and its 95% confidence intervals (CIs) are reported.
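The evaluation loop can be sketched as follows. The Dirichlet-multinomial generation is simplified to a single multinomial draw per sample, and all parameter values are illustrative rather than the COMBO-derived estimates; the sketch reuses the `gmpr_size_factors` function from the earlier code block.

```python
import numpy as np

# Sketch of the perturbation-based evaluation: perturb a fraction of OTUs and
# compare estimated size factors against the "true" library sizes with Pearson
# correlation. Parameters are illustrative assumptions.

rng = np.random.default_rng(0)
q, n = 625, 98
props = rng.dirichlet(np.full(q, 0.05))                  # baseline OTU proportions (assumed)
true_libsize = rng.integers(5_000, 50_000, size=n)       # "true" library sizes
counts = np.vstack([rng.multinomial(m, props) for m in true_libsize]).T  # q x n

# "fixed" strong perturbation: square the counts of 8% of OTUs in every sample
perturbed = counts.astype(float).copy()
idx = rng.choice(q, size=int(0.08 * q), replace=False)
perturbed[idx, :] = perturbed[idx, :] ** 2

est = gmpr_size_factors(perturbed)                       # from the sketch above
corr = np.corrcoef(est, true_libsize)[0, 1]
print(f"Pearson correlation with true library size: {corr:.3f}")
```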
Effect on the performance of DAA
One use of the estimated size factor is for DAA of OTU data, where the size factor (usually on a log scale) is included as an offset in a count-based parametric model to address variable library sizes. Many count-based models have been proposed for DAA, including DESeq2 and edgeR (McMurdie & Holmes, 2014).
Table 1: Calculation of size factors for the normalization methods compared in the analysis.
GMPR (Geometric Mean of Pairwise Ratios):
The size factors for all samples are calculated by GMPR described in the Method section.
CSS (Cumulative Sum Scaling): The size factors for all samples are calculated by applying newMRexperiment, cumNorm and normFactors in Bioconductor package metagenomeSeq (Paulson et al., 2013).
RLE (Relative Log Expression):
The size factors for all samples are calculated by the calcNormFactors function with the parameter set as "RLE" in the edgeR Bioconductor package (Anders & Huber, 2010). The scaled size factors are obtained by multiplying the size factors with the total read count.
RLE+ (Relative Log Expression plus pseudo-counts):
The scaled size factors for all samples are calculated in the same way as RLE, except that a pseudo-count of 1 is added to each data entry.
TMM (Trimmed Mean of M values):
The size factors for all samples are calculated by the calcNormFactors function with the parameter set as "TMM" in the edgeR Bioconductor package (Robinson & Oshlack, 2010). The scaled size factors are obtained by multiplying the size factors with the total read count.
TMM+ (Trimmed Mean of M values plus pseudo-counts):
The scaled size factors for all samples are calculated in the same way as TMM, except that a pseudo-count of 1 is added to each data entry.
TSS (Total Sum Scaling):
The size factors are taken to be the total read counts.
These methods usually come with their native normalization schemes, such as RLE for DESeq2 and TMM for edgeR. Therefore, it is interesting to see if the GMPR normalization could improve the performance of these methods. To this end, we use DESeq2 to perform DAA on the OTU table, since DESeq2 has been shown to be more robust than edgeR for zero-inflated datasets (Chen et al., 2018). We compare the performance of DESeq2 using its native RLE normalization to that using GMPR or TSS normalization.
Figure 2 (caption): Illustration of the simulation strategy. In the "fixed" perturbation approach, the abundances of the same set of OTUs are decreased/increased for all samples, reflecting differentially abundant OTUs under certain conditions such as disease state. In the "random" perturbation approach, each sample has a random set of OTUs perturbed in a random direction, reflecting sample-specific outliers. The darkness of the color indicates the OTU abundance. Full-size DOI: 10.7717/peerj.4600/fig-2
We use the same simulation strategy described in Chen et al. (2018). Specifically, a zero-inflated negative binomial (ZINB) distribution is used to simulate the OTU count data. The ZINB probability distribution function is a mixture of a point mass at zero ($I_0$) and a negative binomial ($f_{nb}$) distribution, of the form $f(c_{ki}) = p_{ki}\, I_0(c_{ki}) + (1 - p_{ki})\, f_{nb}(c_{ki}; \mu_{ki}, \phi_{ki})$. The three parameters, prevalence ($p_{ki}$), abundance ($\mu_{ki}$) and dispersion ($\phi_{ki}$), fully capture the zero-inflated and dispersed count data. We generate the simulated datasets (two sample groups of size 49 each) based on the parameter values estimated from the COMBO dataset. Five percent of OTUs are randomly selected to have their counts in one group multiplied by a factor of four. The groups in which this occurs are randomly selected, and thus the abundance change is relatively "balanced." To further study the performance under strong compositional effects, on top of the "balanced" simulation, we also select two highly abundant OTUs (π = 0.168 and 0.083, respectively) to be differentially abundant in one group. We then apply DESeq2 on the simulated datasets with RLE, GMPR and TSS normalization, denoting these three approaches DESeq2-RLE, DESeq2-GMPR and DESeq2-TSS. For each approach, the P-values are calculated for each OTU and corrected for multiple testing using false discovery rate (FDR) control (Benjamini-Hochberg procedure). We evaluate the performance based on FDR control and ROC analysis, where the true positive rate is plotted against the false positive rate at different P-value cutoffs. The observed FDR is calculated as $\mathrm{FDR} = \mathrm{FP} / (\mathrm{FP} + \mathrm{TP})$, where FP and TP are the numbers of false and true positives, respectively. Simulation results are averaged over 100 repetitions.
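A sketch of the ZINB count generation and of the observed-FDR computation is shown below. The negative binomial parameterization (size parameter = 1/dispersion, the usual NB2 convention) and all numeric values are illustrative assumptions rather than the COMBO-derived estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def rzinb(size, prevalence, mean, dispersion):
    """Zero-inflated negative binomial draws: point mass at 0 with probability
    `prevalence`, otherwise NB with the given mean and dispersion."""
    nb = rng.negative_binomial(n=1.0 / dispersion,
                               p=1.0 / (1.0 + mean * dispersion), size=size)
    zeros = rng.random(size) < prevalence
    return np.where(zeros, 0, nb)

counts = rzinb(size=(625, 98), prevalence=0.6, mean=20.0, dispersion=0.5)  # q x n toy table

def observed_fdr(pvals, truth, alpha=0.05):
    """Observed FDR = FP / (FP + TP) among OTUs called significant at the cutoff."""
    called = pvals <= alpha
    fp = np.sum(called & ~truth)
    tp = np.sum(called & truth)
    return fp / max(fp + tp, 1)
```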
RESULTS
Simulation: GMPR is robust to differential and outlier OTUs
We first study the robustness of GMPR to differentially abundant OTUs and sample-specific outlier OTUs by using the perturbation-based simulation approach, where we artificially alter the abundances of a "fixed" or "random" set of OTUs under different levels of zero-inflation, percentages of perturbed OTUs and strengths of perturbation.
In the simulation with "fixed" perturbation (Fig. 3), the performance of all methods decreases in most cases as the zero percentage increases. TSS has excellent performance under moderate perturbation but performs unstably under strong perturbation (the correlation decreases steeply when the percentage of perturbed OTUs increases from 1% to 4%; after that, the correlation increases since the total sum moves closer to $\sum_k c_{ki}^2$, which is highly correlated with the original library size $\sum_k c_{ki}$). GMPR, followed by CSS, consistently outperforms the other methods when the perturbation is strong. When the perturbation is moderate, GMPR is only secondary to TSS when the percentage of zeros is high (80%) and on par with TSS when the percentage of zeros is moderate (70%) or low (60%). For the RNA-Seq based methods, TMM performs better than RLE under either strong or moderate perturbation. Although the performance of RLE+ improves by adding pseudo-counts to the OTU data, the size factor estimated by TMM+ barely correlates with the true library size when the zero percentage is high (70% and 80%). In contrast, GMPR, together with CSS, performs stably in all cases, and GMPR yields better size factor estimates than CSS. In the "random" perturbation scenario (Fig. 4), the performance of all methods decreases with increasing zero percentage, as in the "fixed" scenario. Similar to the "fixed" perturbation scenario, TSS has excellent performance under moderate perturbation but performs poorly under strong perturbation. When the perturbation is strong, GMPR, followed by CSS, still outperforms the other methods. The RNA-Seq based methods, including TMM, TMM+, RLE and RLE+, show a similar trend as in the "fixed" perturbation. However, compared to the "fixed" perturbation, the performance of TMM and RLE decreases more markedly as the number of perturbed OTUs increases. In contrast, GMPR and CSS are more robust to sample-specific outlier OTUs in all cases, and GMPR results in better size factor estimates than CSS.
Simulation: GMPR improves the performance of DAA
In the previous section, we demonstrated that GMPR could better recover the "true" library size in the presence of differentially abundant OTUs or sample-specific outlier OTUs. In this section, from a different perspective, we show that the robustness of the GMPR method translates into better false positive control and higher statistical power in the context of DAA, where the aim is to detect differentially abundant OTUs between two sample groups. We simulate the zero-inflated count data using the ZINB model and use DESeq2 to perform DAA with different normalization schemes (RLE, GMPR and TSS). In one scenario, we randomly select 5% of OTUs to be differential with a fold change of four in either sample group (Scene 1). In the other scenario, in addition to the 5% randomly selected OTUs, we select two highly abundant OTUs to be differentially abundant in one group to create strong compositional effects (Scene 2). In this scenario, the abundance change of these highly abundant OTUs will lead to a change in the "relative" abundances of the other OTUs if the TSS normalization is used. The results for the two scenarios are presented in Fig. 5. In Scene 1 (Figs. 5A and 5B), we see that all approaches have slightly elevated FDRs relative to the nominal levels (Fig. 5A), probably due to inaccurate P-value calculation based on the asymptotic distribution of the Wald statistic for taxa with excessive zeros. Nevertheless, the observed FDR of DESeq2 using GMPR is closer to the nominal level than that using RLE (native normalization) or TSS. In terms of ROC-based power analysis (Fig. 5B), GMPR achieves a higher area under the curve than RLE and TSS. In this "balanced" scenario, TSS performs relatively well and is even slightly better than RLE. The performance differences are more revealing in Scene 2 (Figs. 5C and 5D), where we artificially alter the abundances of two highly abundant OTUs. In this setting, TSS has poor FDR control due to strong compositional effects and has a much lower statistical power at the same false positive rate. In contrast, the performance of GMPR and RLE remains stable, and GMPR performs better than RLE in terms of both FDR control and power.
Real data: GMPR reduces the inter-sample variability of normalized abundances
We next evaluate various normalization methods using 38 gut microbiome datasets from 16S rDNA sequencing of stool samples (Table S1). These experimental datasets were retrieved from the Qiita database (https://qiita.ucsd.edu/), each with a sample size larger than 50. The 38 datasets come from different species as well as a wide range of biological conditions. If a study involves multiple species, we include samples from the predominant species. We focus the analysis on gut microbiome samples because the gut microbiome is more studied than other sample types. For the real data, it is not feasible to calculate the correlation between estimated size factors and "true" library sizes as done for the simulations. As an alternative, we use the inter-sample variability as a performance measure, since an appropriate normalization method will reduce the variability of the normalized OTU abundances (raw counts divided by the size factor) due to different library sizes. A similar measure has been used in the evaluation of normalization performance for microarray data (Fortin et al., 2014). We use the traditional variance as the metric to assess inter-sample variability. For each method, the variance of the normalized abundance of each OTU across all samples is calculated, and the median of the variances of all OTUs or of stratified OTUs (based on their prevalence) is reported for each study. For each study, all methods are ranked based on these median variances. The distributions of the ranks across the 38 studies are depicted in Fig. 6 for each method. A higher ranking (lower values in the box plot) indicates better performance in terms of minimizing inter-sample variability.
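The inter-sample variability measure can be sketched as follows: normalize the count matrix by each method's size factors, compute the variance of each OTU across samples, summarize the method by the median OTU variance, and rank methods by that summary. The `size_factor_fns` mapping and the method names in the usage comment are illustrative assumptions.

```python
import numpy as np

def median_otu_variance(counts, size_factors):
    """Median variance of normalized OTU abundances across samples (counts: q x n)."""
    normalized = counts / size_factors          # broadcast: divide each sample column by its factor
    return np.median(np.var(normalized, axis=1))

def rank_methods(counts, size_factor_fns):
    """Rank normalization methods by their median OTU variance (lower is better)."""
    scores = {name: median_otu_variance(counts, fn(counts))
              for name, fn in size_factor_fns.items()}
    return sorted(scores, key=scores.get)       # best (lowest variance) first

# e.g. rank_methods(counts, {"GMPR": gmpr_size_factors,
#                            "TSS": lambda c: c.sum(axis=0) / np.median(c.sum(axis=0))})
```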
In Fig. 6, we can see that GMPR achieves the best performance, with top ranks in 22 out of 38 datasets, followed by CSS, which ranks first in seven datasets (Table S2). This result is consistent with the simulation studies, where GMPR and CSS are overall more robust to perturbations than the other methods. Moreover, GMPR consistently performs the best for reducing the variability of OTUs at different prevalence levels. It is also noticeable that the inter-sample variability is largest without normalization (RAW) and that TSS does not perform well for a large number of studies. As expected, RLE only works for eight out of 38 datasets due to the large percentage of zero read counts. By adding pseudo-counts, RLE+ improves the performance significantly compared to RLE. However, there is not much improvement of TMM+ compared to TMM. To see if the differences are significant, we performed paired Wilcoxon signed-rank tests between the ranks of the 38 datasets obtained by GMPR and by each of the other methods. GMPR achieves significantly better ranking than the other methods (P < 0.05 for all OTUs or stratified OTUs). Fig. S1 compares the distributions of the OTU variances and their ranks for an example dataset (study ID 1561, all OTUs). Each OTU is ranked based on its variance among the competing methods. Although the difference in median variance is moderate, GMPR performs significantly better than the other methods (P < 0.05 for all comparisons) and achieves a much lower rank.
To demonstrate the performance on low-diversity microbiome samples, we perform the same analyses on an oral and a skin microbiome dataset from Qiita (Table S1, bottom). Consistent with the performance on the gut microbiome datasets, although the difference in median variance is small (Fig. S2), GMPR achieves the lowest rank for the majority of the OTUs, followed by CSS (Fig. 7).
Real data: GMPR improves the reproducibility of normalized abundances
When replicates are available, we can evaluate the performance of normalization based on its ability to reduce between-replicate variability: proper normalization will increase the reproducibility of the normalized OTU abundances. In this section, we compare the performance of different normalization methods based on a reproducibility analysis of a fecal stability study, which aimed to compare the temporal stability of different stool collection methods (Sinha et al., 2016). In this study, 20 healthy volunteers provided stool samples, which were subjected to different treatment methods. The stool samples were then frozen immediately or after storage at ambient temperature for one or four days, for the study of the stability of the microbiota. Each sample had two to three replicates for each condition, so we could perform a reproducibility analysis based on the replicate samples. We evaluate the reproducibility for the "no additive" treatment method for the data generated at the Knight Lab (Sinha et al., 2016), where the stool samples were left untreated. Under this condition, certain bacteria will grow at ambient temperature with varying growth rates, and we thus expect a lower agreement between replicates after four days of ambient-temperature storage. We conduct the reproducibility analysis on the core genera, which are present in more than 75% of samples (a total of 26 genera are assessed). We first estimate the size factors based on the OTU-level data, and the genus-level counts are divided by the size factors to produce normalized genus-level abundances. The intraclass correlation coefficient (ICC) is used to quantify the reproducibility of the genus-level normalized abundances. The ICC is defined as $\mathrm{ICC} = \sigma_b^2 / (\sigma_b^2 + \sigma_\varepsilon^2)$, where $\sigma_b^2$ represents the biological (sample-to-sample) variability and $\sigma_\varepsilon^2$ represents the replicate-to-replicate variability. We calculate the ICC for the 26 core genera for "day 0" (immediately frozen) and "day 4" (frozen after four-day storage), respectively. The ICCs are estimated using the R package "ICC" based on a mixed effects model; an ICC closer to one indicates excellent reproducibility. Figure 8 shows that the genera in "day 0" have higher reproducibility than in "day 4" regardless of the normalization method used, since reproducibility decreases as certain bacteria grow randomly as time elapses. While all the methods result in comparable ICCs for "day 0," GMPR achieves higher ICCs for "day 4" than the other methods. Sinha et al. (2016) showed that most taxa were relatively stable over four days and only a small group of taxa (mostly OTUs from Gammaproteobacteria) displayed pronounced growth at ambient temperature. This suggests that most of the genera may be temporally stable and that their "day 4" ICCs should be close to their "day 0" ICCs. However, due to the compositional effect, if the data are not properly normalized, a few fast-growing bacteria will skew the relative abundances of the other bacteria, leading to apparently lower ICCs for those stable genera. In contrast, the GMPR method is more robust to differential or outlier taxa, as demonstrated by the simulation study, which explains the higher ICCs for the "day 4" samples.
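As a rough stand-in for the mixed-effects estimate from the R "ICC" package, the one-way ANOVA estimator of the ICC for a single genus can be sketched as below, assuming a balanced design (the same number of replicates per subject).

```python
import numpy as np

def icc_oneway(abund: np.ndarray) -> float:
    """One-way ANOVA ICC estimate; `abund` is a (subjects x replicates) array of
    normalized abundances for one genus (balanced design assumed)."""
    s, k = abund.shape                                     # subjects, replicates per subject
    subject_means = abund.mean(axis=1)
    grand_mean = abund.mean()
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (s - 1)          # between-subject MS
    msw = np.sum((abund - subject_means[:, None]) ** 2) / (s * (k - 1))    # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)             # estimates sigma_b^2 / (sigma_b^2 + sigma_eps^2)
```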
DISCUSSION AND CONCLUSION
Normalization is a critical step in processing microbiome data, rendering multiple samples comparable by removing the bias caused by variable sequencing depths. Normalization paves the way for the downstream analysis, especially for DAA of microbiome data, where proper normalization could reduce the false positive rates due to compositional effects. However, the characteristics of microbiome sequencing data, including over-dispersion and zero-inflation, make the normalization a non-trivial task.
In this study, we propose the GMPR method for normalizing microbiome sequencing data by addressing the zero-inflation. In one simulation study, we demonstrate GMPR's effectiveness by showing it performs better than other normalization methods in recovering the original library sizes when a subset of OTUs are differentially abundant or when random outlier OTUs exist. In another simulation study, GMPR yields better FDR control and higher power in detecting differentially abundant OTUs. In real data analysis, we show GMPR reduces the inter-sample variability and increases inter-replicate reproducibility of normalized taxa abundances. Overall, GMPR outperforms RNA-Seq normalization methods including TMM and RLE and modified TMM+ and RLE+. It also yields better performance than CSS, which is a normalization method specifically designed for microbiome data. As a general normalization method for zero-inflated sequencing data, GMPR could also be applied to other sequencing data with excessive zeros such as single-cell RNA-Seq data (Vallejos et al., 2017).
We note that the main application of GMPR method is for taxon-level analysis such as the presented DAA and reproducibility analysis, where it is important to distinguish those "truly" differential from "falsely" differential taxa due to compositional effects. Although we could apply the proposed normalization to (weighted) distance-based statistical methods such as ordination, clustering and PERMANOVA (Caporaso et al., 2010;Chen et al., 2012) based on the GMPR-normalized abundance data, simulations show that the advantage of using GMPR is very limited for such applications, compared to the traditionally used TSS method (i.e., proportion-based method) (Fig. S3). This is explained by the fact that the distance-based analysis focuses on the overall dissimilarity and the proportional data is already efficient enough to capture the overall dissimilarity. Probably, more important factors to consider in distance-based statistical methods are the selection of the most relevant distance measure and/or the application of appropriate transformation after normalization (Costea et al., 2014;Thorsen et al., 2016).
Besides the size factor-based approaches (GMPR, CSS, TSS, RLE, TMM), the other popular approach for normalizing microbiome data is rarefaction. Both approaches have weaknesses and strengths for particular applications. Although rarefaction discards a significant portion of the reads and is probably not optimal from an information perspective, it is still widely used for microbiome data analysis, particularly for α- and β-diversity analysis. The reason for its extensive use is that the majority of the taxa in the microbiota are of low abundance and their presence/absence strongly depends on the sequencing depth. Rarefaction thus puts the comparison of α- and β-diversity on an equal basis. Size factor-based normalization, on the other hand, is unable to address this problem. Rarefaction is therefore still recommended for α- and β-diversity analysis, especially for unweighted measures and for confounded scenarios, where the sequencing depth correlates with the variable of interest (Weiss et al., 2017). For DAA, one major challenge is to address the compositional problem. Rarefaction has a limited ability in this regard since the total sum constraint still exists after rarefaction. In addition, it suffers from a great power loss due to the discarding of a large number of reads (McMurdie & Holmes, 2014). In contrast, the size factor-based approaches are capable of capturing the invariant part of the taxa counts and address the compositional problem efficiently through normalization by the size factors. The size factors can be naturally included as offsets in count-based parametric models to address uneven sequencing depth (Chen et al., 2018).
Geometric mean of pairwise ratios is an inter-sample normalization method with a computational complexity of $O(n^2 q)$, where n and q are the number of samples and features, respectively. While GMPR calculates the size factors for a typical microbiome dataset (n < 1,000) in seconds, it does not scale linearly with the sample size. Large sample sizes are increasingly common for epidemiological and genetic association studies of the microbiome (Robinson, Brotman & Ravel, 2016; Hall, Tolonen & Xavier, 2017), where tens or hundreds of thousands of samples will be collected to detect weak association signals. For such large sample sizes, GMPR may take much longer. A potential strategy for efficient computation under ultra-large sample sizes is to divide the dataset into overlapping blocks, calculate GMPR size factors on these blocks, and unify the size factors through the overlapping samples between blocks. Increasing the computational efficiency of GMPR for ultra-large sample sizes will be the focus of our future research.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
This work was supported by Mayo Clinic Center for Individualized Medicine. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the authors: Mayo Clinic Center for Individualized Medicine. | 6,882.4 | 2018-04-02T00:00:00.000 | [ "Biology", "Computer Science", "Environmental Science" ] |
Nanobiosensors as new diagnostic tools for SARS, MERS and COVID-19: from past to perspectives
The severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS) and novel coronavirus 19 (COVID-19) epidemics represent the biggest global health threats in the last two decades. These infections manifest as bronchitis, pneumonia or severe, sometimes fatal, respiratory illness. The novel coronavirus seems to be associated with milder infections but it has spread globally more rapidly becoming a pandemic. This review summarises the state of the art of nanotechnology-based affinity biosensors for SARS, MERS and COVID-19 detection. The nanobiosensors are antibody- or DNA-based biosensors with electrochemical, optical or FET-based transduction. Various kinds of nanomaterials, such as metal nanoparticles, nanowires and graphene, have been merged to the affinity biosensors to enhance their analytical performances. The advantages of the use of the nanomaterials are highlighted, and the results compared with those obtained using non-nanostructured biosensors. A critical comparison with conventional methods, such as RT-PCR and ELISA, is also reported. It is hoped that this review will provide interesting information for the future development of new reliable nano-based platforms for point-of-care diagnostic devices for COVID-19 prevention and control.
Coronaviruses are a large family of enveloped, singlestranded, positive-sense RNA viruses that mostly infect animals, such as birds and mammals, but may "spill over" from the animal host to human populations. There are seven coronaviruses infecting humans, four of them cause mild infection in the upper respiratory tract, whereas three of them (SARS-CoV, MERS-CoV, SARS-CoV-2) cause respiratory illnesses of varying severity, from the common cold to fatal pneumonia [4,5]. Before SARS appeared, coronaviruses had never been particularly dangerous to humans, causing severe diseases only in animals [6].
According to the World Health Organization (WHO), the onset of the SARS epidemic occurred in Guangdong, China, in November 2002, followed by the worldwide spread of the virus with reported cases in 29 countries, including Canada and the USA. However, 8 months later, in July 2003, after causing 774 deaths, the SARS epidemic was declared to be contained by the WHO [7].
Only a decade later, another pathogenic coronavirus, MERS-CoV, caused an endemic in Middle Eastern countries. Since 2012, there have been at least 845 MERS-CoV-related deaths in 27 countries but about 80% of the reported cases were in Saudi Arabia. Nowadays, it continues to cause sporadic outbreaks, mostly localized in the Arabian Peninsula [8].
The outbreak of the novel, highly contagious SARS-CoV-2 was first identified in Wuhan, Hubei province, China, in early December 2019 [9]. The SARS-CoV-2 virus rapidly spread across continents; on 30 January 2020, the WHO declared the outbreak a Public Health Emergency of International Concern, and a pandemic on 11 March. Infections with SARS-CoV-2 are now widespread and, as of 31 August 2020, 25,467,390 cases had been confirmed in more than 110 countries, with 851,000 deaths [10][11][12][13].
The three Coronaviruses belong to the same Betacoronavirus genus [14]. Clinical presentation of COVID-19, the disease caused by SARS-CoV-2 virus, shows great similarities with SARS and MERS pneumonia. More than 80% of cases are mild, and patients normally recover within 2 weeks. However, some patients show severe symptoms, like acute respiratory distress syndrome (ARDS) and about 5% show critical conditions, which may evolve into septic shock or multiple organ failure [13].
According to early phylogenetic studies, SARS-CoV-2 is related to SARS and both of them show over 85% genome sequence identity with the bat SARS-like CoV, which would suggest the bat origin of the virus [15,16].
Several studies hypothesized the entry of these three viruses in humans from their natural reservoir bats, via intermediate host like civets and camels, in the case of SARS-CoV and MERS-CoV, respectively. The intermediate host of the SARS-CoV-2 still needs to be established, although some studies suggest pangolins as a possible host [17].
Although SARS and MERS have significantly higher mortality rates than COVID-19, the novel SARS-CoV-2 is more infectious and the overall number of deaths from COVID-19 far outweighs that from SARS or MERS [18][19][20]. Table 1 summarises the main features of the three coronaviruses.
Countries are racing to slow the spread of the virus by testing and treating patients, carrying out contact tracing, limiting travel, quarantining citizens, and cancelling large gatherings such as sporting events, concerts and schools [21].
Early diagnostic tests are essential tools to track the spread of the virus in order to control the epidemic. At the moment, most testing for COVID-19 is currently done on viral genetic material from nasopharyngeal swabs, using the reverse transcription polymerase chain reaction (RT-PCR), a molecular biology technique which amplifies a specific genetic sequence of the virus. Alternatively, the enzyme-linked immunosorbent assay (ELISA), a common biochemical technique, is used to detect the specific antibodies or antigens in patient blood. Unfortunately, both methods require expensive instruments and the use of specialized laboratories and well-trained personnel [22]. In particular, serological tests show evidence of low sensitivity, accuracy and specificity [23].
Obviously, the first method tells if a person is currently infected, whereas the second method can determine if a patient has at some point been infected by SARS-CoV-2. Reverse transcription loop-mediated isothermal amplification (RT-LAMP) is a more recent molecular technique, where the amplification is conducted at a single temperature and does not need specialized laboratory equipment [24]. However, all these methods are not suitable for point-of-care testing [11,25].
Therefore, rapid, accurate, low-cost, miniaturized diagnostic platforms for virus detection usable at the point-of-care remain a challenge [25,26]. Lateral flow immunoassays are qualitative chromatographic assays, similar to common pregnancy tests, based on a two-site non-competitive format, as ELISA tests, but usable at the point-of-care. They are produced as test kits to be used by a specialist or by the patients themselves, but they suffer from a poor sensitivity [27]. The main characteristics of current methods for COVID-19 detection are summarized in Table 2.
Affinity-based biosensors (ABBs) represent interesting diagnostic tools for early and affordable detection of virus diseases, thanks to their properties, such as high sensitivity, high specificity, fast response time and the possibility of miniaturization for POC use [22,[28][29][30][31]. These peculiar characteristics allow them to complement current methods of screening and monitoring of a virus outbreak, especially when in situ and real-time analysis is required.
The recent advances in nanotechnology and the use of nanomaterials in the construction of biosensors resulted in a significant improvement in the performances of these devices [32][33][34]. Nanomaterials allowed a large increase in biosensors efficiency and sensitivity, thanks to their excellent conductivity, extraordinary photoelectrochemical properties and the possibility of miniaturization of the sensing platform [35][36][37][38][39].
In this review, we describe the nanobiosensors based on affinity interactions reported in the literature for the detection of SARS, MERS and COVID-19. A comparison with non-nanostructured affinity-based biosensors and with conventional methods is also provided. The review is structured into four sections. The first section presents a brief overview of the different classes of nanomaterials used for affinity-based biosensors, and the other sections are divided by analyte type, with a particular focus on COVID-19, because of the urgent need for early diagnostic methods for SARS-CoV-2 detection in order to deal with the current COVID-19 pandemic.
Affinity-based nanobiosensors
The use of novel nanomaterials in biosensing may overcome some of the challenges and limitations of biosensor technology. Nanomaterials by definition must have dimensions in the range 1-100 nm. They are developed to exhibit novel characteristics compared to the same material without nanoscale features, such as increased strength, conductivity and unique optical, magnetic, thermal and chemical properties. Affinity-based nanobiosensors combine the high specificity of the biorecognition agents, namely bioreceptors such as antibodies, ssDNA and aptamers, with the extraordinary properties of the nanomaterials, which allow enhanced sensitivities and detection limits lowered by several orders of magnitude [37,40].
The high specific surface interaction of the nanobiosensor with the bioanalyte becomes highly efficient thanks to the extremely large surface/volume ratio, which enables the immobilization of an enhanced amount of bioreceptor units. Thus, the immobilization strategies used to conjugate the biorecognition agents onto the nanomaterials remain a constant challenge [41]. The technique used for the bioreceptor immobilization is one of the key factors in developing a reliable affinity-based nanobiosensor. In addition to the immobilization of the bio-molecules, the nanomaterials can serve for target recognition and for signal transduction and amplification [40].
The ABBs for coronavirus detection reported in the literature are based on gold nanoparticles (AuNPs) and nanoislands (AuNIs), graphene (GR), and nanowires (NWs). Figure 1 shows a schematic diagram of a nanomaterial-based affinity biosensor for coronavirus detection.
Gold nanoparticle and gold nanoisland affinity-based biosensor
Among the group of metal noble nanoparticles (MNPs), gold nanoparticles (AuNPs) are mostly used in biosensing application for virus infections due to their outstanding optical/ electrical properties, excellent biocompatibility, catalytic properties and relatively simple production pathway [45][46][47]. MNPs have been widely used as supporting electrode materials increasing electron transfer rates and surface-to-volume ratio, thus allowing the immobilization of large amounts of primary antibodies or cDNA, suppressing the non-specific binding (NSB) of proteins, with a consequent enhancement of the analytical response of the device.
AuNPs play a different role in the biosensing process depending on the transduction mode of the biosensor, namely optical and electrochemical biosensors. There are several optical sensing modalities for AuNPs, surface plasmon resonance (SPR) being the one that attracted most intensive research, as AuNPs are considered to have the ability to amplify the SPR signal [48]. As for the electrochemical biosensors, AuNPs allow to improve the analytical performances of the device through a double mechanism: (i) as novel immobilization platforms/electrochemical transducers, which allow loading of a larger amount of the biosensing element, thanks to the much higher surface area of AuNPs compared to flat gold surfaces; (ii) as labels for signal amplification [40].
AuNPs are often conjugated with other nanomaterials to further improve their binding capacity. In this context, carbon nanotubes (CNTs) have attracted much interest due to their unique properties [49]. Nanohybrids of AuNPs and CNTs have been realized, offering a more effective immobilization matrix. Platforms based on gold nanoislands were also used for numerous sensing applications [50]. They are basically gold aggregates with dimensions in the range 20-80 nm, obtained by deposition and annealing of the AuNPs at high temperature (560°C) for several hours (~10 h).
Graphene affinity-based biosensors
Graphene (G) and carbon nanotubes (CNTs) are the most promising nanostructured carbon materials for biosensing applications, where each allotrope has its own specific properties and advantages as a transducer element. Carbon nanotubes are one-atom-thick sheets of graphite, called graphene, rolled into cylinders with diameters of the order of a few nanometres [51][52][53][54][55][56][57][58]. Compared to CNTs, graphene has a much younger history. Graphene and CNTs share some properties, including excellent mechanical, electronic and thermal behaviour [59][60][61]. Thanks to its defect-rich nature, graphene can be easily functionalized by inserting functional groups on its 2D plane, thus becoming a good support for immobilizing ligands, such as antibodies and single-stranded DNA, for developing immunosensors and aptasensors, respectively [43,62,63]. Graphene, like CNTs, is water insoluble, which can be overcome by modifying its surface with hydrophilic functional groups in order to increase its solubility and suppress the NSB of proteins onto the electrode surface. Another common strategy to minimize NSB is graphene functionalization through its oxidation to graphene oxide (GO) and reduced graphene oxide (rGO), which have found many applications in immunosensor development [64]. The choice between G and GO depends on whether functionalization of the nanocarbon for biomolecule immobilization (more oxygen groups required) or higher conductivity (fewer oxygen groups required) is needed. Like CNTs, graphene is largely used in electrochemical and optical biosensors or field-effect transistor (FET) setups, where the changes in the conductivity of the graphene channel after the biorecognition event lead to high sensitivities [59,65]. Contrary to CNTs, ssDNA or oligonucleotide bioreceptors are reversibly adsorbed onto graphene oxide and subsequently released after the recognition event, thus allowing the recovery of the bioreceptors [34].
Nanowire affinity-based biosensors
Nanowires (NWs) are one-dimensional nanostructures in the form of wire that can be composed of both metallic and nonmetallic elements with nanometre sized diameters and micron long lengths. The NWs are robust and have high physical strength, directly attributed to their crystalline structure, and unique 1D morphology, electrical, mechanical, optical, magnetic and thermal properties. Silica NWs are broadly explored for biosensor applications, thanks to their optical, photonic and electronic properties with excellent biocompatibility for sensing application [66,67].
Silicon and indium oxide NWs are mostly explored as novel biosensing tools for highly sensitive virus detection, due to their wide bandgap, which broadens the scope of detection from purely electrochemical or FET-based detection to more simple optical methods [68,69].
Nanobiosensors for SARS detection
Up to now, only one nanobiosensor for SARS-CoV detection has been reported in the literature. It is a FET-based immunosensor, developed by Ishikawa and coworkers in 2009 [70], where the antigen-antibody binding generates a change in conductance correlated to the virus concentration. The biomarker used for SARS-CoV detection is the virus antigen nucleocapsid protein (N-protein), the most abundant protein in coronaviruses. Instead of conventional antibodies, antibody mimic proteins (AMPs) have been utilized as bioreceptors. The AMPs can be easily produced by in vitro selection techniques and are smaller and stable over a wider range of pH than normal antibodies. A fibronectin-based protein (Fn) has been properly engineered as the AMP capture agent. The exposed gate region of the FET-based immunosensor was modified with In2O3 nanowires on a Si/SiO2 substrate, in order to improve the immobilization of the AMPs and the signal transduction. The Fn protein was anchored to the NWs via the only thiol functional group present in the whole peptide sequence, from a cysteine residue. The schematic diagram showing the covalent immobilization of the Fn probe onto the nanowires is represented in Fig. 2. At the working pH of 7.4, the N-proteins are positively charged and therefore their binding on a p-type channel causes depletion of charge carriers (holes) and a consequent decrease in conductance. Bovine serum albumin (BSA) was used as a "blocking agent" for the nanowires and source-drain electrodes, thus avoiding NSB, which may lead to false-positive results (Fig. 2). The so-developed platform was able to detect the N-protein at subnanomolar concentrations, in the presence of 44 μM BSA, with a sensitivity comparable to current immunological detection methods, but with a shorter detection time and without the need for labelled reagents. Indeed, the high sensitivity and high selectivity of the proposed biosensor are achieved by the synergic effect of the In2O3 nanowires/Fn protein, which is able to selectively detect the SARS biomarker N-protein. Moreover, compared to other metal oxide nanowires (ZnO, SnO2, etc.), In2O3 nanowires have the advantage that they do not possess an insulating oxide layer, such as SiO2 for Si nanowires, which can decrease the sensor sensitivity; therefore, they contribute to a marked decrease of the detection limit of the sensor. Table 3 shows the characteristics and the analytical performances of the described SARS immunosensor and a comparison with other "non-nanotechnology-based" immunosensors for SARS detection reported in the literature. In particular, two SPR-based biosensors [71,72] and one piezoelectric biosensor [73] have been realized, but their sensitivities were lower by several orders of magnitude.
Nanobiosensors for MERS detection
An amperometric nano-immunosensor for MERS-CoV virus detection was described in 2019 by Layqah and Eissa [74]. In this case, the virus spike protein S1, which is the common target for neutralizing antibodies, was utilized as MERS biomarker [80,81].
The biosensor is based on an indirect competition between the free virus in the sample and the immobilized MERS-CoV recombinant spike protein S1, for a fixed antibody concentration added to the sample. The immunosensor was realized on an electrode array system, thus allowing the simultaneous detection of MERS-CoV and HCoV, another human coronavirus. The surface of the carbon electrodes was modified with AuNPs in order to enhance the electrochemical properties of the electrode, providing a higher surface area and a faster electron transfer rate. Subsequently, MERS-CoV and HCoV antigens were immobilized onto the AuNPs/carbon electrode by a simple drop-casting procedure, after incubating the electrode in a solution containing cysteamine and glutaraldehyde, for covalent binding of the NH2 groups of the antigens. A schematic representation of the AuNPs immunosensor is shown in Fig. 3. Non-specific adsorption was reduced by incubating the electrode in BSA solution, in order to block the unreacted aldehyde groups and the free gold surface. The experimental conditions were carefully optimized, in particular the concentration of antibody used for incubation of the antigen-modified electrode and the binding time, resulting in 10 μg/mL and 20 min, respectively. Detection was performed by measuring the peak current signal of the ferro/ferricyanide redox probe, properly added to the solution, with the square wave voltammetry (SWV) technique. A decrease of the SWV peak current is clearly observed after binding of the antibodies to the immobilized antigens, because of the "coverage" of the electrode surface by the antibody molecules; thus, a decrease of both electron transfer efficiency and current is registered. The so-realized immunosensor showed a good linear response from 0.001 to 100 ng/mL for MERS-CoV and a very high sensitivity, with a detection limit of 0.4 pg/mL, a value definitely lower than that obtained with the ELISA method (1 ng/mL) [82]. The characteristics of the biosensor are summarized in Table 3. The selectivity of the biosensor was studied by using different virus proteins, such as FluA and FluB, showing no cross-reactivity phenomena. The possibility of using the proposed biosensor for simultaneous detection of different types of CoVs was also confirmed by mixing the two proteins MERS-CoV and HCoV on the electrode surface. The stability of the sensor was good, as it showed only a 2% current decrease after 2 weeks. Finally, the proposed immunosensor was successfully tested in spiked nasal samples, showing good recovery percentages.
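For context, a detection limit of this kind is commonly estimated from a linear calibration curve as 3.3 times the standard deviation of the blank response divided by the slope. The sketch below illustrates that generic calculation with made-up numbers; it is not the calibration data or the exact procedure of ref. [74].

```python
import numpy as np

# Standard 3.3*sigma/slope detection-limit estimate from a linear calibration curve.
# All data points and the blank standard deviation are hypothetical.

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])           # ng/mL standards (hypothetical)
signal = np.array([0.10, 0.52, 0.95, 1.88, 3.75, 7.40])   # peak-current change, a.u. (hypothetical)

slope, intercept = np.polyfit(conc, signal, 1)             # linear calibration fit
sd_blank = 0.02                                            # std. dev. of blank replicates (assumed)
lod = 3.3 * sd_blank / slope                               # detection limit in ng/mL
print(f"slope = {slope:.3f} a.u. per ng/mL, LOD ≈ {lod * 1000:.1f} pg/mL")
```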
Viral antigen detection
The first biosensing strategy is the use of antibodies or cDNA to selectively capture the viral antigen or viral RNA. A graphene-based FET device has been properly engineered by Seo et al. [75], which also allows determination of the severity of COVID-19 disease. The sensing area of the FET-based biosensor is a graphene sheet, transferred to a SiO2/Si substrate and subsequently modified with SARS-CoV-2 spike antibody, properly immobilized onto the graphene sheet surface by drop-casting, as schematized in Fig. 4. The device allowed the detection of the SARS-CoV-2 antigen spike protein at concentrations as low as 1 fg/mL in phosphate buffer, a value much lower than that reported with ELISA and PCR methods [83]. The biosensor was tested in the universal transport medium (UTM), used for suspending the nasopharyngeal swabs for real clinical analysis. No reagent contained in UTM affected the measurements, and the detection limit was 100 fg/mL. In addition, the proposed COVID-19 sensor showed no significant response to MERS-CoV spike proteins, assuring high selectivity and specificity for the SARS-CoV-2 spike antigen protein. Finally, the performance of the sensor was tested in real clinical samples, collecting nasopharyngeal swab specimens from COVID-19 patients and from normal subjects. The COVID-19 FET-based nanobiosensor allowed discrimination between patient and normal samples with detection limits lower than those reported with other current methods, without any sample preparation or preprocessing. Noble metal nanostructures have frequently been used in virus detection systems to enhance functions such as specificity and sensitivity. A DNA-nanosensor for SARS-CoV-2 detection was recently proposed by Qiu and coworkers [76]. They realized a dual DNA-sensor consisting of a single chip modified with a two-dimensional distribution of gold nanoislands (AuNIs). The chip integrates the plasmonic photothermal (PPT) effect and localized surface plasmon resonance (LSPR) sensing transduction. The sensor chip was functionalized with complementary DNA (cDNA) receptors by forming Au-S bonds between the AuNIs and the thiol groups of the cDNA. A schematic representation of the AuNIs PPT-enhanced LSPR biosensor is shown in Fig. 5. Proper surface functionalization can suppress non-specific binding events, thus increasing the sensitivity of the biosensor. The PPT heat, generated in situ on the same AuNI chip when illuminated at the plasmonic resonance frequency, was able to significantly improve the kinetics and the specificity of the hybridization of SARS-CoV-2 nucleic acid sequences to their cDNA. A large number of false positives or false negatives have been reported with current methods of COVID-19 detection; the PPT heating is capable of inhibiting the spurious binding of nonmatching sequences, thus avoiding an incorrect diagnosis. The dual-functional biosensor exhibited a linear range between 0.1 pM and 1 mM with a detection limit of 0.22 pM, which is low enough for direct analysis of SARS-CoV-2 sequences in real respiratory samples. Multiple similar nonspecific gene sequences from SARS-CoV and SARS-CoV-2 were tested and discriminated, attesting to the high selectivity of the biosensor towards cross-reactive and interfering sequences.
Table 3 (fragment): MERS: immunosensor, amperometric, antigen spike protein S1, AuNPs/MERS-CoV antigen/MERS-CoV Ab, label-based competitive, linear range 0.001-100 ng/mL, LOD 0.4 pg/mL [74]; COVID-19: immunosensor, FET, antigen spike protein S1, Si-SiO2/graphene/SARS-CoV-2 Ab.
Another promising tool for detection of the SARS-CoV-2 viral genome is the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-associated (Cas) enzyme technology, which has allowed the detection of specific COVID-19 gene sequences with detection limits between 10 and 100 copies per microliter in less than 1 h, employing a target amplification step (RPA) [84]. Recently, a research group has announced its intention to develop an amplification-free electrochemical CRISPR biosensor for on-site COVID-19 testing, with nanomodification of the electrode surface for signal enhancement [71].
Another recent study describes a portable graphene-based electrochemical biosensor for highly sensitive POC testing of Vibrio parahaemolyticus in seafood, which could be translated to COVID-19 detection. The detection was carried out using loop-mediated isothermal amplification (LAMP) and a graphene-based screen-printed electrode (SPE). The interaction between the SPE and the amplicons results in a shift in cathodic current, which stems from the intercalation of the redox probe into double-stranded DNA. A portable mini-potentiostat is used with the SPE for on-site POC detection [85]. Most recently, PathSensors Inc. announced the development of the "Canary" fast biosensor for SARS-CoV-2 aerosol detection. The proposed platform utilizes a cell-based immunosensor that couples capture of the virus with signal amplification and provides a result in 3-5 min. It is based on a genetically engineered immune cell able to identify and bind a specific target pathogen and then light up when that pathogen is found; by measuring the light output from the cell, it is possible to determine whether the target pathogen is present in the sample. The initial application of the PathSensors device will be the testing of environmental swabs and air monitoring in sensitive spaces, such as hospitals, offices and food services. Validation data for the new biosensor will be available soon [72].
Antibody detection
A different approach to the detection of COVID-19 infection is the development of nanobiosensors for anti-SARS-CoV-2 antibody detection, obtained by functionalizing the sensor surface with the specific viral antigens. Serologic assays for SARS-CoV-2 antibodies are now broadly available and play an important role in understanding the virus epidemiology in the general population and identifying groups at higher risk of infection. Unlike direct viral detection methods that can detect acutely infected persons, antibody tests are indirect tests that help determine whether the individual being tested was ever infected, even if that person never showed symptoms, by measuring the host humoral immune response to the virus. Therefore, serologic antibody assays do not typically replace direct detection methods as the primary tool for diagnosing an active SARS-CoV-2 infection, but they do have several important applications in monitoring and responding to the COVID-19 pandemic. Demographic and geographic patterns of serologic antibody test results can help determine which communities may have experienced a higher infection rate and therefore may have higher rates of herd immunity. Moreover, serologic test results may assist in identifying persons potentially infected with SARS-CoV-2 and in determining who may donate blood that can be used to manufacture convalescent plasma, a possible treatment for the COVID-19 disease.
Like infections with other pathogens, SARS-CoV-2 infection elicits the development of IgM, IgA and IgG antibodies. IgA and IgM reach their peak 4-25 days after illness onset, whereas IgG peaks at 21-25 days; they are therefore used for diagnosis at early and late stages, respectively [73]. However, it remains uncertain how long the immunoglobulins remain detectable following infection and whether individuals with antibodies are protected against reinfection with SARS-CoV-2.
At present, no immuno- or DNA-sensors for the detection of immunoglobulins against SARS-CoV-2 are available on the market. Several nanobiosensors for the detection of specific antibodies against viral antigens or biomarkers have been extensively explored in the literature and could potentially be used for COVID-19 [86][87][88]. For example, a recent study reported the development of a label-free electrochemical biosensor with an aptamer-functionalized black phosphorus (BP) nanostructured electrode [89]. The BP nanosheets are functionalized with anti-antibody aptamers after coating with poly-L-lysine (PLL). BP-based biosensors show higher detection sensitivity and specificity than reduced graphene oxide biosensors, achieving detection limits down to the pg level versus the ng level, respectively. A similar platform could also be used for highly sensitive detection of IgG or IgM against SARS-CoV-2 in patient blood samples.
IoT biosensors
An important issue regarding the use of nanobiosensors for early POC diagnosis and prevention of COVID-19 is their capability to upload the collected data via a Bluetooth interface to an Android-based smartphone, which subsequently transfers them to the health authorities to tackle the spread of the disease, as already realized by Zhou et al. [90] for an IoT (Internet of Things) real-time PCR system for dengue fever virus spread control.
The IoT applied to medicine, also called the Internet of Healthcare Things (IoHT), encompasses medical devices and software applications connected to the Internet, offering extensive healthcare services. The IoT has opened up a world of possibilities in the medical field: when connected to the Internet, ordinary medical devices can collect invaluable additional data, give extra insight into symptoms and trends, enable remote care, track medication orders and use wearable devices to transmit health information to the relevant healthcare providers [91]. Future studies should further improve smartphone functions and dedicated smartphone apps to enable on-site data analysis while allowing data storage to track patient health status.
By combining biosensors, artificial intelligence (AI), information technology and dynamic networking devices, the IoT could provide long-distance communication between nanobiosensors, hospitals and patients, thus improving current medical practice [92]. Fig. 6 shows a schematic IoT architecture of a next-generation nanobiosensor-based diagnostics system.
At present, an IoT-based functional POC instrument for SARS-CoV-2 antibody detection is not available. POC-measured biosensor data could be automatically uploaded via Bluetooth to the patient's smartphone or tablet and then sent through global communications to a central epidemiological data centre for automated monitoring of the epidemiological situation. The data could be made ready to feed epidemiological models in order to forecast the evolution of the outbreak. Suitable modelling algorithms have already been developed through AI to predict key parameters for clinical practitioners, pooling IgG and IgM test results to provide advanced diagnosis of individuals, to characterize the progress of the illness and to help in categorizing patients, especially in the "transition zones", where the decision can be more doubtful (Fig. 6). Each validated diagnosis of a given patient, once anonymized, could be automatically transmitted to a central data station for real-time advanced monitoring of the epidemic, epidemiological management, prevention of new virus outbreaks and evaluation of the success of eventual vaccination. Recently, several contact-tracing apps have been developed, offering technological solutions to the problem of controlling the spread of the virus; they have worked reasonably well in countries such as Singapore, China, South Korea and other parts of Asia. However, in other countries privacy concerns are limiting their introduction, which could hamper efforts to control the pandemic [93].
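As a rough illustration of the data flow just described, the sketch below packages a hypothetical anonymized antibody reading into a JSON payload of the kind a smartphone app might forward to a central epidemiological server; the field names and the endpoint are assumptions, not an existing API:

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical IoHT payload: a POC antibody-test reading packaged for upload.
# Field names and the endpoint mentioned below are illustrative only.
reading = {
    "device_id": "nanobiosensor-0042",
    "patient_token": uuid.uuid4().hex,           # anonymized identifier
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "assay": "anti-SARS-CoV-2",
    "igg_titer_au_ml": 12.7,
    "igm_titer_au_ml": 3.1,
    "geohash": "u0nd9",                           # coarse location for epidemiology
}

payload = json.dumps(reading)
print(payload)
# In a real deployment the smartphone app would forward this payload, e.g.:
#   requests.post("https://epi-centre.example.org/api/v1/readings", data=payload)
```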
Conclusions and future perspectives
The incorporation of nanomaterials into affinity-based biosensing applications has been demonstrated to allow fast, sensitive and reliable detection of SARS, MERS and COVID-19. In particular, nanomaterial-based FET biosensors have enabled elevated biosensor performance in terms of high sensitivity, selectivity and low detection limits. Graphene and In2O3 nanowires have been used for the exposed gated micro-region of the FET for highly sensitive detection of SARS-CoV-2 and SARS-CoV, respectively. Other nanomaterials, such as gold nanoparticles and gold nanoislands, were successfully employed for the development of an immunosensor and a DNA sensor for MERS and COVID-19, respectively, with detection limits in the femto- to picomolar range.
An important feature of nanobiosensors with electrochemical and FET transduction is the possibility of miniaturization into inexpensive and integrated platforms, similar in operation to handheld electrochemical readers, usable for POC diagnostics. Additionally, affinity biosensors can easily be multiplexed by incorporating multiple individually addressable electrodes on the same platform, thus allowing simultaneous detections. Of course, further efforts will be required before practical applications of miniaturized POC multiplex testing become feasible.
Despite these encouraging properties, very few examples of nanobiosensors for SARS, MERS and COVID-19 detection have been developed and successfully applied in real clinical analysis so far, although many studies on COVID-19 are currently in progress. Point-of-care nanosensors for the detection of SARS-CoV-2 antibodies are under development. At the moment, only lateral flow immunoassays produced by different companies are commercially available as single-use POC tests for anti-SARS-CoV-2 antibodies [94]. They are paper-like membrane strips coated with gold nanoparticle-antigen conjugates in the conjugation pad and with antigens in the nitrocellulose membrane. A drop of the patient's blood is deposited on the sample pad and moves across the test strip by capillary action. Immobilized antibodies recognize and bind all human IgG and IgM; however, only human IgG-IgM/gold nanoparticle-antigen conjugates will produce a visible coloured line.
A further issue in nanobiosensing design should be the integration of the extraction system into the proposed biosensor to make it wearable and user-friendly. One possibility would be to translate the developed platforms into microneedle-based biosensors, which allow continuous monitoring of SARS-CoV-2 antigens, antibodies or nucleic acids in the dermal interstitial fluid of both symptomatic and asymptomatic populations. These wearable devices would provide important information on the following: (i) the prevalence of infection in the community; (ii) the development and decay of immunity in a population (the dynamics of "herd" immunity); as with individuals, antibody levels change over time (whether from natural infection or vaccination), and continuous measurements allow this to be tracked and potentially predict the likelihood of successive waves of the pandemic; (iii) the post-vaccination immune response, possibly indicating the need for a "booster" vaccination.
In summary, significant challenges are yet to be addressed, and considerable effort is worth investing in future studies on the development of IoT wearable nanobiosensors for COVID-19 detection, given their great potential to perform rapid, accurate and in situ early diagnosis and, more importantly, to track infectious diseases, thus preventing further pandemic outbreaks.
| 7,214.2 | 2020-11-05T00:00:00.000 | [ "Medicine", "Environmental Science", "Engineering" ] |
Vacuum properties of high quality value tuning fork in high magnetic field up to 8 Tesla and at mK temperatures
Tuning forks are very popular experimental tools, widely applied in low and ultra-low temperature physics as mechanical resonators and cantilevers in the study of quantum liquids, in STM and AFM techniques, etc. As an added benefit, when cooled these forks have very high Q-values, typically $10^6$, and their properties seem to be magnetic field independent. We present preliminary vacuum measurements of a commercial tuning fork oscillating at a frequency of 32~kHz, conducted in magnetic fields up to 8~T and at a temperature of $\sim 10$~mK. We found an additional weak damping of the tuning fork motion depending on the magnetic field magnitude, and we discuss the physical nature of the observed phenomenon.
Introduction
Quartz tuning forks are versatile mechanical resonators whose very high Q-values are responsible for their superior sensitivity. It is no wonder that they are applied in all sub-fields of low temperature physics, in the study of quantum liquids and solids, quantum turbulence, etc. [1,2,3,4,5,6,7,8], and even as an important part of scanning probe techniques such as AFM and STM [9,10,11]. In the latter case, the fork measurements are often carried out in vacuum and at low temperatures, with strong magnetic fields applied. Despite its obvious importance, the influence of strong magnetic fields on the vacuum resonant properties of such a tuning fork in the mK temperature range has (to our knowledge) not been studied yet. Therefore, the main aim of our work was to investigate the response of a tuning fork under such conditions.
Experimental details
To perform our measurements, a commercially available quartz tuning fork resonating at ∼32 kHz was used. In order to be able to measure the tuning fork in strong magnetic fields, a few modifications were made. Firstly, the metal can was removed and the magnetic leads were replaced with non-magnetic ones (a twisted thin copper wire pair in our case). Once out of the can, the dimensions of the tuning fork were measured with an optical microscope (see Fig. 1). Finally, the bare tuning fork was fixed on a specially designed copper holder, and this whole setup was mounted on the cold finger of our cryogen-free dilution refrigerator, an Oxford Triton 200, which is capable of cooling our samples down to 10 mK in magnetic fields up to 8 T. The orientation of the tuning fork's prongs was parallel to the applied magnetic field. The response of the tuning fork to the driving force, in the form of a piezoelectric current, was measured by a verified technique [2,12]. The driving force, i.e., an AC voltage slowly swept in frequency, was provided by an Agilent 33521A function generator. An additional attenuator attenuated this excitation voltage by 40 dB. When driven with an AC voltage at a frequency close to the fork's resonant frequency, the quartz crystal starts to oscillate and a piezoelectric current flows as a direct consequence of periodic changes of the crystal lattice polarization in time. This piezoelectric current was detected and converted to voltage by a home-made current-to-voltage (I/V) converter with a gain of $10^5$ V/A [13]. The output voltage signal from the I/V converter was measured by a phase-sensitive (lock-in) amplifier SR 830, which splits the measured signal into two phase components: the in-phase (absorption) component and the quadrature (dispersion) component relative to the reference signal provided by the above-mentioned function generator.
The typical resonant curves are shown on the right side of Fig. 1. The experimental data can be fitted using the well-known Lorentzian relations (1) to obtain the resonant frequency f_0, the frequency linewidth ∆f, and the amplitude of the piezoelectric current I_0. Constant and linear backgrounds are still present in the actual measurements and were taken into account as well [12]. The piezoelectric current I_0 is directly proportional to the velocity v of the prong tip of the tuning fork. The constant of proportionality is the tuning fork constant α, which can be determined experimentally using the expression given in [2]. Here ∆ω = 2π∆f is the angular linewidth and R is the resistance of the tuning fork at resonance, which models the damping of the fork motion and can be obtained from the fit of I_0 vs. the amplitude of the driving AC voltage U_exc. Finally, m_vac = 0.25 ρ T W L is the effective mass of one fork prong (quartz density ρ = 2659 kg·m⁻³). The resulting effective mass for our tuning fork is m_vac = 1.90645 × 10⁻⁷ kg.
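For illustration, the sketch below computes the effective prong mass from m_vac = 0.25 ρ T W L and fits a Lorentzian absorption line (with a linear background) to synthetic resonance data; the prong dimensions and the data are assumed placeholders, not the values measured in this work:

```python
import numpy as np
from scipy.optimize import curve_fit

# Effective mass of one prong, m_vac = 0.25 * rho * T * W * L (quartz density
# rho = 2659 kg/m^3).  The prong dimensions below are placeholders, not the
# dimensions measured in the paper; the paper quotes m_vac = 1.90645e-7 kg.
rho = 2659.0                          # kg/m^3
T, W, L = 0.40e-3, 0.35e-3, 3.1e-3    # thickness, width, length in m (assumed)
m_vac = 0.25 * rho * T * W * L
print(f"effective prong mass ~ {m_vac:.3e} kg")

# Lorentzian absorption component used to extract the resonant frequency f0,
# linewidth df and current amplitude I0, plus a linear background as in the text.
def absorption(f, I0, f0, df, bg0, bg1):
    return I0 * (f * df) ** 2 / ((f0**2 - f**2) ** 2 + (f * df) ** 2) + bg0 + bg1 * f

# Synthetic example data (assumed), then fit:
f = np.linspace(32700, 32800, 400)
data = absorption(f, 5e-9, 32750.0, 0.5, 1e-11, 0.0) + np.random.normal(0, 1e-11, f.size)
popt, _ = curve_fit(absorption, f, data, p0=[4e-9, 32751.0, 1.0, 0.0, 0.0])
print("fitted f0 = %.2f Hz, linewidth = %.3f Hz" % (popt[1], popt[2]))
```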
Results and discussion
All measurements were performed in vacuum at ∼10 mK. Figure 2 shows the dependences of the piezoelectric current amplitude (left) and the frequency linewidth ∆f (right) on the excitation voltage amplitude U_exc, measured at different values of the magnetic field. The dependence of the current amplitude I_0 on the excitation voltage amplitude U_exc suggests that the tuning fork resistance R (i.e., the damping of the fork motion) rises with increasing magnetic field (the corresponding slope, the conductance shown in Fig. 2, decreases). Similarly, the frequency linewidth ∆f increases as well. As the measurements were carried out in vacuum, there is no influence of an external environment on the tuning fork motion, and the tuning fork resistance in zero magnetic field, R_0, reflects only intrinsic damping processes. This intrinsic energy dissipation can be associated with shear friction as a consequence of the periodically bending quartz crystal lattice.
The linear fits to the experimental data show a non-zero offset, and a slight deviation of the frequency linewidth ∆f is observed at small excitation voltages U_exc ≤ 30 µV. Both phenomena can be attributed to the contribution of the TTL logic signal (used as the reference for the lock-in amplifier) to the excitation voltage U_exc due to the presence of capacitive coupling [12]. Thus, a non-zero piezoelectric current I_0 can be detected even for zero excitation voltage amplitude U_exc. However, the corresponding signal measured by the lock-in amplifier is continuously shifted in phase from 0° to 90° as U_exc → 0 V. This also explains the deviation of the measured linewidths ∆f from a constant value at low excitation voltage amplitudes.
The increase of the fork resistance R with rising magnetic field indicates the presence of an additional damping mechanism acting on the tuning fork motion. Assuming that the intrinsic damping process in zero field (R_0), i.e., the shear friction, is magnetic field independent, the magnetic contribution to the tuning fork resistance can be estimated simply as R_mag = R_B − R_0. Figure 3 illustrates the resulting dependence of R_mag on the applied magnetic field. What could be the origin of this additional, field-dependent damping in the quartz tuning fork? The forced motion of one prong of the tuning fork, pointing along the z axis, in zero magnetic field is described by the Euler-Bernoulli equation, where A is the prong cross-section, ρ is the density, E_m is the Young's modulus of the material, J_y is the moment of inertia of the prong cross-section, and γ characterizes the damping process. The external force F(z) is provided by the AC voltage applied to the electrodes of the tuning fork; the resulting harmonic electric field E creates a time-dependent charge polarization p(t) of the crystal lattice of the tuning fork. The time derivative of this charge polarization, ṗ(t), i.e., the piezoelectric current, carries information about the fork motion. Once a magnetic field is applied to the tuning fork there are, in general, two effects acting simultaneously on the dynamics of the fork motion. First, in our geometry (magnetic field applied parallel to the prongs of the tuning fork), the magnetic field affects the excitation force provided by the electric field E by adding a force acting perpendicular to it (qE + ṗ × B). This additional Lorentz force tends to twist the oscillating dipole moments inside the quartz crystal away from the direction of the electric field E, which effectively reduces the magnitude of the piezoelectric current and thus increases the fork's resistance. The ratio (ṗ × B)/qE defines the tangent of the angle θ by which the oscillating dipoles are twisted. Then, using this simplest physical picture and assuming that θ is small, the magnetic contribution to the tuning fork resistance can be expressed in the form R_mag = a_1 B + a_2 B³, where a_1 and a_2 are constants. The second effect is damping associated with the generation of eddy currents in the fork's metallic electrodes during its oscillating motion. An EDAX analysis showed that the electrodes consist of Cr, Ag and Sn. The damping caused by eddy currents is proportional to b B².
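For reference, a standard damped, driven Euler-Bernoulli form consistent with the symbols defined above is the following (an assumed form taken from general beam theory, not quoted from this work):

```latex
% Assumed damped, driven Euler-Bernoulli equation for the prong deflection u(z,t);
% symbols follow the definitions given in the text.
\[
  \rho A \,\frac{\partial^{2} u}{\partial t^{2}}
  + \gamma \,\frac{\partial u}{\partial t}
  + E_{m} J_{y} \,\frac{\partial^{4} u}{\partial z^{4}}
  = F(z,t)
\]
```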
The lines in Fig. 3 show the fits to the experimental data considering each mechanism (dipole twist and eddy currents) separately and acting together (without the B³ term). As follows from the fits, the contribution of the eddy currents alone does not describe the experimental data properly, so the additional contribution originating from the dipole twist needs to be considered as well; moreover, the dipole-twist mechanism is capable of describing our experimental data by itself. Both mechanisms should depend on the relative orientation of the tuning fork and the magnetic field, which could help to discriminate between the two effects. However, the most important fact emerging from the data analysis is that in our configuration (tuning fork prongs oriented parallel to the magnetic field), the magnitude of the α constant is almost independent of the magnetic field, within an error of ∼5% (see Table 1).
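A minimal sketch of the model comparison described here, using invented (B, R_mag) values rather than the measured data, might look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (B, R_mag) points illustrating the comparison made in the text;
# these are not the measured values from the paper.
B = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])          # Tesla
R_mag = np.array([0.0, 2.1, 4.5, 7.4, 10.9, 15.2, 20.4, 26.7, 34.3])  # kOhm

def dipole_twist(B, a1, a2):       # R_mag = a1*B + a2*B^3
    return a1 * B + a2 * B**3

def eddy_currents(B, b):           # R_mag = b*B^2
    return b * B**2

p_twist, _ = curve_fit(dipole_twist, B, R_mag)
p_eddy, _ = curve_fit(eddy_currents, B, R_mag)

for name, model, p in [("dipole twist", dipole_twist, p_twist),
                       ("eddy currents", eddy_currents, p_eddy)]:
    rss = np.sum((R_mag - model(B, *p)) ** 2)
    print(f"{name}: params = {np.round(p, 3)}, residual sum of squares = {rss:.2f}")
```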
Conclusions
We have shown that the fork constant α of our 32 kHz quartz tuning fork is almost independent of the applied magnetic field. This result can be of great importance for measurements performed in high magnetic fields, e.g., in various scanning probe techniques utilizing quartz tuning forks as probes. The origin of the additional damping of the tuning fork motion in the presence of a magnetic field is still an open question; to elucidate this problem, more experiments with different relative orientations of the tuning fork's prongs with respect to the magnetic field are needed.
| 2,360.4 | 2014-07-17T00:00:00.000 | [ "Physics" ] |
Joint Resource Allocation and Power Control Based on Vehicle’s Motion Characteristics in NOMA-Based V2V Systems
Due to the high spectrum utilization of Nonorthogonal Multiple Access (NOMA), it has become one of the potential candidate technologies for future wireless communication systems. Meanwhile, in New Radio, Vehicle to Everything (V2X) has been proposed as a promising issue in the 3rd Generation Partnership Project (3GPP). This paper studies a resource allocation mechanism with a power control strategy which makes full use of vehicles' moving characteristics in the NOMA-based Vehicle to Vehicle (V2V) communication system. Firstly, vehicles are grouped according to their moving characteristics by spectral clustering. Then, vehicles in the same group are allocated the same wireless resource with the NOMA strategy. Two grouping methods have been designed for freeway and urban scenarios separately. After that, the transmission power of the vehicles is adjusted based on the result of a power control strategy utilizing Q-learning. The simulation results show that the performance of the V2V system in terms of Packet Received Ratio (PRR) can be evidently improved by the proposed joint NOMA resource allocation and power control mechanism compared to a typical energy-sensing-based resource allocation method.
Introduction
With the development of Intelligent Transport Systems (ITS), Vehicle to Vehicle (V2V) communication has been the focus of intensive research for several years. Due to increasingly scarce wireless spectrum resources and a growing number of vehicles, resource allocation scheme design has become a research focus in both academia and industry.
Among academic studies, graph theory and optimization theory have been used the most. For example, the resource allocation problem was transformed into a maximum weight matching problem in [1], while it was formulated as a three-dimensional matching problem in [2]. On the industrial side, the dedicated group within the 3rd Generation Partnership Project (3GPP) has been carrying out the standardization work for V2V communications. In the 80th Radio Access Network (RAN) meeting of 3GPP, New Radio (NR) Vehicle to Everything (V2X) was put forward, based on the standards specified since Release 14 for Long-Term Evolution (LTE) V2X. In past meetings, 3GPP RAN has discussed several resource allocation mechanisms. These include mechanisms in which the Base Station (BS) schedules resources among vehicles dynamically and mechanisms by which vehicles select resources autonomously without aid from the BS; the two types are referred to as Mode 1 and Mode 2, respectively, in 3GPP [3]. Among all discussed resource selection methods, the energy-sensing-based mechanism [4] is the most typical one; making use of vehicles' geographical position has also been another resource selection method put forward by 3GPP [5,6]. In the NR communication system, the whole frequency bandwidth is divided into subcarriers. In the time domain, one transmission period includes dozens of time slots. Twelve subcarriers correspond to one Resource Block (RB) in the frequency domain. Vehicles transmit signals on specific RBs in specific time slots. Considering the limited bandwidth that V2V communication can use, for example, 10 MHz, if many vehicles exist or frequent interaction is required, the
System Model and Problem Formulation
2.1. System Model. In the V2V broadcast system, each vehicle broadcasts its messages, while the others receive the messages and try to decode them, as shown in Figure 1. T_1 and T_2 transmit their messages in the form of data packets; R_1, R_2, and R_3 attempt to receive packets from T_1 as well as packets from T_2. SINR_ij, which denotes the Signal to Interference plus Noise Ratio (SINR) at receiver j from transmitter i, can be calculated by
SINR_ij = P_i |h_ij|² / (N_0 B_0 + Σ_{k=1, k≠i}^{N} α_ik s^j_{i,k} P_k |h_kj|²).   (1)
In (1), P_i is the transmitting power of transmitter i, |h_ij|² denotes the channel coefficient from transmitter i to receiver j, N_0 is the one-sided power spectral density of the Additive White Gaussian Noise (AWGN), B_0 is the bandwidth that transmitter i uses to transmit messages, and N represents the number of vehicles in the V2V communication system. The binary variable α_ik equals one when transmitters i and k transmit messages on the same wireless resource at the same time (1a). Another binary variable, s^j_{i,k}, shows the result of applying SIC: when P_i|h_ij|²/n_ij > P_k|h_kj|²/n_kj, vehicle j decodes the message from vehicle i first and decodes the message from vehicle k afterwards; in this situation s^j_{i,k} equals one, which means that interference from transmitter k exists. Otherwise, s^j_{i,k} equals zero, representing interference cancelled with SIC (1b).
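As an illustration of the SIC decoding rule described above, the following sketch computes per-transmitter SINRs at a single receiver under ideal successive cancellation; the powers, channel gains and noise level are toy values, and the function is a simplification of Eq. (1):

```python
import numpy as np

def sic_sinrs(P, h2, N0B0):
    """SINR at one receiver for each co-channel transmitter, assuming the
    receiver decodes in the optimal order of decreasing received power and
    cancels each decoded signal (ideal SIC).  P[i] is the transmit power of
    transmitter i and h2[i] the channel gain |h_i|^2 to this receiver."""
    rx = P * h2                               # received powers
    order = np.argsort(rx)[::-1]              # strongest decoded first
    sinr = np.zeros_like(rx)
    remaining = rx.sum()
    for i in order:
        remaining -= rx[i]                    # this signal is being decoded now
        sinr[i] = rx[i] / (N0B0 + remaining)  # weaker, not-yet-decoded ones interfere
    return sinr

# Toy example with two co-channel transmitters (values assumed):
P = np.array([0.2, 0.2])            # W
h2 = np.array([1e-6, 1e-9])         # |h|^2 to this receiver
print(sic_sinrs(P, h2, N0B0=1e-13))
```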
In Figure 1, when T_1 and T_2 use the same resource in the NOMA manner, R_1 can decode the packets from both T_1 and T_2 successfully. Because R_1 is close to T_1 and far from T_2, it can decode the messages in the optimal order of decreasing channel gains normalized by the noise. More specifically, R_1 decodes the signal s_1 from T_1 first, treating the signal s_2 from T_2 as interference, when it receives the superimposed signal s from both T_1 and T_2. After the signal s_1 from T_1 is decoded, it is cancelled from the signal s before R_1 subsequently decodes the signal s_2 from T_2; that is, it subtracts s_1 from s and then decodes s_2. Therefore, the SINR of receiver R_1 when decoding the signal from T_2 is SINR_21 = P_2|h_21|²/(N_0 B_0) rather than SINR_21 = P_2|h_21|²/(N_0 B_0 + P_1|h_11|²), and the same holds for the SINR of R_2. However, R_3 most likely cannot decode any signal, neither from T_1 nor from T_2, because it receives almost the same power from T_1 and from T_2. In the V2V communication system, safety-related messages are broadcast by vehicles periodically. In 3GPP, PRR is defined as the typical performance metric, namely the statistical average of the probability of all packets being received successfully. In essence, the probability of a packet being decoded correctly depends on the SINR at the receiver: the higher the PRR, the more reliable the communication between vehicles.
Problem Formulation and Analysis.
In the 3GPP regulations, one communication type is broadcast: a vehicle acting as transmitter broadcasts its messages, while the other vehicles act as receivers. As mentioned above, PRR is the statistical average of the probability of all packets being decoded successfully. Hence, the objective of resource allocation and power control is to maximize the PRR of all vehicles. Furthermore, since the probability of a packet being correctly decoded basically depends on the SINR at the receiver, the goal translates into maximizing the total SINR of all vehicles. The relationship between PRR and SINR is shown in Figure 2, which illustrates the feasibility of replacing PRR with SINR as the objective of resource allocation and power control. Assume that there are N vehicles in the V2V system. The total available bandwidth is divided into M subchannels (one subchannel consists of several RBs) and one transmission period is divided into T slots. In this paper, one subchannel in the frequency domain and one time slot in the time domain together define a Resource Block Group (RBG), which is the basis for resource allocation. Therefore, there are in total M × T RBGs in one period. Assume that the BS has perfect channel state information for all vehicles in the system via dedicated feedback channels. Each vehicle i transmits packets with power P_i on the RBG allocated by the BS. All vehicles except the transmitters decode messages in the optimal order of decreasing channel gains, as discussed in Section 2.1. For each slot, specific subchannels are allocated to vehicles to transmit packets, and the receivers' SINRs are calculated. The total SINR of all vehicles within one period can be calculated by (2).
In this formulation, the set Tr represents the vehicles using subchannel m in slot t to transmit messages, while the set Re represents the other vehicles receiving messages (2). Constraint (2a) corresponds to the transmission limitation that one vehicle should transmit packets once per period on only one subchannel, which means each vehicle uses one RBG in each period for transmission. Constraint (2b) means that each RBG can be assigned to at most c users in the NOMA manner, c being the maximum number of vehicles using the same RBG; the larger c is, the more complicated the decoding process and the receiver become, and c equal to one means no resource collision happens. The binary variable x^i_{m,t} equals one only when subchannel m in slot t is allocated to vehicle i (2c). Constraint (2d) shows the result of using SIC in the NOMA manner: when P_i|h_ij|²/n_ij > P_k|h_kj|²/n_kj, the binary variable s^j_{i,k} equals one, which means interference from transmitter k exists; s^j_{i,k} equal to zero represents interference that has been cancelled with SIC. Constraint (2e) states that the transmission power of each vehicle should not exceed the maximum power P_max. The optimization objective shown in (2) is a non-convex and NP-hard problem, because the variables are binary and interference is present. However, an upper bound of (2) can be given, namely the situation without interference between transmitting vehicles; because of the existence of interference, this upper bound can never be achieved. Therefore, a joint resource allocation and power control algorithm based on machine learning is proposed in the next section to solve (2).
Resource Allocation Mechanism and Power Control Strategy
It is not easy to find the optimal solution under such constraints. Even with a greedy method, the optimal solution at one moment is not necessarily the optimal solution at the next moment, because the distance between any two vehicles changes over time. To simplify the problem in (2), it is decoupled into two stages. In the first stage, assuming that the transmit powers of all vehicles are the same, resource allocation based on the vehicles' moving characteristics is conducted. In the second stage, power control is further performed according to the resource allocation results obtained in stage one.
In this paper, two typical scenarios, freeway and urban, are considered, corresponding to comparatively simple and complicated vehicle traffic conditions. In different traffic scenarios, the importance of the different moving characteristics of vehicles differs. For example, vehicles running on a highway rarely change their moving status, such as direction and speed, and in most cases the link type between a transmitter and a receiver is Line of Sight (LOS). However, in the urban scenario, vehicles may change their driving direction at any crossing and their speed can change at any time due to, for example, traffic jams and traffic lights; even the link type between two vehicles may change because of possible building blockage. Therefore, different user grouping methods are proposed in this paper to cover the different scenarios.
Resource Allocation Mechanism for Freeway.
As mentioned above, the key step in the design of the resource allocation mechanism is user grouping. In the freeway scenario, there are several design considerations. Firstly, vehicles that are as far away from each other as possible should be chosen to use the same RBG in the NOMA manner. The reason is that when two transmitters allocated to the same resource are near each other, their neighbours acting as receivers cannot receive and decode packets successfully from either of them because of the large mutual interference. To reduce the occurrence of this situation, a parameter c is defined to represent the minimum distance between vehicles sharing the same resource.
In user grouping, receivers which have similar distances to the transmitting vehicles should also be taken into account. The reason is that such receivers usually cannot decode the messages from any of the transmitters; for example, R_3 in Figure 1 most likely cannot decode the message from either T_1 or T_2.
A centralized scheduling mechanism that groups the vehicles according to their moving features is proposed based on the above considerations. The vehicles in the same group are allocated the same resource in the proposed resource scheduling algorithm. Vehicles in the same group are expected to have similar speeds and similar moving directions; furthermore, the distance between them should be larger than c. By this means, the distance between members of the same group is relatively stable, and the interference caused by a short distance between transmitters sharing the same resource can be avoided. The detailed resource allocation algorithm is described as follows. In the first step, all vehicles are divided into several categories according to their speed and direction; vehicles having similar direction and speed are in the same category G_f. The roads on a freeway are mainly designed for two or three kinds of vehicle speeds, such as the carriageway and the passing lane for relatively slow and fast vehicles, respectively; thus, two kinds of speeds are adopted in the simulation in Section 4.1 to model a real freeway. The more dispersed the vehicle speeds are, the greater the number of categories G_f will be and the fewer vehicles each category has, and vice versa. The following steps are performed in each category G_f, and the resource allocation algorithm ends when each vehicle belongs to a group. Hence, the proposed resource allocation mechanism is not affected by how dispersed or concentrated the vehicles' speeds are.
In the second step, we decide which vehicles can be in the same group. A vehicle j is randomly chosen at the beginning of the algorithm; suppose that vehicle j is in the r-th group, g_r. Vehicles that can be in the same group as vehicle j should be in the same category as vehicle j. We then check each vehicle in this category according to (3), and the vehicle with the maximum argument is selected to join the same group as vehicle j. Supposing there are N vehicles in the system, (3) ensures that vehicle i is in the same category as j and is far away from j, while the number of receivers that have similar distances to the transmitters using the same resource is minimized.
In (3), dm_ij denotes the minimum distance between vehicle i and the vehicles in the same group as j (3a); Σ_{k=1, k≠i,j}^{N} f_kij denotes the number of vehicles that have almost equal distances to vehicle i and vehicle j; d_ij denotes the distance between vehicle i and vehicle j; the binary variable f_kij equals one when the difference between the distance from receiver k to transmitter i and the distance from receiver k to transmitter j is within α metres, in which case such a receiver k basically cannot decode messages from either transmitter correctly (3b); the binary variable x_ij equals one when i and j are in the same category (3c); vehicle i should be far away from j, and the distance between them should be larger than c (3d). Considering that the number of vehicles having similar distances to the two transmitters can be quite small compared to the distance between the two transmitters, f_kij is suitably magnified through multiplication by β (β > 1).
Searching is repeated until no vehicle in the same category satisfies (3). Then the vehicle in the same category as j which has the minimum distance to all vehicles in the previous group is taken as the first element of the next group. Checking the satisfaction of (3) and finding the first member of the next group is repeated until all vehicles in this category have been assigned to groups. Then another category is chosen and the above steps are repeated until all vehicles are members of groups.
Vehicles in the same group use the same resource. If the number of groups is larger than the number of resources, the value of c is repeatedly decreased and the vehicles are grouped until the number of groups is slightly less than the number of resources.
In order to get better performance with scarce spectrum, the minimum distance between vehicles sharing the same resource is determined by the above steps, which means that this distance depends on the amount of wireless resources: if the wireless resources are adequate for the vehicles' information transmission, the minimum distance between vehicles sharing the same resource will be larger than in a situation with few wireless resources. The detailed description of Algorithm 1, the vehicle grouping algorithm, is given below. A vehicle keeps transmitting on the allocated resource until it changes its motion status, for example by leaving the road or changing speed or direction. Once this happens, it leaves its original group and needs to be regrouped: the group that satisfies (3) is the group for it to join; if no group satisfies (3), it becomes the only member of a new group when idle resources exist; and when no group satisfies (3) and no resource is idle, the group that satisfies (3) with constraint (3d) relaxed is the group for it to join. The whole RBG reallocation algorithm for vehicles changing their moving status is shown in Algorithm 2.
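A hedged sketch of the greedy grouping idea behind Algorithm 1 and Eq. (3) is given below; the scoring rule, the parameter values and the 1-D highway positions are illustrative simplifications rather than the exact formulation of the paper (for instance, the equidistant-receiver penalty is computed only against the group seed, and `min_dist` plays the role of the minimum-distance threshold of the text):

```python
import numpy as np

def group_vehicles(pos, category, min_dist=300.0, alpha=10.0, beta=50.0):
    """Greedy grouping sketch: vehicles in the same speed/direction category
    are added to a group only if they stay at least `min_dist` metres from
    every current member, favouring candidates that are far away and leave
    few receivers roughly equidistant (within `alpha` metres) from the pair."""
    n = len(pos)
    dist = np.abs(pos[:, None] - pos[None, :])        # 1-D highway positions
    unassigned = set(range(n))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = [seed]
        unassigned.remove(seed)
        while True:
            best, best_score = None, -np.inf
            for i in unassigned:
                if category[i] != category[seed]:
                    continue
                dmin = min(dist[i, j] for j in group)
                if dmin <= min_dist:
                    continue
                # receivers k almost equidistant from i and the seed are penalized
                n_amb = sum(1 for k in range(n)
                            if k not in (i, seed) and abs(dist[k, i] - dist[k, seed]) < alpha)
                score = dmin - beta * n_amb
                if score > best_score:
                    best, best_score = i, score
            if best is None:
                break
            group.append(best)
            unassigned.remove(best)
        groups.append(group)
    return groups

# Toy example: positions (m) along a freeway and two speed categories (assumed).
pos = np.array([0, 120, 400, 430, 800, 950, 1300], dtype=float)
category = np.array([0, 0, 0, 1, 0, 0, 0])
print(group_vehicles(pos, category))
```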
Resource Allocation Mechanism for Urban Scenario.
In the urban scenario, the motion characteristics of vehicles are more complicated than in the freeway scenario. Vehicles change driving direction and speed frequently, and the link type between two vehicles may change unpredictably due to, for example, blocking by a building or a tree. In the user grouping algorithm, these characteristics should be taken into account in addition to the distance considered in the freeway scenario.
Spectral clustering (SC), as an unsupervised method, can partition data into different groups according to multiple features. In most cases, the strength of SC lies in the design of the metric function and the affinity graph [14]. Given the constant and unpredictable traffic changes in the urban scenario, SC is therefore adopted in this paper to solve the resource allocation problem, regarding vehicles as data points. The vehicles' geographical position, driving direction, speed, and communication link type are the main motion features taken into consideration.
Apart from selecting proper motion features, an appropriate weight should be assigned to each feature, taking into account its influence on the communication between vehicles, in order to design the metric function. Properly grouping vehicles in the urban scenario requires quantizing the vehicle features and building appropriate weights between vehicles to assess their similarity. According to this similarity, the vehicles are clustered, for example by the BS, during each transmission period. After clustering, vehicles in the same cluster share the same resource in the NOMA manner.
At the beginning of the next part, a brief introduction to the normalized cut, which is the major step in SC, is given. After that, feature selection and metric establishment are described, and the resource allocation algorithm for the urban scenario is proposed.
Brief Introduction of Normalized Cut.
Given a data set X with n d-dimensional samples, X = {x_1, x_2, ..., x_n} ∈ R^{d×n}, the clustering algorithm groups X into k clusters c_i, i = 1, ..., k, with the aim of keeping data within the same cluster close to one another while data points from different clusters remain apart. That is to say, the normalized cut not only minimizes the weight of the edges between different clusters but also maximizes the weight of the edges within each cluster [15]. The main steps of the normalized cut are as follows (a minimal sketch of the remaining standard steps is given after the list): (1) Construct similarity matrix S.
(2) Construct adjacency matrix W and degree matrix D.
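The remaining steps of the standard normalized-cut recipe (normalized Laplacian, eigenvectors, k-means on the spectral embedding) are not spelled out in the text; the sketch below assumes that standard completion and applies it to a toy affinity matrix:

```python
import numpy as np
from sklearn.cluster import KMeans

def normalized_cut_clusters(W, k):
    """Standard normalized spectral clustering on a precomputed affinity
    matrix W: build the degree matrix, form the symmetric normalized
    Laplacian, take the k smallest eigenvectors and run k-means on the
    row-normalized spectral embedding."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)            # ascending eigenvalues
    emb = vecs[:, :k]
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

# Toy affinity matrix for 6 vehicles (values assumed) and 2 clusters:
W = np.array([[0, 5, 4, .1, .2, .1],
              [5, 0, 6, .1, .1, .2],
              [4, 6, 0, .2, .1, .1],
              [.1, .1, .2, 0, 7, 5],
              [.2, .1, .1, 7, 0, 6],
              [.1, .2, .1, 5, 6, 0]], dtype=float)
print(normalized_cut_clusters(W, k=2))
```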
Features Selection
Feature 1: Distance between vehicles. Similar to the freeway scenario, the relative position of vehicles, namely the distance, is still important. In vehicle grouping, vehicles sharing the same resource are expected to be as far away from each other as possible. The weight calculated from the distance feature is designed to be proportional to the distance: the closer two vehicles are, the less likely they are to be in the same cluster. Feature 2: Speed of vehicles. Speed is another important feature because the speed of vehicles affects the distance between them dynamically. When there is a speed difference between vehicles, the distance between them changes drastically and continuously.
This brings about uncertainty in the distance. For instance, two vehicles may be far away from each other at the beginning and move in the same direction; if they are allocated the same resource without considering their speed, their distance may become small after a short time if the rear vehicle is faster than the front one. Thus, the weight should be inversely proportional to the speed difference, which ensures that vehicles with a large speed difference end up in the same cluster with low probability. Considering that the urban scenario contains plenty of vehicles with different speeds, Simulation of Urban Mobility (SUMO) is used in Section 4.2 to model the real urban scenario; the speed of each vehicle is the result of normal driving in compliance with traffic rules. Feature 3: Moving direction of vehicles. Similar to speed, the moving direction also affects the change in distance between vehicles, such as approaching or separating. Feature 4: Type of communication link between vehicles. The link types referred to here are LOS and Non-Line-of-Sight (NLOS). If the link between two vehicles is blocked by buildings or other obstacles, the link is regarded as NLOS; otherwise, it is LOS. When the communication link between the transmitting vehicles is NLOS, the communication link types between a receiving vehicle and each transmitting vehicle are more likely to be different; thus, the received power values from the different transmitting vehicles are more likely to show a larger difference, which helps the receiving vehicle to decode the messages from both transmitters in the NOMA manner. Therefore, vehicles with an NLOS link are expected to share the same resource to reduce interference.
[Algorithm 1: Vehicle grouping algorithm. Input: the sets of vehicles G_f based on the moving features; u = |G_f|; d_ij denotes the distance between i and j; N is the number of vehicles. Output: the groups g_r in which vehicles use the same RBG.]
Algorithm 2: RBG reallocation algorithm. Input: vehicle i, which changes its moving status; m = |g_r|, the number of groups, depends on Algorithm 1. Output: the new group i belongs to. (1) for each r ∈ [1, m] do (2) if i ∈ g_r then (3) g_r = g_r \ {i} (4) end if (5) end for (6) if g_r satisfies (3) then (7) g_r = g_r ∪ {i} (8) else if m < number of RBGs then (9) g_{r+1} = {i}, m = m + 1 (10) else if g_r satisfies (3) without (3d) then (11) g_r = g_r ∪ {i} (12) end if
Metric Function Establishment.
There are several strategies to construct the adjacency matrix. Among them, the most common way to compute the adjacency matrix, namely the weight matrix, is the fully connected graph (see the following equations). Considering that different features have different effects on V2V communication, the method to compute the weight matrix is designed upon the different features.
Here, v_i is the numerical value of vehicle i's speed and d_i is the numerical value of vehicle i's driving direction; the four moving directions, up, down, left, and right, are represented by 1, 2, 3, and 4, respectively; Σ is the two-dimensional covariance matrix. The similarity of vehicle speed and moving direction between vehicle i and vehicle j is measured by the Mahalanobis distance, which ensures that each variable is independent of the measurement scale. w^dis_ij represents the weight calculated from distance, d_ij is the distance between i and j, σ_1 and σ_2 are fixed, and σ_2 is larger than σ_1. The closer the distance between vehicles sharing the same resource is, the more interference there will be for receivers receiving messages from either of them. As can be seen in [16], interference is relatively small when the distance between transmitting vehicles using the same resources is larger than 300 metres; therefore, 300 metres is used as the demarcation point σ_1 of the piecewise weight calculation formula derived from distance.
Thus, here σ_1 = 300 and σ_2 = 600. The value of w^dis_ij is large when the distance is large: when d_ij is larger than σ_1, w^dis_ij equals exp((d_ij/σ_2) − 1); when d_ij is smaller than σ_2 and larger than σ_1, w^dis_ij is smaller than one; when d_ij is larger than σ_2 (and hence larger than σ_1), w^dis_ij is larger than one; otherwise w^dis_ij equals zero. Because the product of w^dis_ij and the other terms equals w_ij, w^dis_ij > 1 means that the possibility of vehicle i and vehicle j being divided into the same group, and accordingly using the same wireless resources for transmission, is increased; otherwise, they are less likely to be divided into the same group. Because the vehicles operate in half-duplex mode, vehicles within a certain range cannot send information at the same time, or they definitely cannot receive information from each other. Thus, when d_ij ≤ σ_1, w^dis_ij = 0 means that the possibility of vehicle i and vehicle j being divided into one cluster and using the same wireless resources for transmission is basically zero (5a). In (5b), w^l_ij refers to the weight calculated according to the type of communication link between vehicle i and vehicle j, l_ij represents the type of communication link between vehicle i and vehicle j, and σ_3 (σ_3 > 1) and σ_4 (σ_4 < 1) are fixed, for example σ_3 = 1.2 and σ_4 = 0.9: when the communication link from vehicle i to vehicle j is NLOS, the possibility of their using the same wireless resources for information transmission is increased; otherwise, it is decreased.
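Putting the pieces together, a sketch of the feature-based weight matrix might look as follows; the Gaussian kernel used for the speed/direction Mahalanobis term is an assumption, since the corresponding equation is not reproduced above, and all numerical inputs are toy values:

```python
import numpy as np

def weight_matrix(pos, speed, direction, nlos, sigma1=300.0, sigma2=600.0,
                  sigma3=1.2, sigma4=0.9, cov=np.diag([25.0, 1.0])):
    """Sketch of the feature-based affinity used for urban clustering.
    w_ij multiplies a distance term (5a), a link-type term (5b) and a
    Gaussian kernel of the Mahalanobis distance between (speed, direction)
    vectors; the exact form of the motion kernel is an assumption."""
    n = len(pos)
    inv_cov = np.linalg.inv(cov)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            w_dis = 0.0 if d <= sigma1 else np.exp(d / sigma2 - 1.0)
            w_link = sigma3 if nlos[i, j] else sigma4
            diff = np.array([speed[i] - speed[j], direction[i] - direction[j]])
            w_motion = np.exp(-diff @ inv_cov @ diff)   # assumed Gaussian kernel
            W[i, j] = W[j, i] = w_dis * w_link * w_motion
    return W

# Toy example (assumed values): 3 vehicles with 2-D positions, speeds (m/s),
# direction codes (1-4) and a symmetric NLOS indicator matrix.
pos = np.array([[0.0, 0.0], [400.0, 0.0], [50.0, 30.0]])
speed = np.array([12.0, 13.0, 8.0])
direction = np.array([1, 1, 3])
nlos = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=bool)
print(np.round(weight_matrix(pos, speed, direction, nlos), 3))
```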
Resource Allocation Mechanism. Algorithm 3 (the vehicle clustering algorithm), based on the normalized cut and shown below, is conducted at the beginning of each transmission period.
Vehicles in the same partition share the same resources; therefore, the number of clusters k equals the number of RBGs.
Signaling Process for Resource Allocation Mechanism.
The resource allocation mechanism proposed in this paper is carried out by the BS, which means that the BS needs to collect information about the vehicles. After the discussion in the 3GPP 94th meeting, consensus was reached that NR supports users (UEs) reporting assistance information to the next generation Node B (gNB) [17]. The whole signaling process is shown in Figure 3. In the beginning, each vehicle reports its geographic and speed-related information to the BS. Next, the BS executes the resource allocation mechanism. Then, the BS transmits a message with the resource allocation results to the vehicles, and the vehicles receive that message. After decoding the corresponding message, each vehicle knows which RBG it can use to transmit messages to other vehicles.
Power Control Based on Q-Learning.
Q-learning is a model-free reinforcement learning algorithm that can learn an optimal policy by maximizing the expected reward, and it shows very good performance in complex systems. In this paper, Q-learning is introduced to solve the power control problem.
In Q-learning, state, action, and reward are the three main elements; detailed background on Q-learning can be found in [18]. In this NOMA-based V2V power control problem, the state is formulated as the set of transmission power values of all transmitting vehicles using the same wireless resource. In order to limit the number of elements in the state set, it is assumed that a vehicle can only use one of the discrete power values in P to send information. Because the upper limit of the transmission power of each vehicle is P_max, each possible power value p_i is smaller than P_max. When K vehicles use the same resource, the state set S consists of the K-tuples of such power values. The action set is the set of changes of the transmission power of all transmitting vehicles using the same resource. In order to limit the number of elements in the action set, the transmission power of each transmitting vehicle can only be reduced, kept unchanged, or increased, represented by −1, 0, and 1, respectively; when K vehicles use the same resource, there are 3^K possible actions, a_i being the power-changing action of vehicle i. The upper limit of the transmission power of each vehicle is P_max: if a vehicle transmitting with power value p_i would exceed P_max by increasing p_i further, the action of increasing the transmission power is not taken; similarly, if decreasing p_i further would make the transmission power equal to or less than zero, the action of reducing the transmission power is not taken.
The reward is the sum of the SINRs with which the other vehicles receive messages from the transmitting vehicles, since the probability of the information being decoded successfully is proportional to the SINR; when K vehicles use the same resource, this SINR sum is given by (9). A greedy search which balances exploration and exploitation is used: the exploration rate ε starts at one and decreases gradually; the action is selected based on the information already learned when the random number generated in each step is larger than ε, and otherwise the action is chosen randomly. The power control strategy learning process based on Q-learning is shown in Algorithm 4. When the estimated value function of the t-th sample, Q_t(s_t, a_t), and the (t+1)-th reward r_{t+1} are known, Q_{t+1}(s_t, a_t) can be obtained by the incremental update (10), where α_t is the learning rate, which indicates how fast the old value is given up: Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + α_t [r_{t+1} − Q_t(s_t, a_t)].
After learning, the action that gets the maximum Q value is the power control strategy.
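A minimal tabular sketch of this Q-learning power-control loop, for K = 2 co-channel vehicles, is shown below; the power levels, the stand-in reward function and the hyperparameters are illustrative, and the update follows the incremental rule of Eq. (10):

```python
import itertools
import random

# Discrete power set P (dBm) and the 3^K joint actions {-1, 0, 1}^K for K = 2.
P_levels = [5.0, 10.0, 15.0, 20.0, 23.0]
actions = list(itertools.product([-1, 0, 1], repeat=2))

def reward(powers):
    # Stand-in for the SINR sum of Eq. (9); a real system would compute the
    # receivers' SINRs from channel gains and interference.
    return sum(powers) - 0.3 * abs(powers[0] - powers[1])

def step(state, action):
    """Apply the power changes, clipping moves that would leave the power set."""
    return tuple(P_levels[min(max(P_levels.index(p) + a, 0), len(P_levels) - 1)]
                 for p, a in zip(state, action))

Q = {}
alpha, eps = 0.1, 1.0                      # learning rate and exploration rate
state = (P_levels[0], P_levels[0])
for t in range(5000):
    if random.random() < eps:
        a = random.randrange(len(actions))                       # explore
    else:
        a = max(range(len(actions)), key=lambda i: Q.get((state, i), 0.0))
    nxt = step(state, actions[a])
    r = reward(nxt)
    q = Q.get((state, a), 0.0)
    Q[(state, a)] = q + alpha * (r - q)    # incremental update of Eq. (10)
    state, eps = nxt, max(0.05, eps * 0.999)

best = max(range(len(actions)), key=lambda i: Q.get((state, i), 0.0))
print("final powers:", state, "preferred action:", actions[best])
```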
Simulation Results
In this section, the proposed joint resource allocation and power control mechanism is evaluated through system-level simulation.
Freeway Scenario.
In this section, the proposed mechanism is evaluated under the freeway scenario defined by 3GPP [19] and shown in Figure 4. The major simulation parameters are shown in Table 1.
[Algorithm 3: Vehicle clustering algorithm. Input: the moving features of the vehicles: d_ij denotes the distance between vehicle i and vehicle j; v_i is the speed of vehicle i; d_i is the direction of vehicle i; l_ij represents the communication link type between vehicle i and vehicle j; N is the number of vehicles. Output: the partition G.]
Here, the sensing mechanism is selected as the comparison algorithm; it is one kind of resource allocation scheme regulated by 3GPP. Each vehicle senses the energy on every RBG and ranks the RBGs from low to high energy. When reallocation happens, the vehicle switches to one of the RBGs corresponding to the lowest 20% of energy values, as long as the 20th-percentile minimum energy value is 3 dB less than the energy value of the RBG it is currently using [1]. This mechanism requires the vehicles to sense energy constantly. Figure 5 shows the PRR performance of the sensing mechanism and the proposed mechanisms. It is clear that the PRR of the proposed mechanism, with or without the power control strategy, is higher than that of the sensing mechanism; among the three mechanisms, the proposed mechanism with power control has the best performance. When the distance between vehicles is larger than 250 m, the performance of the sensing mechanism decreases greatly compared to that of the proposed mechanisms. The reason is that the vehicles' moving characteristics are fully taken into consideration in the proposed resource allocation, which makes vehicles that are as far apart as possible use the same resource.
To further evaluate the resource utilization efficiency, the utilization ratio of resources is defined as the number of RBGs that have been allocated to vehicles divided by the total available number of RBGs. As shown in Figure 6, the RBG utilization ratio is constant in the proposed mechanism, while the utilization ratio changes frequently for the sensing mechanism. This means that resource reallocation happens more frequently in the sensing mechanism than in the proposed mechanism. In the freeway scenario, where vehicles do not easily change their moving characteristics, the proposed mechanism, which captures these steady characteristics, has very stable performance.
To get better performance in the V2V communication system, fewer vehicles should be allocated to the same resource and the distance between them should be large, to limit interference. Figures 7 and 8 show the relationship between the number of vehicles sharing the same resource and their distance at different times for the sensing mechanism and the proposed mechanism, respectively. The distance between vehicles sharing a resource in the proposed mechanism is relatively larger, while in the sensing mechanism the vehicles sharing a resource are distributed evenly along the whole length of the road. It is clear that the proposed mechanism, which takes the distance characteristic into account in the resource allocation, has better performance.
Urban Scenario.
The joint resource allocation and power control mechanism for the urban scenario is evaluated using the real map of Manhattan in Figure 9. The simulation scenario is further abstracted by SUMO [20] as shown in Figure 10. The moving features of the vehicles are also obtained by SUMO according to the map, for example the change of moving speed and direction at a crossing. The major simulation parameters are shown in Table 2, including the clustering parameters described in Section 3.
Algorithm 4: Q-learning based power control algorithm. (1) initialize the Q-table to zeros (2) for time t do (3) if rand(·) < ε then (4) select an action randomly (5) else (6) choose action a_{t+1} = arg max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) (7) end if (8) calculate the reward value as in (9) (9) update the Q-table as in (10) (10) t = t + 1 (11) end for (12) choose a = arg max_a Q(s, a).
Figure 11 shows the PRR performance of the proposed mechanism without power control, the proposed mechanism with power control, and the sensing mechanism. Compared with the PRR performance in the freeway scenario, the PRR in the urban scenario decreases more sharply when the distance between vehicles increases. Similar to the results in the freeway scenario, the PRR of the proposed mechanism is higher than that of the sensing mechanism, and among the three mechanisms the proposed mechanism with power control has the best performance. Figures 12 and 13 show the relationship between the number of vehicles sharing the same resource and their distance at different times for the sensing mechanism and the proposed mechanism, respectively, in the urban scenario. It is clear that vehicles with a larger distance between them have a greater possibility of sharing the same resource. However, in the urban scenario there are more characteristics than distance to consider in vehicle clustering; therefore, the proposed mechanism is more likely to group vehicles with similar moving characteristics, not just vehicles that are far away from each other, to share the same resource. Figure 14 demonstrates the average number of vehicles sharing the same resource in different distance ranges. It can be observed that fewer vehicles share the same resource in the proposed mechanism compared to the sensing scheme at most distance values, which indicates that the proposed scheme can utilize resources more efficiently and has better PRR performance.
Analysis of Computational Complexity.
In the sensing mechanism, vehicles need to collect Schedule Assignment (SA) messages, detect the received energy on each RBG, and exclude RBGs based on the SA messages. Each vehicle then ranks the RBGs according to its own average received energy and selects RBGs for itself. All of these steps are performed by the vehicles themselves, so the vehicles' computing capabilities strongly influence the delay. Because of the ranking procedure, the computational complexity is between O(n log n) and O(n^2), depending on which sorting algorithm is adopted, where n is the number of RBGs. The complexity of the vehicle grouping resource allocation algorithm for the freeway scenario proposed in this paper is O(N * m), where N is the number of vehicles and m is the number of vehicles in a group formed by dividing vehicles according to their velocity. Because in a typical freeway scenario [19] the number of vehicles N is slightly larger than the number of RBGs, while the group size m is slightly smaller than the number of RBGs, the computational complexity of the vehicle grouping algorithm lies between the lower and upper limits of the sensing mechanism. The computational complexity of the vehicle clustering resource allocation algorithm for the urban scenario, which uses SC, is O(N^3), which is larger than that of the sensing mechanism. However, the computation in the proposed resource allocation mechanisms for the freeway and urban scenarios is performed by the BS, which collects the vehicles' geographic and related information, and the computing capacity of the BS far exceeds that of the vehicle equipment; the resulting delay is therefore lower than or at least comparable to that of the sensing mechanism, while the PRR is higher.
After receiving the resource allocation results from the BS, vehicles adjust their transmit power according to the Q-learning based power control strategy. Owing to the learning process in reinforcement learning, the computational complexity is relatively high and depends on the number of steps in each episode. The advantage of this method is that the transmit power can be adjusted as the environment changes. In the future, effective methods can be adopted to reduce the number of iterations and thus the computational complexity.
Conclusion
In this paper, NOMA is introduced into the V2V communication system to enhance the utilization of limited frequency resources, and a joint resource allocation and power control mechanism based on vehicles' moving characteristics is proposed. According to the different moving conditions in freeway and urban scenarios, two resource assignment algorithms are designed, which divide vehicles into several groups according to their moving features. After that, the power control strategy is obtained through Q-learning. System-level simulation results show that the PRR of the proposed mechanism is improved compared to that of the energy sensing mechanism.
Data Availability
The simulation code and data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 9,150.4 | 2020-11-27T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Emerging opportunities and challenges for the future of reservoir computing
Reservoir computing originates in the early 2000s, the core idea being to utilize dynamical systems as reservoirs (nonlinear generalizations of standard bases) to adaptively learn spatiotemporal features and hidden patterns in complex time series. Shown to have the potential of achieving higher-precision prediction in chaotic systems, those pioneering works led to a great amount of interest and follow-ups in the community of nonlinear dynamics and complex systems. To unlock the full capabilities of reservoir computing towards a fast, lightweight, and significantly more interpretable learning framework for temporal dynamical systems, substantially more research is needed. This Perspective intends to elucidate the parallel progress of mathematical theory, algorithm design and experimental realizations of reservoir computing, and identify emerging opportunities as well as existing challenges for large-scale industrial adoption of reservoir computing, together with a few ideas and viewpoints on how some of those challenges might be resolved with joint efforts by academic and industrial researchers across multiple disciplines.
At the core of today's technological challenges is the ability to process information at massively superior speed and accuracy.Despite largescale success of deep learning approaches in producing exciting new possibilities [1][2][3][4][5][6][7] , such methods generally rely on training big models of neural networks posing severe limitations on their deployment in the most common applications 8 .In fact, there is a growing demand for developing small, lightweight models that are capable of fast inference and also fast adaptation -inspired by the fact that biological systems such as human brains are able to accomplish highly accurate and reliable information processing across different scenarios while costing only a tiny fraction of the energy that would have been needed using big neural networks.
As an alternative direction to the current deep learning paradigm, research into so-called neuromorphic computing has been attracting significant interest 9. Neuromorphic computing generally focuses on developing novel types of computing systems that operate at a fraction of the energy of current transistor-based computers, often deviating from the von Neumann architecture and drawing inspiration from biological and physical principles 10. Within the broader field of neuromorphic computing, an important family of models known as reservoir computing (RC) has progressed significantly over the past two decades 11,12. RC conceptualizes how a brain-like system operates, with a core three-layer architecture (see Box 1 and Box 2): an input (sensing) layer which receives information and performs some pre-processing, a middle (processing) layer typically defined by some nonlinear recurrent network dynamics with input signals acting as stimulus, and an output (control) layer that recombines signals from the processing layer to produce the final output. Reminiscent of many biological neuronal systems, the front end of an RC network, including its input and processing layers, is fixed and nonadaptive and transforms input signals before they reach the output layer; in the last, output part of an RC the signals are combined in some optimized way to achieve the desired task. An important aspect of the output layer is its simplicity, where typically a weighted sum is sufficient, reminding a great deal of how common mechanical and electrical systems operate: a complicated core that operates internally and a control layer that enables simple adaptation according to the specific application scenario.

BOX 1: Deep learning versus reservoir computing
[Box 1 figure (axis labels: number of parameters (billion); amount of petaflops required for training; memory taken up by parameters (GB)*). (b) RC versus DL in terms of the number of parameters and computational cost (measured in petaflops) required for training [160]. *Estimation of the memory storage of the parameters.]
Deep learning (DL) and reservoir computing (RC) are both machine learning techniques and share some common characteristics. For example, both are data-driven frameworks for learning, taking inputs and transforming them (nonlinearly) to match desired outputs. By learning features from the input data, they are shown to be universal function approximators, able to fulfill sophisticated tasks. However, deep learning and reservoir computing differ in several respects:
1. Architecture design: DL and RC can be distinguished directly from their structures. As shown in Fig. (a), in DL all the parameters are fully trainable, namely all connections are continuously updated during the training phase, while in RC only the readout weights are trained; the other connections among neurons are fixed once generated and are not updated any further. This structural difference means that RC usually has a smaller parameter size than DL.
2. Training procedure: The different architectures mean that DL and RC are trained differently. For DL, many training algorithms and tools have been developed, such as backpropagation (BP), stochastic gradient descent (SGD), and Newton's method (NM). In RC, simple regression (e.g., linear regression, Lasso regression, and ridge regression) is usually adopted in training. The small parameter size and simple training procedure of RC together lead to much lower training time and resource consumption. As the capacity of deep learning increases, the parameter size also grows, which is a challenge for practical applications. For example, the memory of a smart watch is typically around 2 GB, so it can be equipped with GPT (~0.5 GB) or BERT-Large (~1.3 GB); large networks such as GPT-3 (~652 GB) and GPT-4 (~6557 GB) can only be accommodated by workstations or high-performance clusters (HPC). Conversely, since RC has far fewer parameters, it can be applied flexibly on diverse devices. It has been shown that RC can realize image recognition with around 10^-5 to 5 petaflops [161], indicating wide scope for further exploration. In addition, although its parameter size is smaller, RC has been used to improve accuracy in climate modeling [117] and to perform weather forecasting [118], tasks previously realized with deep learning.
As one of the most popular machine learning algorithms, DL has been studied widely. Nevertheless, RC seems to remain at a primary stage, at both the theoretical and the algorithmic level.
Can such an architecture work? This inquiry was attempted in the early 2000s by Jaeger (echo state networks, ESNs 11) and Maass (liquid state machines, LSMs 12), achieving a surprisingly high level of prediction accuracy in systems that exhibit strong nonlinearity and chaotic behavior. These two initially distinct lines of work were later reconciled into a unified reservoir computing framework by Schrauwen and Verstraeten 13, explicitly defining a new area of research that touches upon nonlinear dynamics, complex networks and machine learning. Research in RC over the past twenty years has produced significant results in mathematical theory, computational methods, and experimental prototypes and realizations, as summarized in Fig. 1. Despite successes in those respective directions, large-scale industry-wide adoption of RC, or broadly convincing "killer applications" beyond synthetic and lab experiments, is still not available. This is not due to a lack of potential applications. In fact, thanks to its compact design and fast training, RC has long been sought as an ideal solution in many industry-level signal processing and learning tasks, including nonlinear distortion compensation in optical communications, real-time speech recognition, and active noise control, among others. For practical applications, an integrated RC approach is much needed and can hardly be derived from existing work that focuses on either the algorithm or the experiment alone. This Perspective offers a unified overview of the current status of theoretical, algorithmic and experimental RC, to identify critical gaps that prevent industry adoption of RC and to discuss remedies.
Theory and algorithm design of RC systems
The core idea of RC is to design and use a dynamical system as a reservoir that adaptively generates signal bases according to the input data and combines them in some optimal way to mimic the dynamic behavior of a desired process. From this angle, we review and discuss important results on representing, designing and analyzing RC systems.
Mathematical representation of an RC system
The mathematical abstraction of an RC can generally be described in the language of dynamical systems, as follows. Consider a coupled system of equations

Δx = F(x, u; p),   y = G(x, u; q).   (1)

Here the operator Δ acting on x becomes dx/dt for a continuous-time system, x(t + 1) − x(t) for a discrete-time system, and a compound of these two operations for a hybrid system. Additionally, u ∈ R^d, x ∈ R^n, and y ∈ R^m are generally referred to as the input, internal state and output of the system, respectively, with vector field F, output function G and parameters p (fixed) and q (learnable) representing their functional couplings. Once set up by fixing the vector field F, the output function G and the parameters p, one can utilize the RC system to perform learning tasks, typically on time-series data. Given a time series {z(t) ∈ R^m}_{t∈N}, an optimization problem is usually formulated to determine the best q, for instance

min_q Σ_t ∥y(t) − z(t)∥^2 + R(q),   (2)

where R(q) is a regularization term. Also, when z(t) is seen as a driving signal, the optimization problem can be regarded as a driving-response synchronization problem of finding appropriate parameters q 14. Since RC is often simulated on digital computers, a commonly used discrete-time network form is

x(t + 1) = (1 − γ) x(t) + γ f(W x(t) + W^(in) u(t) + b),   y(t) = W^(out) x(t),   (3)

which is a special form of (1), but now with time steps and network parameters more explicitly expressed. In this form, f is usually a component-wise nonlinear activation function (e.g., tanh), the input-to-internal and internal-to-output mappings are encoded by the matrices W^(in) and W^(out), whereas the internal network is represented by the matrix W. The additional parameters b and γ are used to ensure that the dynamics of x is bounded, non-diminishing and (ideally) exhibits rich patterns that enable later extraction. Given some training time series data {z(t)} (assumed to be scalar for notational convenience), once the RC system is set up by fixing the choice of f, γ, b, W^(in) and W, the output weight matrix W^(out) can be obtained by minimizing a loss function. A commonly used loss function is

ℓ(w) = ∥Xw − z∥^2 + β^2 ∥w∥^2,  with w = W^(out)⊤,   (4)

where X = (x(1)⊤, x(2)⊤, …, x(T)⊤)⊤, z = (z(1), z(2), …, z(T))⊤ and β ∈ [0, 1] is a prescribed parameter. This problem is a special form of Tikhonov regularization and yields the explicit solution W^(out)⊤ = (X⊤X + β^2 I)^(−1) X⊤ z.
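To make Eqs. (3) and (4) concrete, the following is a minimal Python/NumPy sketch of an ESN-style reservoir with a ridge-regression readout. It is a simplified illustration rather than any particular implementation from the literature; the reservoir size, spectral radius, leak rate gamma, regularization beta and the toy sine-wave task are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed hyperparameters (illustrative only)
n, d = 300, 1            # reservoir size, input dimension
gamma, beta = 0.5, 1e-2  # leak rate and Tikhonov parameter (Eqs. (3)-(4))
rho_target = 0.9         # desired spectral radius of W

# Random, fixed input and internal couplings (Eq. (3))
W_in = rng.uniform(-0.5, 0.5, size=(n, d))
W = rng.uniform(-1, 1, size=(n, n))
W *= rho_target / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius
b = rng.uniform(-0.1, 0.1, size=n)

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence u_seq (T x d); return states (T x n)."""
    x = np.zeros(n)
    states = []
    for u in u_seq:
        x = (1 - gamma) * x + gamma * np.tanh(W @ x + W_in @ u + b)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step prediction of a sine wave, z(t) = u(t + 1)
t = np.arange(2000)
series = np.sin(0.05 * t)
u_seq = series[:-1].reshape(-1, 1)
z = series[1:]

X = run_reservoir(u_seq)        # internal states
washout = 100                   # discard the initial transient
X, z = X[washout:], z[washout:]

# Ridge (Tikhonov) readout, Eq. (4): W_out = (X^T X + beta^2 I)^(-1) X^T z
W_out = np.linalg.solve(X.T @ X + beta**2 * np.eye(n), X.T @ z)

print("training MSE:", np.mean((X @ W_out - z) ** 2))
```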
Common RC designs
Design is a crucial step for obtaining a powerful RC network, yet there are still no complete instructions on how to design optimal RC networks for various needs. With the unified forms of Eqs. (1) and (2) in mind, a standard RC system as initially proposed keeps everything random and fixed, including the input and internal matrices W^(in) and W, and leaves the choice of the parameters γ and β to heuristic rules. Based on this default setting, different RC designs can generally be interpreted as optimizing one or more parts of the system along the following directions. The first is RC coupling parameter search, whose goal is to select a good and potentially optimal coupling parameter γ that keeps the RC dynamics bounded and produces rich patterns, so that the internal states form a signal basis that can later be combined to approximate the desired series {z(t)}. Empirical studies have shown that choosing γ so that the system operates around the edge of chaos 15 typically produces the best outcome, which is supported by a necessary but not sufficient condition imposed on the largest singular value of the effective stability matrix W_γ = (1 − γ)I + γW. The second is RC output training, whose design commonly involves two aspects. One is to determine the right optimization objective, for instance the one in Eq. (4); common generalizations change the norms used in the objective, in particular the term ∥w∥, to enforce sparsity, or impose additional prior information by replacing β∥w∥ with ∥Lw∥, where the matrix L encodes the prior information. The other, once the objective is chosen, is to determine its parameters, e.g., β in Eq. (4).
Although there is no general, theoretically guaranteed optimal choice, several common methods can be used, e.g., cross-validation techniques that have been well developed in the literature on computational inverse problems. The third direction is RC network design, which is crucial in determining the dynamic characteristics and aims at finding a good internal coupling network W. This has received much attention and attracted many novel proposals, including structured graphs with random as well as non-random weights 16,17, and networks that are layered and deep or hierarchically coupled 18-20. Furthermore, those designs are sometimes themselves coupled with the way the input and output parts of the system are used, for example in solving partial differential equations (PDEs) 21,22 or representing the dynamics of multivariate time series 23. Finally, as for RC input design, although it received relatively little attention until recently, it turns out that the input part of an RC can play a very important role in the system's performance. Input design is generally interpreted to include not only the design of the input coupling matrix W^(in) but also potentially some (non)linear transformation of the input u(t) and/or target variable z(t) prior to setting up the rest of the RC system. The so-called next-generation RC (NG-RC) is one such example 24, showing the great potential of input design in improving the data efficiency (less data required to train) of an RC.
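As a concrete illustration of the parameter-selection step mentioned above, the snippet below sketches a simple grid search with a temporal hold-out split over the leak rate gamma and the Tikhonov parameter beta. It continues the earlier ESN sketch (W, W_in, b, series and washout are reused from that block); the candidate grid values and the train/validation split are arbitrary assumptions, not recommendations from the literature.

```python
import numpy as np

# Continues the earlier ESN sketch: W, W_in, b, series, washout are reused.
def validation_error(gamma, beta, train_len=1500):
    """Fit the ridge readout on an initial segment and score it on the held-out tail."""
    u_seq = series[:-1].reshape(-1, 1)
    z = series[1:]
    x, states = np.zeros(W.shape[0]), []
    for u in u_seq:
        x = (1 - gamma) * x + gamma * np.tanh(W @ x + W_in @ u + b)
        states.append(x.copy())
    X = np.array(states)
    Xtr, ztr = X[washout:train_len], z[washout:train_len]   # training segment
    Xva, zva = X[train_len:], z[train_len:]                 # validation segment
    w = np.linalg.solve(Xtr.T @ Xtr + beta**2 * np.eye(Xtr.shape[1]), Xtr.T @ ztr)
    return np.mean((Xva @ w - zva) ** 2)

# Simple grid search over assumed candidate values of (gamma, beta)
grid = [(g, bta) for g in (0.2, 0.5, 0.8) for bta in (1e-4, 1e-2, 1.0)]
best = min(grid, key=lambda p: validation_error(*p))
print("selected (gamma, beta):", best)
```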
In addition to separate designs of the individual parts of an RC, the concept of neural architecture search (NAS) has motivated research on hyperparameter optimization 25 and automated RC design, which aims to (optimally) design an RC system for not just one problem but an entire class of problems and asks what the best RC architecture might be, including its input and internal coupling dynamics and training objective 25,26, for instance using Bayesian optimization 27. Furthermore, nonlinear functions beyond the component-wise f = tanh are often encountered in experimental settings, and an active line of research explores new types of nonlinear dynamics such as electro-optic phase-delay dynamics 28-30, optical scattering 31,32, dynamic memristors 33-36, enlarged memory capacity in chaotic dynamics 37, solitons 38 and quantum states 39,40.
Mathematical theory behind RC
The fundamental questions of exactly why, when and how RC learns a general dynamical process are important mathematical questions whose answers are expected to provide guidelines for the practical design and implementation of RC systems.These lines of queries have led to a number of important analytical results which we classify into four categories.
The first category of work focuses on the echo state property (ESP). The ESP, equivalent to state contracting, state forgetting, and input forgetting, refers to RC networks whose asymptotic states x(t → ∞) depend only on the input sequence and not on the initial network states. This property leads to a continuity property of the system known as the fading memory property, in which the current state of the system depends mostly on near-term history and not on the long past 11. Ref. 11 considers RC networks with sigmoid nonlinearity and unit output function and showed that if the largest singular value of the weight matrix W is less than one then the system has the ESP, and if the spectral radius of W is larger than one then the system is asymptotically unstable and thus cannot have the ESP. Tighter bounds were subsequently derived in ref. 41. In particular, the spectral radius condition provides a practical way of ruling out bad RCs and can be seen as a necessary condition for an RC to function properly.
The second category concerns memory capacity. Defined as the sum of the delayed linear correlations between the input sequence and the output states, the memory capacity was shown not to exceed N (the number of reservoir nodes) under an i.i.d. input stream 42; it can be approached with arbitrary precision using simple linear cyclic reservoirs 16, and can be improved using time delays in the reservoir neurons 43.
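For reference, the standard definition of (linear) memory capacity used in the literature 42 can be written as the following sum of delayed squared correlation coefficients; the notation here (a scalar input u and a readout y_k trained to recover u(t − k)) is a common convention rather than notation taken from this Perspective.

```latex
\mathrm{MC} \;=\; \sum_{k=1}^{\infty} \mathrm{MC}_k,
\qquad
\mathrm{MC}_k \;=\;
\frac{\operatorname{cov}^2\!\bigl(u(t-k),\, y_k(t)\bigr)}
     {\operatorname{var}\bigl(u(t)\bigr)\,\operatorname{var}\bigl(y_k(t)\bigr)},
```

with the classical bound MC ≤ N for an N-node reservoir driven by an i.i.d. input stream.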
Universal approximation theorems can be regarded as a single category.Prior to the research of RC, universal representation theorems by Boyd and Chua showed that any time-invariant continuous nonlinear operator can be approximated either by a Volterra series or alternatively by a linear dynamical system with nonlinear readout 44 .RC's representation power has attracted significant recent interest: ESNs are shown to be universally approximating for discrete-time fading memory processes that are uniformly bounded 45 and further that the approximating family can be associated with networks with ESP and fading memory 46 .For discrete-time stochastic inputs, linear reservoir systems with either polynomial or neural network readout maps are universal and so are ESNs with linear outputs under further exponential moment constraints imposed on the input process 47 .For structurally stable systems, they can be approximated (upon topological conjugacy) by a sufficiently large ESN 48 .In particular, ESNs whose output states are trained with Tikhonov regularization are shown to approximate ergodic dynamical systems 49 .Also rigorously, the dynamics of RC is validated as a higher-dimensional embedding of the input nonlinear dynamics 43 .In addition, explicit error bounds are derived for ESNs and general RCs with ESP and fading memory properties under input sequences with given dependency structures 50 .Finally, according to conventional and generalized embedding theories, the RCs with time delays are established with significantlyreduced network sizes, and sometimes can achieve dynamics reconstruction even in the reservoir with a single neuron 43 .
The last category includes research about linear versus nonlinear transformations and next-generation RC.Focusing on linear reservoirs (possibly upon pre-transformations of the input states), recent work showed that the output states of an RC can be expressed in terms of a controllability matrix together with the network encoded inputs 17 .Moreover, a simplified class of RCs are shown to be equivalent to general vector autoregressive (VAR) processes 51 -with possible nonlinear basis expansions it forms theoretical foundations for the recently coined concept of next-generation RC 24 .
Research of how to design RC architectures, how to train them and why they work have, over the past two decades following the pioneering works of Jaeger and Maass, led to much evolved view of the capabilities as well as limitations of the RC framework for learning.On the one hand, simulation and numerical research has produced many new network architectures improving the performance of RC beyond purely random connections; future works can either adopt a one-fits-all approach to investigate very large random RCs or perhaps more likely to follow the concept of domain-specific architecture (DSA) 52 to explore structured classes of RCs that achieve optimal performance for particular types of applications, with Bayesian optimization 26,27 and NAS as powerful tools of investigation 53 .On the other hand, for a long time only few theoretical guidelines based on ESP were available for practical design of RCs; more recently several important theoretical discoveries were made establishing universal approximation theorems of RC -those results, although not yet directly useful for constructing optimal RCs, may nevertheless boost confidence and stimulate explicitly ideas of designing and even optimizing RCs for learning.In particular, despite having randomly assigned weights that are not trained, RC models are nevertheless shown to possess strong representation power with rigorous theoretical guarantees.
Physical design of RC systems: from integrated circuits to silicon photonics
To achieve a controllable nonlinear high-dimensional system with short-term memory, specific physical systems with nonlinear dynamic characteristics can be used to implement reservoirs (see Box 3), where the network connections are determined by the physical interactions. With the development of integration technology for electrical and optical components, the computational efficiency can be greatly improved compared to traditional Boolean logic methods. The implementation of a physical reservoir is similar to the software approach, but differs in some respects.
In recent years, there has been extensive research on designing and realizing RC using physical systems; a detailed review can be found in reference 54.

BOX 3: Schematic diagram of physical reservoir computing.

Physical reservoirs can be roughly divided into three types based on their topological structure: discrete physical node reservoirs, single-node reservoirs with delayed feedback, and continuous-medium reservoirs. A discrete physical node reservoir is composed of interacting nonlinear components, such as memristors 35, spintronic devices 55, oscillators 56, and optical nodes 32. The nodes form a coupled network through real physical connections and can simply be enlarged by increasing the number of network elements to obtain higher dimensions. A single-node reservoir is composed of a single nonlinear node and a time delay loop, which transforms the input signal into a virtual high-dimensional space through time division multiplexing of the single nonlinear physical node, such as an analog circuit 57 or a laser 58. This type of reservoir avoids the problem of large-scale interconnection, making it more hardware friendly; however, designing and implementing appropriate delayed feedback loops is not a simple task. A continuous-medium reservoir mainly exploits the physical phenomena of waves in a continuous medium, such as fluids and elastic media. This type of physical system can utilize the physical properties of waves, such as interference, resonance, and synchronization, to achieve extremely efficient physical RC 59. In terms of specific physical schemes, there are also physical reservoirs implemented with mechanical 60, biological 61, quantum 39 and superconducting 62 systems. In this article, we mainly focus on comparing the various physical implementations in terms of integration, power consumption, processing speed, and programmability, as shown in Table 1. Typical high-performance physical reservoirs include traditional electronic schemes represented by Boolean logic circuits such as FPGAs 63 and ASICs 64; non-von Neumann electrical reservoir schemes represented by memristor 33 and spintronic 65 devices; and photonic schemes represented by silicon photonics 66, fiber optics 29,67,68 and free-space optics 69.
In principle, existing morphological circuits, such as FPGAs and ASICs, can be implemented as an electronic reservoir.With its bit-level fine-grained customized structure, parallel computing ability, and efficient energy consumption, FPGAs exhibit unique advantages in deep learning applications.Using FPGAs for reservoir computing is also advantageous, as sparse connections in the reservoir model allow for simple routing techniques that match FPGA requirements.Currently, several FPGA methods have been proposed [70][71][72] .In addition, considering the high programming requirements of FPGAs, people have proposed the implementation of RC algorithm using Application Specific Integrated Circuits (ASICs) 73 , which can help improve chip performance and power consumption ratio.The disadvantage of ASICbased RC is that circuit design customization leads to relatively long development cycles, inability to scale, and high costs.But research in this area is also actively advancing 74,75 .
Besides the electric reservoir that is based on Boolean logic and von-Neumann architecture, people have been pursuing higher efficiency and lower energy consumption methods.For the reservoir model, the nonlinear analog electronic circuit can be used to directly build the reservoir model, such as the Mackey -Glass circuit 76 .Based on nonlinear electronic circuits, a single electric node, such as a memristor, or a spintronic device, with delay lines that can be constructed and combined with other digital hardware components for preprocessing and post-processing 77 .The memristor has the dimension of resistance, but its resistance value is determined by the charge flowing through it.It functions as a memory, and can generate rich reservoir states under an appropriate time division multiplexing mechanism 35 .In addition, it is also possible to construct 2D/3D memristor crossbar arrays and encode matrix elements into the embedded memristor conductance 78 .This programming can be accomplished using voltage pulses with minimal energy required.On the other hand, micro/nano spin electronic devices constructed using electron spin degrees of freedom can exhibit the physical properties of tiny magnets and can be used to simulate synaptic behavior in biological nerves 65 .At present, people have proposed several reservoir schemes based on the physical phenomena related to spintronics 79 .On the other hand, development of photonic technology has brought hope for ultra-high speed and low energy consumption hardware systems, especially for neural network training 80 .Optical systems have significant advantages over traditional microelectronic technologies in terms of high bandwidth, low latency, and low energy consumption.Reservoir networks based on optical systems have also made significant progress 81 , such as multi-scattering nodes in free space 32 , single nonlinear nodes with fiber loop 58 , and integrated onchip reservoirs 66 .The free-space reservoir is generally achieved using spatial optics and scattering media, such as diffractive optical elements (DOE), to achieve coupling between spatial optical nodes.Interconnection between neurons in the reservoir are realized through complex scattering processes 32 .Single nonlinear optical nodes, such as semiconductor optical amplifiers (SOAs), saturable absorbers (SESAM), as well as semiconductor lasers can form optical reservoirs with special fiber loop designs 81 .Integrated on-chip optical reservoir are often archived by interaction between nonlinear micro/nano optical devices, such as micro-rings 81 .Unlike the fiber delay loop architecture, utilizing multiple on chip nonlinear optical nodes makes it more convenient to take advantage of optical parallel computing.
Comparatively speaking, reservoir schemes based on FPGAs and ASICs can greatly improve the computing speed and power consumption compared with the general CPU electronic architecture, due to its non-Von Neumann/in-memory nature of the computing.Besides, there is no need for photoelectric conversion at either the input or output ends, making it convenient in data scaling and processing.However, the computing efficiency is close to the theoretical limit.For electrical non-Von Neumann architectures, such as memristors, more efficient computation can be realized theoretically, but due to their analog nature, it is usually difficult to realize ideal nonlinear mappings and high-precision matrix calculations, and the integration and stability of such devices also need to be improved.As for spintronics reservoirs, so far, most studies have only explored nanomagnetic RC in simulations, and it also faces the scalability problem similar to memristor.For optical reservoir schemes, the low delay and low energy consumption characteristics of optical devices are generally only reflected in the reservoir layer.Currently, most schemes require photoelectric conversion in data preprocessing and post-processing, and the response time of the system is essentially limited by photodetectors and the time delay of electronic control circuits.At the same time, optical processing errors and the power consumption of external auxiliary devices also pose strict limitations on the scale of the system.So it seems there is currently no solution that can be said to be the best.In the short term, electronic solutions such as FPGAs do not require photoelectric conversion in the input and output processes, and are measurement friendly, thus having advantages in hardware implementation.However, considering issues such as power consumption and latency, specialized photonic reservoirs will have more advantages in the future.Perhaps utilizing their respective advantages comprehensively and adopting a heterogeneous integration solution is a feasible path.
Application benchmarks of RC
Applications of RC are quite diverse and can be mainly divided into several categories: signal classification (e.g., spoken digit recognition), time series prediction (e.g., chaos prediction such as in the Mackey-Glass dynamics), control of system dynamics (e.g., learning to control robots in real-time) and PDE computations (e.g., fast simulation of Kuramoto-Sivashinsky equations), which we discuss below respectively.
In signal classification tasks, the input of RC are usually broadlyinterpreted (physical) signals such as audio, image or temporal waves.The target output are the corresponding labels which can be spoken digits 16,28,29,34,35,65,68,[82][83][84] , image labels 33,35,[85][86][87] , bit-symbols 16,29,68,[88][89][90][91][92][93] and so on.The effectiveness of traditional neural networks in classification tasks has been verified in lots of work.However, dealing with temporal input signal is still a challenge.Compared with traditional neural networks, RC can map temporal signals with multiple timescales to high dimension, encoding these signals with its various internal states.Furthermore, RC network has much less parameters thus requiring less training resources.Therefore, RC can be a good candidate to be utilized in temporal signal classification tasks.The signals are in various types (audio, image or temporal waves), and usually require some preprocessing before injecting to RC network.For example, in the spoken-digit recognition task, the raw signal is first transformed to frequency domain in terms of multiple frequency channels via Lyon's passive ear model, as shown in Fig. 2a.Then the 2-D signals can be directly mapped to the RC network as input u(t) via input mask, or can be transformed to 1-D input sequence u(t) by connecting each row successively.The targets are a vector of size ten corresponding to digit number from 0 to 9. The state-of-the-art of RC currently can reach a word error rate (WER) of 0.4% from memristor chip RC 35 , and 0.2% from electronic RC 94 .
For time series prediction, RC assumes the role of regression, taking input as a segment of time series up to a certain time and draws predictions for the next (few) time steps.Examples are abundant, including prediction of chaotic dynamics such as Mackey-Glass equations 11,31,34,51,95 , Lorenz system 22,26,49,51,[95][96][97] , Santa Fe Chaotic time series 16,86,89,95 , Ikeda system 95 , auto-regressive moving average (NARMA) sequence 16,28,29,93,94,98 , Hénon map 16,35,95,98 , radar signal 68 , language sentence 36 , stocks data 61 , sea surface temperatures (SST) 99 , traffic breakdown [100][101][102] , tool wear detection 97 and wind power 103 .Given a training time series fzðtÞg t2Z and prescribed prediction horizon τ, the input sequence of RC can be defined as u(t) = z(t) while the target output as y(t) = z(t + τ).(For one-step prediction we use τ = 1.)Once the parameters of RC is learned, it can be used as a predictive model, taking a temporal input and predicts its next steps.In particular, RC trained with one-step prediction can nevertheless be used to make multi-step predictions, in the following way.Suppose that a finite-length time series {u(t)} t=1,…,T is provided, we feed it into RC to compute a state y(t + 1) as a one-step prediction.We then append this state to the end of the input effectively defining u(t + 1) = y(t + 1) and through RC to compute a next state y(t + 2), and so on and so forth to obtain a series of next steps y(t + 1, t + 2, …, t + h).A schematic example of nonlinear time series prediction task is shown in Fig. 2b.In order to realize long-term prediction, there is another training scheme in which the target sequence is inserted periodically.In particular, the input to the RC now comes from its feedback or target sequence alternately, as shown in Fig. 2b (method 2).Compared with the previous case which can be regarded as an offline training scheme, here RC can acquire target data periodically, then retraining and updating the output weights regularly.This is an online training scheme.Since RC has access to target data during its evolution, it can adjust the output weights to prevent the predictive output data from diverging.Therefore, the online training typically yields longer prediction period and better prediction performances.
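The closed-loop (autonomous) multi-step prediction procedure described above, in which each one-step prediction is fed back as the next input, can be sketched as follows. This continues the earlier ESN example (W, W_in, b, gamma, W_out and series are reused from that block); the warm-up length and forecast horizon are arbitrary assumed values.

```python
import numpy as np

# Continues the earlier ESN sketch (W, W_in, b, gamma, W_out, series are reused).
def autonomous_forecast(warmup_inputs, horizon):
    """Warm up the reservoir with observed inputs, then run closed-loop:
    each one-step prediction is fed back as the next input (the offline scheme above)."""
    x = np.zeros(W.shape[0])
    for u in warmup_inputs[:-1]:                  # teacher-forced warm-up
        x = (1 - gamma) * x + gamma * np.tanh(W @ x + W_in @ u + b)
    u = warmup_inputs[-1]
    preds = []
    for _ in range(horizon):
        x = (1 - gamma) * x + gamma * np.tanh(W @ x + W_in @ u + b)
        y = x @ W_out                             # one-step prediction
        preds.append(y)
        u = np.array([y])                         # feed the prediction back as input
    return np.array(preds)

h = 200                                           # assumed forecast horizon
warmup = series[:1000].reshape(-1, 1)
forecast = autonomous_forecast(warmup, h)
truth = series[1000:1000 + h]
print("closed-loop MSE over the horizon:", np.mean((forecast - truth) ** 2))
```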
RC can play important roles in the control of nonlinear dynamical systems [104][105][106][107][108][109] .In particular, in the model predictive control (MPC) framework, control actions are derived based on a predictive model of the system dynamics.The predictive model is typically linear due to simplicity and low computational cost.RC as an alternative can potentially improves upon the linear prediction without introducing too much additional computational overhead, as shown in Fig. 2c.As a concrete example, in the controlling robot arm movement task 104 , the mechanical arm gives input data such as joint arm angles, destination position coordinates and joint arm torques calculated from Lagrangian equation.RC is trained with the targets which are successive joint torques needed to gradually move to the destination.In the testing phase, as long as the destination is given, the robot arm can evolve by itself to approximate to the target point.Additionally, RC jointly with adaptive feedback control technique can be used to track the unknown and unstable periodic orbits and stabilize them even when the chaotic time series are only available 14 .
RC can be applied for scientific computation such as in the numerical solution of PDEs 21,22,32 .For these tasks, RC is typically used to evolve the states of the system toward the temporal direction with a flow diagram shown in Fig. 2d.To decrease the difficulty for a single reservoir to process all inputs and improve the training efficiency, parallel reservoir architecture was proposed 21,22 which allows multiple small reservoirs to deal with different parts of input data.The input vector is split into multiple small groups with each group includes some extra adjacent points serving as extra information provided to the corresponding small reservoirs.The target is the next time step vector of the PDE.Accompanied with a nonlinear readout function, the RC network can learn and evolve Kuramoto-Sivashinsky (KS) equation relatively accurate up to a time length of around 5 Lyapunov times 21 .
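The input splitting used by the parallel-reservoir architecture described above can be sketched as follows. This is a minimal illustration of the partitioning step only: the grid size, number of groups, and halo width are assumed values, periodic boundary conditions are assumed (as for the KS equation), and the per-group "prediction" is a placeholder standing in for a small trained reservoir such as the ESN sketched earlier.

```python
import numpy as np

# Assumed sizes: a 1-D field of Q grid points, split into G groups with `halo`
# extra adjacent points on each side serving as local context.
Q, G, halo = 64, 8, 2
group_size = Q // G

def split_with_overlap(field):
    """Return G local input vectors, each of length group_size + 2*halo,
    assuming periodic boundary conditions."""
    groups = []
    for g in range(G):
        idx = np.arange(g * group_size - halo, (g + 1) * group_size + halo) % Q
        groups.append(field[idx])
    return groups

def assemble(predictions):
    """Concatenate the G local predictions (each of length group_size) into the full field."""
    return np.concatenate(predictions)

field = np.sin(np.linspace(0, 2 * np.pi, Q, endpoint=False))
local_inputs = split_with_overlap(field)          # each group drives its own small reservoir
# In the actual scheme, each local reservoir is trained to map its overlapping local
# input at time t to its own group_size points at time t + Δt; here we use a placeholder.
local_predictions = [u[halo:halo + group_size] for u in local_inputs]
next_field = assemble(local_predictions)
assert next_field.shape == field.shape
```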
Overall, RC has demonstrated strong performance across a range of benchmarks and tasks, with ongoing efforts to further improve results. A summary of trends in RC performance in typical application scenarios is shown in Fig. 3. For example, in spoken digit recognition, the WER is reaching near-perfect levels (0.014%) 58. Similarly, handwritten digit recognition boasts an accuracy of around 97.6% 35. While RC currently has limitations in action recognition and requires preprocessing, there is potential for future development in expanding recognition abilities and reducing preprocessing needs. In time series prediction, RC excels on chaotic sequences such as Mackey-Glass, Lorenz, and Santa Fe, but real-world data such as weather 110,111, stocks, and wind power show less impressive performance. RC is primarily used in dynamic control for MPC systems, but as system complexity increases, real-time control with greater accuracy and efficiency is necessary. Lastly, RC has been shown to compute PDEs effectively, but practical applications of this ability have yet to be fully realized. Despite attempts and preliminary successes in applying RC to problems with real-world datasets, nearly none of those attempts have led to industry-level adoption and application. An important reason is that the performance of RC on common tasks such as image classification and audio signal processing has not reached, or been shown to have the potential to approach, the SOTA metrics offered by deep-learning based methods. Given that RC theoretically has universal approximation capacities just as general neural networks do, in principle nothing seems to be holding RC models back from pushing the frontiers of the most challenging AI tasks, and this should be a main goal of the entire RC community.

Fig. 2 | Example applications of RC. Flow diagrams showing how RC is applied in different types of applications, referred to here as signal classification, nonlinear time series prediction, dynamical control and PDE computing, respectively. a RC for spoken-digit recognition 16,28,29,34,35,65,68,82-84, where the targets are a vector of digit numbers corresponding to 0-9. b RC for time series prediction with the Mackey-Glass equations 11,31,34,51,95 as an example. In method 1, with off-line training, the training sequence starts with the first point (black point), while the target sequence starts with the second one (orange point). In method 2, with on-line retraining, training and testing are alternately presented. c RC acts as the prediction optimizer in the general model predictive control (MPC) 104-109 framework. Top: the MPC diagram. Bottom: how RC works in the MPC system. d RC for PDE computation 21,22,32 with the Kuramoto-Sivashinsky (KS) equations as an example. The hidden layer consists of multiple parallel reservoirs, each of which deals with part of the input data, while a nonlinear transformation is typically inserted before training the parameters of the readout layer.
Opportunities and technical challenges for future development of RC
We expect that research in RC can play important roles in several important application domains, which we discuss as follows.As technology continues to rapidly advance, there is an increasing demand to develop intelligent information processing systems that are both dynamic and lightweight, yet widely deployable at low cost.According to estimates, by the years 2030-2035, both wireless and optical communication will usher in the sixth generation (6G/F6G), providing connections for tens of billions of devices and multi-billion users 112,113 .It is also expected that global data centers will have a throughput of trillions of GB and require over 200 terawatt hours of power consumption 114 .Furthermore, tens and hundreds of millions of robots are set to enter our daily lives to improve labor efficiency at a low cost 115 .Virtual reality and Metaverse rely heavily on real-time simulation of the physical world 116 .These major applications require a large number of capabilities, including accurate recognition of dynamic uncertainty information, fast prediction and computation, and dynamic control, all of which can be provided by RC systems, as shown in Fig. 4. As a result, we expect that RC research will play a critical role in several important application domains, as we will discuss below.
6G
Opportunities.It is predicted that by 2030, wireless communication will advance to its sixth generation, commonly referred to as 6G.The main goal for 6G is to enhance important indicators such as transmission speed, coverage density, time delay, and reliability by 10 to 100 times compared to 5G.This would provide a never-before-seen connection experience across a wider area for numerous devices 112,113 .Challenges.In order to realize the beyond-5G vision, several technical challenges need to be addressed.The most crucial one is achieving low-latency, high-reliability network connections for complex channel environments and providing deterministic communication guarantees.The key to this is active signal processing through predicting potential changes in the channel based on the perception of the environment.This requires overturning traditional passive waveform design and channel coding, and instead, relying heavily on active sensing, accurate prediction, and dynamic optimization of complex channels to systematically optimize channel capacity.To address these technical challenges while maintaining a lightweight deployment cost, RC can play a significant role.For example, essential modules such as waveform optimization and decoding can greatly benefit from accurate identification and dynamic estimation of channel state information integrated sensing and communication.These modules can also be further improved by transforming from responsive to predictive channel estimation.Finally, real-time channel optimization, such as using RIS techniques, would require fast and adaptive control of potentially high-dimensional dynamics.Due to its compact and lightweight network structure, rich functional interfaces, and low-complexity training and computing nature, RC is expected to become a key technology base for edge-side information processing.
Next-generation optical networks
Opportunities. Optical fiber communication is often regarded as one of the most significant scientific advancements of the 20th century, as noted by Charles Kuen Kao, the Nobel Prize winner in Physics 117. The optical network derived from optical fiber technology has become a fundamental infrastructure that supports the modern information society, processing more than 95% of network traffic. The next-generation optical fiber communication network aims to achieve a Fiber to Everywhere vision 118,119, featuring ultra-high bandwidth (up to 800G-1.6Tbps transmission capacity per fiber), all-optical connectivity (establishing an all-optical network with ultra-low power consumption and extending fibers to deeper indoor settings), and an ultimate experience (zero packet loss, no perceptible delay, and ultra-reliability). Challenges. To attain such a significant vision, major technological advancements must be made in areas such as all-optical signal processing, system optimization, and uncertainty control. These technical challenges can benefit from new theories, algorithms, and system architectures of RC. For instance, a silicon photonics integrated RC system, functioning as a photonic neural network, can in principle achieve end-to-end optical-domain signal processing with negligible power consumption and time delay, without relying on electro-optical/optical-electrical conversion. As a result, it has the potential to become a key technology in future all-optical networks. Additionally, adjusting the internal structure of the optical fiber can enhance capacity by searching complex and diverse structures, which can benefit from effective and automated modeling of the channel with RC. This approach transforms the original black-box optimization of the system into a white-box optimization of the RC's output layer, likely improving the optimization efficiency. In terms of low-latency and reliability assurance at the optical network level, RC research can play a critical role in link failure prediction and early warning, fault localization, and dynamical control. Due to the compact design of RC, embedded devices can perform intelligent processing tasks as a natural part of the network system, without requiring a centralized power center.

[Fig. 4 caption: Each domain corresponds to three specific example application scenarios. The six domains are 6G 136-140, Next Generation (NG) Optical Networks 92,93,141-143, Internet of Things (IoT) 61,144,145, Green Data Center 120,146,147, Intelligent Robots 148-150, AI for Science 151-157 and Digital Twins 99,103,158-161.]
Internet of Things (IoT)
Opportunities.In comparison to traditional communication and interconnection services for computers and mobile phones, the Internet of Things (IoT) caters to a wider range of devices with broader coverage, posing several new technological challenges.With IoT, the quantity and types of objects served are significantly higher, including smart temperature and light control 120,121 , open-space noise cancellation 122 , air quality monitoring 123 , among others, all of which are key features of smart homes.Communication technologies used to realize the interconnection of these devices are diverse, including Bluetooth, NFC, visible light, RFID, WiFi, ZigBee, and so on.Challenges.Unlike high-end devices such as computers and mobile phones, a vast majority of IoT connected devices cannot rely on energy-hungry integrated chip technology to achieve advanced computing performance due to power consumption and volume limitations.Consequently, IoT end-side systems must utilize lowpower, programmable techniques to achieve adaptive perception and computing necessary for edge intelligence.The lightweight and dynamically controllable nature of such requirements make RC systems particularly advantageous over large AI models.With the success of domain-specific chips for audio and video processing, there is expected to be significant demand for embedded smart chips in the IoT field, which will open up new opportunities for the application of RC research.
Green data centers
Opportunities.Data centers have become an essential infrastructure for the new generation of information society due to the substantial increase in demand for massive computing and data storage.It is estimated that by 2030, global data centers will process a trillion GB of data every day, and their power consumption is expected to account for over 60% of total power generation.However, the large amount of electricity consumption and heat emissions required to operate these centers have a significant impact on the environment.Therefore, the design and development of new generation green data centers with low energy consumption and high reliability are crucial for the sustainable development of society.Challenges.The realization of lowenergy data centers relies on numerous technological breakthroughs.Energy consumption in data transfer accounts for a significant proportion, with optical modules playing a central role.Therefore, achieving low energy consumption requires reducing the energy consumption of optical modules.One promising approach is to implement all-optical signal processing based on the integrated silicon photonics on-chip RC system.Additionally, data centers comprise many components that form an extremely complex dynamic system.Maintaining the system's normal operation at the least possible cost of energy consumption, such as keeping the overall temperature stable at a low-range, can be viewed as an optimal control problem.A potential solution to this problem is through data-driven models with physical priors, which combines a structured model derived from the connection relationship and functions of physical equipment and data-driven methods to build a dynamic control framework.By monitoring and adjusting the parameter configuration of each module of the system in real-time, this framework can achieve the optimal operating status and energy consumption cost.RC has the potential to play a crucial role in this approach.
Intelligent robots
Opportunities.Robots are becoming increasingly important in today's information society due to their ability to take many forms, including intelligent physical manifestations.One example of this is large-scale commercial sweeping robots used in smart homes 115 , which have replaced traditional manual operations in various scenarios, improving both production efficiency and living standards.With advances in technology, more types of intelligent robots are expected to emerge over the next decade, capable of completing complicated tasks through autonomous perception, calculation, optimization, and control in complex environments like failure detection, medical diagnosis, and search-and-rescue operations.Biological intelligence serves as inspiration for achieving robot intelligence, which relies on three key elements: real-time intensive information collection and perception capabilities (made possible by technologies such as flexible sensing, electronic skin, and multi-dimensional environment modeling), fast information processing capabilities (enabled by technologies like decision-making optimization and dynamic control), and physical control capabilities (facilitated by nonlinear modeling and electromechanical control).Challenges.Due to physical constraints such as battery capacity and deployment environment uncertainty, the core modules supporting robot intelligence are expected to be embedded in the physical entity of the robot in an offline manner rather than relying on cloud and network capabilities to provide potential large model capabilities.Similar to the IoT scenario, machine learning that is widely relied on in robot intelligence must have the characteristics of miniaturization, low energy consumption, and easy deployment, while requiring the ability to recognize, predict, calculate, and control dynamic processes.This presents an excellent application field for RC systems to play a role.In MPC, since the role of RC merely replaces a linear predictor the overall controller architecture remains transparent and intact.In principle, it is possible to adopt RC for general controller design beyond usage in the MPC framework, e.g., directly learning control rules from data together with (some) prior model knowledge.However, the main challenge would be to pose theoretical guarantees on error and convergence neither of which have been resolved by existing works of RC.
AI for science and digital twins
Opportunities. To fully realize the ongoing information revolution, it is essential to rethink and reshape crucial aspects of industrial manufacturing through the innovative framework of AI for science and digital twins. This involves achieving full perception and precise control of physical systems through interactions and iterative feedback between digital models and entities in the physical world. Essentially, digital twins establish a synchronous relationship between physical systems and their digital representations. Using this synchronization, simulations can be run in the digital world, and optimized designs can repeatedly and iteratively be imported into the physical system, ultimately leading to optimization and control. For systems with clear and complete physical mechanisms, the synchronization models that digital twins rely on are usually sets of ODEs/PDEs, for example in simulating full three-dimensional turbulence, weather forecasting, and laser dynamics. Preliminary studies suggest that reservoir computing can be used to reduce the computational resources required for these expensive simulations; Arcomano et al. 111, for instance, reported forecasting performance that captures long-term statistical behavior while requiring less training time. Challenges. Calculations of these physics-inferred equations can be challenging. In more complex industrial applications, multiple coupled modules are often present, and interactions between the system and the open environment cannot be fully described by physical mechanisms or mathematical functions. Therefore, it is necessary to consider fast calculation techniques, but also to find ways to build synchronization models for non-white-box complex dynamic systems. Mathematical modeling of the fusion between physical mechanisms and data-driven techniques has developed significantly in the past decade. For instance, physics-informed neural networks (PINNs) embed the structure and form of physical equations into neural network loss functions, which guides the neural network to approximate the given physics equations during parameter training 124. Another type of physics-inspired computing system, RC, inherently provides an embedding method for the mechanism model and is expected to provide a powerful supplement to solvers for the basic physical models of industrial simulation, by offering a dynamic modeling framework for the fusion of mechanisms and data. However, for reduced-order data, large-scale RC models may be unstable and more likely to exhibit bias than the BPTT algorithm. In another example, in research on nonlinear laser dynamics, the authors found that RC methods have simpler training mechanisms and can reduce training time compared to deep neural networks 125. For practical problems involving complex nonlinear physical processes, we have reason to believe that RC methods may provide solutions for computational acceleration.
Outlook
In summary, although RC has the functional potential for large-scale application, many key challenges remain in existing RC systems before the technical problems in the major applications above can truly be solved. For example, in theoretical research, although the universal approximation theory of RC has advanced significantly in recent years, most of the theoretical results focus on existence proofs and lack structural design. Hence, the current approximation theory has not yet played an important guiding role in RC network architecture design, training methods, and so on, nor can it quantitatively evaluate the approximation potential of a specific RC scheme for a dynamical system or time series. An important reason to further advance the mathematical theory of RC is data-driven control applications: in most of those applications, rigorous theory on control error and convergence is necessary for the corresponding controller to be considered usable in an industrial setting, yet so far very little work has addressed these important problems. As for algorithmic challenges, most industrial applications do not require a universal approximator, but within a given field the approximation model needs to be generalizable. Existing RC research contains very little exploration of domain-specific architecture optimization, whereas problems in industry are divided into scenarios and categories; it is therefore important to construct general-purpose RC models, possibly by means of architecture search. In addition, leaving aside the practicality of RC for the time being, past research has turned its advantages, such as small size and simple training, into constraints. How strong RC's learning ability really is, for example whether there is an RC architecture that can compare with GPT's capabilities, is still unknown.
At the experimental level, gaps remain when mapping RC models onto physical systems. The first is the timescale problem of physical-substrate RC: matching the timescale of the computational task to the internal dynamics of the physical substrate is a key issue in reservoir computing. If the timescale of the problem is much faster than the response time of the physical system, the response of the reservoir will be too small, or its fading memory will not be properly utilized, rendering the physical reservoir computing system ineffective. One intuitive solution is to adjust the physical parameters of the reservoir to match the timescale of the computational problem, which places high demands on the design of RC network structures and training algorithms. Using other technologies, such as super-resolution and compressive sensing, to overcome the resolution limits of single-point measurement and processing in RC systems may also be viable. The second is the real-time data processing problem: one of the significant advantages of reservoir computing is lightweight, fast computation. In practical physical systems, however, it is often unrealistic to sample and store a large number of node responses to a given input because of limitations in sampling bandwidth, storage depth and bandwidth, or combinations of these. In many cases it is simply not feasible to probe a system with a large number of probes (tens to thousands) interfaced with A/D converters. Beyond these practical challenges, hardware drift often requires calibration procedures to be repeated regularly, so calibration cannot be a one-off optimization. Furthermore, data preprocessing and postprocessing also limit the overall computational speed of a physical RC system. One way to address this is to use hardware-based rather than software-based readout [126][127][128][129] .
Moving forward, it is crucial that we thoroughly explore the potential of intelligent learning machines based on dynamical systems. In theoretical and algorithmic research, it is necessary to keep pushing the boundaries of performance and to offer guidance for experimental design. RC research can take root in theory and algorithms, with experiments serving as approximations to theoretical and algorithmic results. One disadvantage of this approach, however, is that it can be difficult to identify experimental devices that realize the nonlinear properties RC assumes in theory, which can reduce accuracy. Alternatively, researchers can treat building physical RC systems as the ultimate goal, which requires close collaboration between theoretical and experimental teams to optimize the system jointly. This approach has the advantage of taking physical constraints and application characteristics into account when designing algorithms, making better solutions at the implementation level more likely. It also raises the bar for interdisciplinary research, as participants need cross-disciplinary communication skills and knowledge, along with an openness towards the optimization of complex, multi-module couplings.
Looking ahead, unlocking the full potential of RC, and of neuromorphic computing in general, is critical yet challenging. It goes beyond releasing open-source code or solving a few specific problems; innovative ideas and interdisciplinary research formats are much needed. As concrete suggestions, researchers from the applied mathematics and nonlinear dynamics communities, who have been the main players in RC, will need to get close(r) to mainstream AI applications and try to develop next-generation RC systems that can compete in scenarios where the value of the application has already been established and recognized by industry. A good starting point is open-source tasks and datasets such as those on Kaggle, and, more generally, direct partnerships with industrial research labs to put RC into real applications. Raising awareness of the (potential) utility of RC also requires attracting interest from researchers and decision-makers who are traditionally outside the field; for instance, themed conferences and workshops may be organized to foster such discussions among scientists from diverse fields across academia and industry. Despite the many challenges, with persistence and innovation a new paradigm of intelligent learning and computing may yet emerge from work on RC and neuromorphic computing.
3. Model complexity & performance: DL and RC have distinct model sizes, training complexities and performance. In Fig. (b), we summarize the parameter size and required training petaflops of DL and RC. It is still an open question where the full potential of RC lies (indicated by a question mark in Fig. (b)), and what the training complexity would be if RC involved on the order of billions of parameters, for which we draw our hypothesis as dotted lines in Fig. (b).
Fig. 1 | Selected research milestones of RC encompassing system and algorithm designs, representing theory, experimental realizations as well as applications. For each category, a selection of representative publications is highlighted.
Fig. 3 | Trends in RC performance in typical application scenarios. Four kinds of representative scenarios are: a signal classification tasks such as spoken-digit recognition, nonlinear channel equalization and optical channel equalization; b time series prediction such as predicting the dynamics of the Mackey-Glass equations, Lorenz systems and the Santa Fe chaotic time series; c control tasks; and d PDE computation. Thick, up-pointing arrows in the panels denote error values that are not directly comparable with other works.
Fig. 4 | Application domains in which RC can potentially play important roles. Each domain corresponds to three specific example application scenarios. The six domains are 6G 136-140 , Next Generation (NG) Optical Networks 92,93,141-143 , Internet of
Table 1 | Comparison between typical physical RC implementation methods. Note 1. The operating speed of the system is influenced by data preprocessing, A/D-D/A conversion, node nonlinear response time and other factors; ideally, the operating speed can reach the response-time limit of the nonlinear nodes. Note 2. Programmability is determined by whether a reservoir can be trained and modified.
| 12,728.8 | 2024-03-06T00:00:00.000 | [
"Computer Science",
"Physics",
"Engineering"
] |
The effect of air pulse-driven whole eye motion on the association between corneal hysteresis and glaucomatous visual field progression
Corneal hysteresis (CH) measured with the Ocular Response Analyzer (Reichert: ORA) has been reported to be closely related to glaucomatous visual field (VF) progression. The air pulse applied to an eye induces not only corneal deformation but also whole eye motion (WEM), which may result in an inaccurate measurement of CH. Here we investigated the influence of air pulse-driven WEM measured with the Corvis ST (CST®, OCULUS) on the relationship between CH and VF progression in primary open-angle glaucoma patients. Using the CST parameters of the maximal WEM displacement (WEM-d) and the time to reach that displacement (WEM-t), the eyes were classified into subgroups (WEM-d low- and high-group, and WEM-t short- and long-group). For the whole population and all subgroups, the optimal linear mixed model to describe the mean total deviation (mTD) progression rate over eight reliable VFs was selected from all combinations of seven parameters including CH. As a result, the optimal models for the mTD progression rate included CH in the whole population, the WEM-d low-group and the WEM-t short-group, but not in the WEM-d high-group and the WEM-t long-group. Our findings indicate that the association between CH and glaucomatous progression can be weakened by large WEM.
Method
The study was approved by the Research Ethics Committee of the Graduate School of Medicine and Faculty of Medicine at The University of Tokyo. Written informed consent was given by patients for their information to be stored in the hospital database and used for research. The study was performed according to the tenets of the Declaration of Helsinki.
Patients. 108 eyes of 70 primary open-angle glaucoma patients (37 males and 33 females) were included. All patients had at least eight reliable VFs measured with the Humphrey Field Analyzer II (HFA, Carl Zeiss Meditec Inc, Dublin, CA), with the 24-2 or 30-2 SITA standard program. Reliable VFs were defined as a fixation loss (FL) rate <20% and a false positive (FP) rate <15%, following the criteria used in the HFA software; the false negative (FN) rate was not used as an exclusion criterion. All patients had undergone a VF measurement prior to observation in the current study. We chose a minimum of eight VFs because it has recently been reported that this number is needed to precisely analyze VF progression [16][17][18][19][20] . Eyes that experienced any surgical procedure, including trabeculectomy and cataract surgery, during or prior to this VF series period were excluded. Inclusion criteria were no abnormal eye-related findings except for OAG on biomicroscopy, gonioscopy and funduscopy. Eyes with a history of other ocular disease, such as age-related macular degeneration, were also excluded. Only subjects aged ≥20 years were included, and contact lens wearers were excluded. The mean and standard deviation (SD) of all Goldmann applanation tonometry-based intraocular pressure (GAT-IOP) measurements during the follow-up period were calculated. Axial length (AL) and central corneal thickness (CCT) were also measured in all patients using the IOL Master, ver. 5.02 (Carl Zeiss Meditec, CA) and the CST, respectively. VF data. The mean total deviation (mTD) value of the 52 test points in the 24-2 HFA VF test pattern was calculated. The progression rate of mTD was determined with linear regression analysis using the eight VFs collected from each eye, similarly to the MD trend analysis employed in the HFA Guided Progression Analysis (GPA).
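As a minimal illustration of the trend analysis just described, the sketch below computes an mTD progression rate as the slope of an ordinary least-squares line through eight VF examinations; the examination times and mTD values are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical series of eight reliable visual fields for one eye:
# examination times in years from the first VF, and mean total deviation (dB).
years = np.array([0.0, 0.5, 1.1, 1.6, 2.1, 2.6, 3.2, 3.7])
mtd   = np.array([-3.1, -3.3, -3.2, -3.6, -3.8, -3.7, -4.1, -4.2])

# The mTD progression rate is the slope of the least-squares regression line
# (dB/year), analogous to the MD trend analysis in the HFA GPA.
slope, intercept = np.polyfit(years, mtd, deg=1)
print(f"mTD progression rate: {slope:.2f} dB/year")
```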
ORA measurements. The ORA records two applanation pressure measurements, prior to and following an indentation of the cornea by a rapid air jet. Owing to its viscoelastic properties, the cornea resists the air puff, which delays the inward and outward applanation events and causes a measurable difference between the two applanation pressures. This difference is called CH. ORA measurements were carried out three times with at least a five-minute interval between measurements. ORA measurements were taken prior to, but on the same day as, the GAT-IOP measurement, and also within 180 days of the eighth VF measurement. The order of ORA and CST measurements was decided randomly. All data had a quality index >7.5. In the current study, the averages of the three measured values of CH and of corneal compensated IOP (IOPcc), but not the corneal resistance factor (CRF), were used for analysis, since previous studies have indicated that CH, but not CRF, is associated with glaucomatous progression 1-4 . Corvis ST tonometry measurement. The principles of the CST are described in detail elsewhere 8 . The instrument's ultra-high-speed camera, capturing 4,330 images per second, records 140 images of corneal deformation during the 30 ms air puff. The device applies the same load over the same time period, ensuring reliable quantification and comparison between eyes. The air pressure induces not only corneal deflection, from which various corneal deformation-related parameters such as deformation amplitude, applanation length and corneal velocity are derived, but also a backward movement of the whole cornea, the amount of which is parametrized as the WEM. The maximum WEM, which usually takes place near the second applanation event, is identified from the image sequence, and the maximal displacement (WEM-d [mm]) and the time (WEM-t [ms]) to reach that displacement are calculated (see Fig. 1).
CST measurements (software ver. 1.3r1512) were performed three times on the same day as the ORA measurements, with at least a five-minute interval between measurements. Only reliable CST measurements, as indicated by the "OK" quality index displayed on the device monitor, were used. The averages of the three measured values of WEM-d, WEM-t and CST-measured IOP (CST-IOP) were calculated and used in the analysis. CCT was also measured with the CST and used in the analyses.
Prostaglandin analogues usage assessment. Every patient's medical history was surveyed from the electronic medical records, the usage of a prostaglandin analogue (PGA) eyedrop was identified, and eyes were categorized into either the "PGA usage group" or the "PGA non-usage group". Statistical analysis. First, the relationships between WEM-d or WEM-t and CH and other ocular/systemic parameters (age, gender, CCT, CH, AL, initial mTD, GAT-IOP on the same date, and mTD progression rate) were calculated using the linear mixed model. Then, from all eyes, the average values of WEM-d and WEM-t were calculated and eyes were divided into two subpopulations in two ways; WEM-d low-group: eyes with WEM-d lower than the average value, WEM-d high-group: eyes with WEM-d higher than the average value, and also WEM-t short-group: eyes with WEM-t shorter than the average value, WEM-t long-group: eyes with WEM-t longer than the average value. Then, for each subgroup, the association between mTD progression rate and the seven variables of CH and other ocular/systemic parameters (age, mean GAT-IOP, SD of GAT-IOP, CCT, AL, and mTD in the initial VF) was investigated using a linear mixed model, where the patient was registered as a random effect (because one or two eyes of a patient were included in the current study). The optimal linear mixed model to describe mTD progression rate was selected according to the second-order bias-corrected Akaike Information Criterion (AICc) index from all possible combinations of predictors (2^7 patterns). The AICc is the corrected form of the common statistical measure of AIC that gives an accurate estimation even when the sample size is small 21 . Any magnitude of reduction in AICc suggests an improvement of the model 22,23 . The relative likelihood that one particular model minimizes information loss compared to the model with the smallest AICc is calculated as exp((AICmin − AICx)/2), where AICx is the AICc value of an arbitrary model "X" and AICmin is the minimum value over all possible models 24 . In addition, a further model selection seeking an optimal model for mTD progression rate was conducted for all eyes and for each subgroup, including IOPcc and CST-IOP in addition to the seven parameters described above. In other words, the model selection was carried out among 2^9 patterns including IOPcc or CST-IOP.
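The analyses themselves were run in R (see below); purely to illustrate the selection logic, the following Python sketch fits a random-intercept linear mixed model for every subset of the candidate predictors and ranks the candidates by AICc. The column names, the simplified parameter count, and the use of maximum likelihood rather than REML (so that likelihoods of models with different fixed effects remain comparable) are assumptions of the sketch, not details taken from the paper.

```python
from itertools import combinations
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def aicc_model_selection(df, response="mtd_rate", group="patient_id",
                         predictors=("age", "mean_gat_iop", "sd_gat_iop",
                                     "cct", "al", "initial_mtd", "ch")):
    """Fit a random-intercept linear mixed model for every subset of predictors
    and rank the 2^k candidate models by AICc (smaller is better)."""
    n = len(df)
    rows = []
    for r in range(len(predictors) + 1):
        for subset in combinations(predictors, r):
            rhs = " + ".join(subset) if subset else "1"   # "1" = intercept-only model
            fit = smf.mixedlm(f"{response} ~ {rhs}", df,
                              groups=df[group]).fit(reml=False)
            # Approximate parameter count: fixed effects (incl. intercept)
            # + random-intercept variance + residual variance.
            k = len(subset) + 1 + 2
            aic = -2.0 * fit.llf + 2.0 * k
            aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)
            rows.append({"predictors": subset, "aicc": aicc})
    ranked = pd.DataFrame(rows).sort_values("aicc").reset_index(drop=True)
    # Relative likelihood of each model versus the smallest-AICc model,
    # exp((AICc_min - AICc_x)/2), as in the text above.
    ranked["rel_likelihood"] = np.exp((ranked["aicc"].iloc[0] - ranked["aicc"]) / 2.0)
    return ranked

# Usage (hypothetical data frame with one row per eye):
# table = aicc_model_selection(eyes_df)
```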
All statistical analyses were performed using the statistical programming language 'R' (R version 3.2.3; The R Foundation for Statistical Computing, Vienna, Austria). Data Availability. The datasets analysed during the current study are available from the corresponding author on reasonable request.
Results
Characteristics of the patients as well as the CH, WEM-d and WEM-t values are summarized in Table 1. As shown in Fig. 2, the mTD progression rate was −0.25 ± 0.32 [−1.78 to 0.27] dB/year. 55 eyes were in the WEM-d low-group and 53 eyes were in the WEM-d high-group, whereas 54 eyes were in the WEM-t short-group and 54 eyes were in the WEM-t long-group. Baseline characteristics of these subgroups are shown in Table 2 and Table 3. There was no significant difference in six variables (age, mean GAT-IOP, CCT, initial mTD, CH) between the WEM-d low-group and high-group (p = 0.31 to 0.76, linear mixed model), or between the WEM-t short- and long-groups (p = 0.63 to 1.00, linear mixed model).
There was a significant relationship between WEM-d and WEM-t (p = 0.021, linear mixed model; see Table 4). The relationships between WEM-d or WEM-t and age, gender, CH, CCT, AL, GAT-IOP on the same measurement day, and mTD progression rate are shown in Table 4. WEM-d was significantly related to age and AL (p = 0.027 and 0.00061, linear mixed model); see Fig. 3. WEM-d and WEM-t had no significant relationship with the other variables, including CH (p = 0.30 and 0.91, respectively, linear mixed model).
The relationships between mTD progression rate and age, gender, CH, CCT, AL, WEM-d, WEM-t and mean GAT-IOP are shown in Table 5. Mean GAT-IOP was not significantly related to mTD progression rate in all eyes, or any of the subgroups (WEM-d low, WEM-d high, WEM-t short and WEM-t long groups; p = 0.22, 0.77, 0.12, 0.12 and 0.86, respectively, linear mixed model). As shown in Fig. 4, CH was significantly related to mTD progression rate in all eyes (mTD progression rate = −0.94 + 0.075 × CH. p = 0.011, linear mixed model). This significant relationship was also observed in the WEM-d low group (mTD progression rate = −1.20 + 0.11 × CH. p = 0.034, linear mixed model), however, it was not observed in the WEM-d high group (p = 0.22), WEM-t short group (p = 0.088) or the WEM-t long group (p = 0.085).
As shown in Table 6, in all eyes (108 eyes), the optimal model describing mTD progression rate was: mTD progression rate = −0.95 + 0.075 × CH (AICc = 55.79); adding other variables did not improve the model, i.e., did not decrease the AICc. In the WEM-d low-group, the optimal model describing mTD progression rate was: mTD progression rate = −1.20 + 0.11 × CH (AICc = 42.78). In the WEM-d high-group, the optimal model describing mTD progression rate was: mTD progression rate = −0.64 + 0.028 × mean GAT-IOP (AICc = 17.66). In this group, the model that included only CH as an independent variable had an AICc of 18.70 (mTD progression rate = −0.65 + 0.041 × CH). In the WEM-d high-group, the relative likelihood that the smallest-AICc model was the optimal model, compared to the CH monovariate model, was calculated as 1.68.
In the WEM-t short-group, the optimal model describing mTD progression rate was: mTD progression rate = −1.54 + 0.038 × mean GAT-IOP + 0.080 × CH (AICc = 40.18). In this group, the model including only CH as an independent variable had an AICc value of 40.38 (mTD progression rate = −1.05 + 0.085 × CH), which had the second smallest AICc across all possible 2 7 models. In the WEM-t long-group, the optimal model describing mTD progression rate was: mTD progression rate = 0.51 − 0.012 × age + 0.012 × Initial mTD (AICc = 17.39). In this group, the model including only CH as an independent variable had an AICc value of 23.90, which was, again, the second smallest across all possible models. The relative likelihood that the smallest AICc model was the optimal model compared to the CH monovariate model calculated was 1.10 and 25.9 for WEM-t short-group and WEM-t long group, respectively.
Further model selection seeking an optimal model for mTD progression rate, including IOPcc and CST-IOP as covariates, resulted in a different optimal model only in the WEM-t short-group. This optimal model was: mTD progression rate = −0.59 + 0.075 × mean GAT-IOP − 0.048 × IOPcc (AICc = 39.23). The same variables as before were selected in all eyes and in the other subgroups, as shown in Table 6.
As shown in Table 7, 65 eyes were in the PGA usage group and 43 eyes were in the PGA non-usage group. Initial mTD was significantly different between these groups (p = 0.03, linear mixed model 23.87], respectively; these were again not significantly different (linear mixed model, p = 0.24). These differences remained non-significant after including age and AL as a covariate (p = 0.83 and 0.23, respectively). There was no significant difference in CH (p = 0.34) or the other variables between the two groups ( Table 7).
Discussion
In the current study, ORA and CST measurements were conducted in 108 eyes of 70 patients with POAG. CH was significantly related to mTD progression rate in all eyes. We also investigated the relationship between CH and mTD progression rate in relation to WEM. CH was significantly related to mTD progression rate in the WEM-d low-group; however, this relationship was not significant in the WEM-d high, WEM-t short and WEM-t long-groups. Furthermore, in all eyes, as well as in the WEM-d low and WEM-t short-groups, the optimal models describing mTD progression rate included CH, whereas this was not the case for the WEM-d high and WEM-t long-groups. The usage of PGA was not significantly related to WEM or the other parameters. The associations between ORA-CH and both glaucomatous morphological and functional changes have been investigated in various previous studies. Some studies have reported that CH is associated with mean cup depth, cup-to-disc ratio, rim area, average retinal nerve fiber layer thickness and the acquired pit of the optic disc [25][26][27] . Others have reported that CH is lower in glaucomatous eyes than in non-glaucomatous eyes [28][29][30][31][32] . Furthermore, low CH has been reported to be associated with fast progression of glaucomatous VF and optic nerve changes [1][2][3][4] . As CH is the difference between the pressures at the two corneal applanation events, some studies argue that the CH value reflects the 'damping capacity' of the cornea, i.e., its function as a stress absorber 9,33,34 . From this point of view, the ORA-CH measurement might be invalid if part of the energy generated by the air pulse is converted into whole-eye kinetic energy, resulting in a perturbed stress profile on the cornea itself and an inaccurate measurement of CH.
In the WEM-d high-group, as shown in Table 6, the optimal model included mean GAT-IOP, but not CH. The AICc value of the CH monovariate model in this group was higher than that of this optimal model by 1.04. In the WEM-d low-group, in contrast, the CH monovariate model itself had the minimum AICc value among all the possible models. On the other hand, in the WEM-t long-group, the optimal model included age and initial mTD. The AICc value of this optimal model was smaller by 6.51 than that of the CH monovariate model. In the WEM-t short-group, the AICc value of the CH monovariate model was higher than that of the optimal model, which included mean GAT-IOP and CH, by 0.20. These findings suggest that a larger WEM displacement (WEM-d) weakens the association between CH and glaucomatous VF progression, because some proportion of the applied energy does not contribute solely to corneal deformation but also causes significant eye motion, which may compromise a reliable CH measurement. Also, a longer time taken to reach the maximal displacement (WEM-t) may imply that more energy is consumed as a result of friction between the eye and the orbit. Inclusion of IOPcc and CST-IOP in the model selection did not alter the variables selected in the optimal model in all eyes, the WEM-d low-group, the WEM-d high-group, and the WEM-t long-group. On the other hand, in the WEM-t short-group, IOPcc and mean GAT-IOP were included (AICc = 39.23) instead of CH (AICc = 40.18). This is probably because IOPcc is closely associated with CH 35,36 , but is also associated with the IOP level at the time of measurement (Asaoka R, et al. IOVS. 2008;49: ARVO E-Abstract E703). Thus, these results suggest that, in the WEM-t short-group, mTD progression rate was best described by the well-established risk factor of GAT-IOP [37][38][39][40][41] , in conjunction with CH and the IOP at the time of ORA measurement. In the WEM-d low-group, on the contrary, the optimal model for mTD progression rate still included CH. WEM-d and WEM-t might themselves be related to the IOP level at the time of measurement, so we also analyzed the relationship between these WEM-related parameters and the three IOPs (IOPcc, CST-IOP and GAT-IOP) measured on the day of the CST measurement. However, none of these relationships were significant (p > 0.05, linear mixed model; data not shown in the Results).
Previous large population studies established GAT-IOP [37][38][39][40][41] , age [38][39][40][41] , and VF damage at treatment initiation 38 as predictive factors for glaucomatous VF progression. Among these established risk factors, in the current study, mean GAT-IOP was included in the optimal model for mTD progression rate in the WEM-d high-group, and age and initial mTD were included in the optimal model for mTD progression rate in the WEM-t long-group, whereas CH was not included in the optimal models in these groups (see Table 6). This is probably because CH was not accurately measured in the presence of large WEM, and these other variables were selected instead. There was no significant relationship between CH and WEM-d or WEM-t in the current study. CH was evaluated with the ORA, whereas WEM was measured with the CST. In the ORA measurement, the magnitude of the air jet applied to the cornea is proportional to IOP (the P1 pressure). In the CST measurement, on the other hand, the air pressure has a uniform magnitude (70 mmHg) for all eyes, irrespective of the IOP level. Thus, a tighter relationship might be observed between CH and WEM if the latter were measured using the ORA. Also, despite its name, the WEM in the CST is a measurement of the cornea, which is not necessarily identical to the genuine motion of the whole eye, because the shape of the eyeball differs from eye to eye. Thus, different results could be observed if the motion of the back of the eyeball were measured.
Topical usage of PGA induces various adverse effects around the eyelids, including deepening of the upper eyelid sulcus, flattening of the lower eyelid bags, orbital fat atrophy and a tight orbit, collectively termed 'prostaglandin-associated periorbitopathy' [10][11][12] . As PGA induces a change in the connective tissue of the orbit, it can also affect eye motion in response to external stress. Further, PGA usage can alter the cornea and sclera. For instance, Harasymowycz et al. reported that the usage of travoprost decreased CCT 42 . Moreover, PGA usage leads to upregulation of matrix metalloproteinases and downregulation of tissue inhibitors of matrix metalloproteinases associated with altered gene expression, as well as decreased collagen type I levels and corneal thickness, in animal and human experiments [13][14][15] . The effects of these changes in the cornea, sclera and orbit on WEM are undoubtedly complex; a tight but fat-atrophic orbit may influence WEM, but at the same time, a weakened and thin cornea would be more easily deformed by an applied external stress. In the current study, the WEM-d and WEM-t values were not significantly different between the PGA usage and non-usage groups. The current data were obtained from a real-world clinic, where PGA is usually prescribed in eyes with visual field progression. Thus, a future study is needed to further investigate the relationship between PGA usage and the movement of the whole eye, measuring whole eye movements before and after PGA usage.
A limitation of the current study is that age was not matched between the PGA usage and non-usage groups. We statistically adjusted this effect in the current analyses, however, a further study would be needed to confirm the current results using age-matched groups.
In conclusion, the effect of eye motion on the relationship between CH and glaucomatous progression was investigated. As a result, the relationship between CH and glaucomatous visual field progression was weak in eyes with large WEM. PGA usage had no significant effects on WEM. | 4,795 | 2018-02-14T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Mantle lithosphere transition from the East European Craton to the Variscan Bohemian Massif imaged by shear-wave splitting
We analyse splitting of teleseismic shear waves recorded during the PASSEQ passive experiment (2006–2008) focused on the upper mantle structure across and around the Trans-European Suture Zone (TESZ). Altogether 1009 pairs of the delay times of the slow split shear waves and orientations of the polarized fast shear waves exhibit lateral variations across the array, as well as back-azimuth dependences of measurements at individual stations. Variable components of the splitting parameters can be associated with fabrics of the mantle lithosphere of tectonic units. In comparison with a distinct regionalization of the splitting parameters in the Phanerozoic part of Europe, which particularly in the Bohemian Massif (BM) correlate with the large-scale tectonics, variations of anisotropic parameters around the TESZ and in the East European Craton (EEC) are smooth and of a transitional character. No general and abrupt change in the splitting parameters (anisotropic structure) can be related to the Teisseyre–Tornquist Zone (TTZ), marking the edge of the Precambrian province on the surface. Instead, regional variations of anisotropic structure were found along the TESZ/TTZ. The coherence of anisotropic signals evaluated beneath the northern part of the Brunovistulian in the eastern rim of the BM and the pattern continuation to the NE towards the TTZ support the idea of a common origin of the lithosphere micro-plates, most probably related to Baltica. Smooth changes in polarizations of the core-mantle boundary refracted shear waves (SKS), or even a large number of null splits, northward of the BM and further across the TESZ towards the EEC indicate less coherent fabrics and a transitional character of structural changes in the mantle beneath the surface trace of the TESZ/TTZ. The narrow and near-vertical TTZ in the crust does not seem to have a steep continuation in the mantle lithosphere. The mantle part of the TESZ, whose crust was formed by an assemblage of suspect terranes adjoining the EEC edge from the southwest, appears in our measurements of anisotropy as a relatively broad transitional zone in between the two lithospheric segments of different ages. We suggest a southwestward continuation of the Precambrian mantle lithosphere beneath the TESZ and the adjacent Phanerozoic part of Europe, probably as far as towards the Bohemian Massif.
Introduction
The Trans-European Suture Zone (TESZ) represents a distinct tectonic feature that can be traced from northwestern to southeastern Europe over a length of ∼3500 km and manifests the contact zone between Precambrian and Phanerozoic Europe (Fig. 1). The two parts of Europe differ not only in their ages, but also in their structure and in several other physical parameters, which can be traced in various geophysical models of the region, e.g. in seismic velocities, anisotropy, and heat flow (e.g. Spakman, 1991; Babuška et al., 1998; Piromallo and Morelli, 2003; Majorowicz et al., 2003; Artemieva, 2009; Jones et al., 2010; Debayle and Richard, 2012). The East European Craton (EEC) appears as a large rigid domain with a thick lithosphere that is bordered in the southwest by the relatively narrow, linear Teisseyre-Tornquist fault zone (TTZ). On the other hand, the region westward of the TESZ represents a Variscan assemblage of micro-plates with varying lithosphere thickness and fabrics, partly rimmed by rifts and subduction zones reflecting micro-plate collisions (e.g. Plomerová and Babuška, 2010). The central part of the long TESZ, running through the territory of Poland, is a zone about 150-200 km wide. [Fig. 1 caption fragment, after Pharaoh (1999): STZ stands for the Sorgenfrei-Tornquist Zone, TBU for the Teplá-Barrandian Unit included in the Moldanubian Zone of the Bohemian Massif (BM).]
The term TESZ was introduced for an assemblage of suspect terranes adjoining the EEC edge from the southwest (Berthelsen, 1992) and the TTZ thus marks the northeastern boundary of the TESZ (Dadlez et al., 2005, see Fig. 1).
Three decades of controlled-source seismic (CSS) exploration of the TESZ crust (Guterch et al., 1986, 1994; Grad et al., 1999, 2003; Janik et al., 2002, 2005; Środa et al., 2002; Wilde-Piórko et al., 1999, 2010) resulted in detailed, but often different interpretations of its structure. In general, however, the structure of the crystalline crust of the TESZ, covered by up to 12 km of sediments, seems to be more complicated than that of the Variscan belt to the west and of the EEC, with sudden structural changes observed laterally along the suture (Dadlez et al., 2005). The authors, as well as Narkiewicz et al. (2011), interpret the complex structure of the broad TESZ as a result of detachment and accretion of lithospheric fragments of Baltica, Avalonia and various Gondwana-derived exotic terranes. To better understand the processes that formed this part of Europe, we have to look deeper beneath the crust, i.e. into the lower lithosphere and the upper mantle below, and probe their velocity structure and fabrics.
The PASSEQ array of seismic stations (Fig. 2 and http://geofon.gfz-potsdam.de/db/station.php, network code PQ) was designed to record teleseismic data during 2006-2008 for studying variations of the upper mantle velocity structure across the TESZ. The array spans the central part of the TESZ and covers a vast band ∼1000 km long and ∼600 km broad (Wilde-Piórko et al., 2008). Densely spaced broad-band (BB) and short-period (SP) stations are mixed in the central band of the array. [Fig. 2 caption fragment: PASSEQ array (2006-2008) designed to study upper mantle structure of the TESZ; labels are assigned to some of the stations for easier orientation.] Seven parallel lines of SP and of
BB stations complement on both sides the central backbone of the array.In combination with other large-scale European passive seismic experiments, particularly with the TOR, which covered the northwestern part of the TESZ (Gregersen et al., 2002), and the SVEKALAPKO, which concentrated on upper mantle structure around the Proterozoic/Archean contact in south-central Fennoscandia (Hjelt et al., 2006), the PASSEQ array complements the international data sets needed for high-resolution studies of the European lithosphere and the upper mantle, to help in answering questions on structure and evolution of the continent.
In this paper, we present our findings on the mantle structure derived from shear-wave splitting, evaluated from teleseismic data recorded during the PASSEQ array operation.The research aims at detecting changes in anisotropy of the upper mantle beneath the TESZ and surrounding tectonic units.Mapping variations of anisotropic structure of the upper mantle helps answer questions on how the zone, approximately delimited at the surface, may continue down to the upper mantle, as well as on a possible identification of individual blocks building the lower lithosphere.
Data and method
Shear-wave splitting represents nowadays a standard method to measure seismic velocity anisotropy of the upper mantle.Various methods are applied to get splitting parameters and to model anisotropy of the continental upper mantle (e.g.Vinnik et al., 1989;Silver and Chan, 1991;Silver and Savage, 1994;Menke and Levin, 2003), each of them having both advantages and limitations (Vecsey et al., 2008;Wüstefeld and Bokelmann, 2007).To retrieve 3-D orientation of large-scale anisotropic structures in the upper mantle, we have applied a modified version (Vecsey et al., 2008;code SPLITshear, www.ig.cas.cz/en/research-teaching/software-download) of a method introduced by Šílený and Plomerová (1996).The method exploits signals on all three components of the broad-band recordings and analyses them in the ray-parameter coordinate system (LQT).To study lateral variations of the anisotropic signal in detail, for which we need densely spaced seismic stations, we included also waveforms recorded by medium-period seismographs (Ts ∼ 5 s) into the splitting analysis, because the dominant period of shear waves is in the range of 8-10 s for most of the broad-band recordings.Some stations, equipped with 2-3 s seismometers, allowed analysing shear waves as well.However, we always mark anisotropic parameters evaluated at these stations in a different way and consider them as complementary, and only if they are consistent with results of surrounding BB stations.All waveforms were filtered by the third order Butterworth band-pass filter 3-20 s.For details of the method see Vecsey et al. (2008).Here we describe only the main principles needed for understanding our figures and results.
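As a small illustration of the waveform preprocessing mentioned above, the sketch below applies a third-order Butterworth band-pass corresponding to the 3-20 s period band using SciPy; the sampling rate, the synthetic trace and the choice of zero-phase (filtfilt) filtering are assumptions of the example, not details of the PASSEQ processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_3_20s(trace, fs):
    """Third-order Butterworth band-pass for the 3-20 s period band.

    trace : 1-D array of ground-motion samples
    fs    : sampling frequency in Hz (assumed, e.g. 20 Hz in this example)
    """
    f_low, f_high = 1.0 / 20.0, 1.0 / 3.0          # periods 20 s and 3 s -> Hz
    b, a = butter(3, [f_low, f_high], btype="bandpass", fs=fs)
    return filtfilt(b, a, trace)                    # zero-phase filtering (a choice of this sketch)

# Example with a synthetic trace sampled at 20 Hz
fs = 20.0
t = np.arange(0, 600, 1.0 / fs)
trace = np.sin(2 * np.pi * t / 10.0) + 0.3 * np.sin(2 * np.pi * t * 2.0)
filtered = bandpass_3_20s(trace, fs)
```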
Figure 3 shows an example of splitting of the shear wave refracted at the core-mantle boundary (SKS) recorded at temporary station PA65. In total we obtained 1009 pairs of splitting parameters from the PASSEQ recordings, including null measurements (Supplement Table S1). The fast S polarizations and split delay times could be determined at 158 stations of the PASSEQ array, with 6.4 splitting pairs per station on average. Splitting evaluations from all 15 events were feasible at 19 stations of the array. The shear-wave splitting parameters are evaluated by minimizing the energy on the transverse component T (Vecsey et al., 2008), i.e. with the original method of Silver and Chan (1991), which we modified into the ray-parameter LQT coordinate system. The broad elliptical particle motion (PM) calculated from the QT components changes to a linear one for the fast (F) and slow (S) components after the coordinate rotation and after applying a time shift correcting the splitting delay. The minimum of a misfit function in the (δt, ψ) space, where δt is the time shift between the fast and slow split shear waves and ψ is the orientation of the fast shear wave in the (Q, T) plane, defines the splitting parameters, with which one can measure the velocity anisotropy. The depth and steepness of the minimum, along with the bootstrap diagrams, are used to evaluate the reliability of the measurements. The orientation of the fast shear wave given by the angle ψ in the QT plane is defined by two angles - the azimuth ϕ (measured clockwise from the north) and the inclination angle θ measured from the vertical axis upwards. Because polarizations often differ for waves coming from opposite directions (i.e. from azimuth ϕ and from ϕ + 180°), in spite of their steep incidences, we always denote the polarization azimuth by an arrow pointing from a station, or from a ray-piercing point, in the down-going direction. This way of presenting the results shows the fast S orientation systematically and allows us to detect boundaries between mantle domains with differently oriented anisotropy (Fig. 4; the "arrow style" of presentation shows the domain boundary, while the standard azimuthal approach does not). Such an approach allows us to depict variations of the splitting parameters in the full 0-360° back-azimuth range (i.e. including different polarizations for opposite directions), though usually the parameters are plotted modulo 90°. Plotting modulo 90° improves the azimuth coverage only artificially and, moreover, implements an assumption of horizontal symmetry axes. Vecsey et al. (2011) demonstrate a clear 360° periodicity of synthetic splitting parameters calculated for a model with a tilted axis. However, noise in the data causes a tendency towards 90° periodicity, which can be misinterpreted as a double-layer model.
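The sketch below illustrates, in a simplified two-component form, the grid search that the minimum-transverse-energy approach performs: for each trial fast direction and delay time the components are rotated into trial fast/slow axes, the slow component is shifted by the trial delay, the traces are rotated back, and the energy left on the transverse component is recorded; the (ψ, δt) pair minimizing that energy is taken as the splitting estimate. It is a schematic Silver-and-Chan-type search, not the SPLITshear code, and it ignores the full LQT geometry, the error analysis and the bootstrap steps described above.

```python
import numpy as np

def splitting_grid_search(q, t, dt_sample, max_delay=3.0, angle_step=1.0):
    """Estimate apparent splitting (fast angle psi in degrees, delay time in s)
    by minimizing the energy left on the transverse component.

    q, t       : the two horizontal-plane components of the SKS waveform
    dt_sample  : sampling interval in seconds
    """
    best = (None, None, np.inf)
    angles = np.arange(-90.0, 90.0, angle_step)
    delays = np.arange(0.0, max_delay + dt_sample, dt_sample)
    for psi in np.radians(angles):
        # rotate the Q/T components into trial fast/slow coordinates
        fast = np.cos(psi) * q + np.sin(psi) * t
        slow = -np.sin(psi) * q + np.cos(psi) * t
        for delay in delays:
            shift = int(round(delay / dt_sample))
            # advance the slow wave by the trial delay (edge wrap-around ignored for simplicity)
            slow_corr = np.roll(slow, -shift)
            # rotate back and measure the residual transverse energy
            t_corr = np.sin(psi) * fast + np.cos(psi) * slow_corr
            energy = float(np.sum(t_corr ** 2))
            if energy < best[2]:
                best = (np.degrees(psi), delay, energy)
    return best  # (fast-axis angle, split delay, residual T energy)
```

In the real analysis this minimum is only accepted after assessing the depth and steepness of the misfit surface and the bootstrap measures mentioned above.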
While processing the data of the PASSEQ array, we faced several difficulties. Careful processing of the data mostly made it possible to reveal mistakes caused, e.g., by an interchange of the N, E, Z components, or by polarity flipping, though it was not always straightforward, particularly when both errors occurred simultaneously. Nevertheless, incorrect seismometer orientation to the north proved to be the most difficult obstacle. When a suspicion of a misorientation appeared, we superimposed all particle-motion (PM) plots at a station (Fig. 5) and searched for a systematic deviation of the PM. Poor linearity of the corrected particle-motion patterns is another indication of sensor misalignment (Liu and Gao, 2013). We estimate that with the use of the PM stacking technique only misorientations larger than ∼10° can be identified, because individual PMs can vary due to structure and noise and can form, at some stations, two different groups depending on back azimuth. Figure 5 shows PMs that clearly identified misoriented seismometers at two stations - PC23 (temporary) and GKP (permanent) - in contrast with the PMs at JAVC, whose seismometer was well oriented to the north. Our estimates of the deviations attain 28 and 41° at the PC23 and GKP stations, respectively (Table 1). We can thus conclude that the distance between stations should be small relative to the expected variations in structure, in order to eliminate potential technical errors, which could otherwise be misinterpreted as effects of mantle structure.
We have tested the potential danger of seismometer misorientation by analysing signals of different quality on well-oriented components and then on horizontal components rotated by only 5° off the correct direction, which simulated a seismometer misalignment. Changes in the split delay times of a waveform classified as "good" lie within the error interval, but the azimuths of the fast polarization differ by 15° if the "minimum T energy method" is used (Table 2). The "eigenvalue method" recovers the "new" polarization azimuth well. On the other hand, in the case of "fair" signals the difference in polarization azimuths, evaluated by the "minimum T energy method" from the original recordings and from those rotated by 5°, attains 67°. The "eigenvalue method" returns a fast polarization azimuth that differs by 5° from that of the original recordings, but it doubles the split delay time regardless of seismometer orientation (Table 2). Vecsey et al. (2008) showed that the "minimum T energy method" is more robust than the "eigenvalue method" in the case of noise in a signal. However, as we show here, the "minimum T energy method" appears to be more sensitive to potential errors in seismometer orientation. High accuracy in the northward orientation of seismometers can and should be technically ensured, e.g. with the use of a gyrocompass during station installation, but we can hardly avoid noise completely. Stacking individual splitting measurements from waves propagating closely through the mantle can help to reveal a distortion of splitting parameters due to noise in the signals. Therefore, we consider the "minimum T energy method" the most robust for analysing SKS waves, which should exhibit linear polarizations, i.e. no energy on the T component, when reaching the bottom of an anisotropic medium.
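A minimal way to reproduce the misalignment test described above is to rotate the horizontal components by a small angle before repeating the splitting analysis; the sketch below does exactly that, with the sign convention (positive angle = sensor rotated clockwise from true north) being a choice made for this example.

```python
import numpy as np

def simulate_misalignment(north, east, misalignment_deg):
    """Return the horizontal traces that a sensor rotated clockwise by
    `misalignment_deg` from true north would have recorded."""
    a = np.radians(misalignment_deg)
    # The sensor's "N" axis points `a` degrees east of true north,
    # and its "E" axis `a` degrees south of true east.
    n_rec = np.cos(a) * north + np.sin(a) * east
    e_rec = -np.sin(a) * north + np.cos(a) * east
    return n_rec, e_rec

# e.g. repeat the splitting analysis on traces rotated by 5 degrees:
# n5, e5 = simulate_misalignment(north, east, 5.0)
```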
Results
Most papers presenting results of shear-wave splitting analysis search for an azimuth of the fast shear phase and a split delay time (δt) of the slow shear phase.The azimuth of the fast shear wave is then a priori associated with the horizontal direction of the "fast" olivine axis a of a model mantle peridotite.To summarize all shear-wave splitting parameters evaluated in such "standard" way, we plot average fast shear-wave polarizations (see Supplement Table S1 for individual measurements) as bars with their length proportional to the split delay time (Fig. 6a).Though this presentation shows only azimuthal anisotropy with the π-periodicity, we can identify main large upper mantle provinces with different anisotropic signal: the orientations from W-E prevail in the Bohemian Massif (BM) in general (see Babuška et al., 2008), less coherent fast S orientations occur to the northwest of the BM, while between the Moravian Line and the Carpathians front in the east of the region, the NW-SE average polarizations are very stable and the signal is strong even in close vicinity of the TTZ.This is not the case in the region north of the Elbe-Odra Line.Further to the east, across the TTZ, the anisotropic signals are also less coherent.Beneath the EEC the anisotropic signal is weaker in comparison with that southwest of the TTZ and particularly in the Bohemian Massif.
The location of the PASSEQ array was unfavourable for recording SKS phases, because they do not cover a complete back-azimuth range (see inset of Fig. 6a). Earthquakes which occurred during the recording period of the array at epicentral distances larger than 85° and with a sufficient shear-wave signal/noise ratio concentrate into two back-azimuth fans: 30-70° and 240-300°. By separating polarizations of SKS waves arriving from western and northeastern azimuths, one can get a better insight into geographical variations of the splitting parameters and directional variations at a site (Fig. 6b). We also show individual polarizations as arrows pointing from ray-piercing points at a depth of 80 km, with their lengths proportional to the split delay times (Fig. 7).
Null-split measurements are also included (see Supplement Table S1).
The splitting parameters evaluated from the PASSEQ recordings of SKS phases depend on back azimuth and exhibit significant lateral variations within the array. Because two directions of SKS shear-wave propagation dominate, we divide the anisotropic signals into two groups comprising nearby events, whose back azimuths are very close and lie towards the NE and the NW. Combining results for nearby events allows us to eliminate incorrectly determined parameters (see also Liu and Gao, 2013) and to reliably recognize geographical changes of mantle structure.
Several provinces, each exhibiting its own characteristics of the shear-wave PM and apparent splitting parameters, can be delimited around the TESZ. Broad elliptical polarizations within the BM, with mostly NW-W oriented fast S polarizations, progressively turn to narrow PMs and null splits at stations north of the BM for waves from the NE (Figs. 8 and 9). In comparison with the lateral extent of the BM, there are only small regions indicating a consistent anisotropic signal in the upper mantle to the north of the massif along the PASSEQ array. Clear and coherent anisotropic signals come from shear waveforms at stations in a relatively small region around 14° E longitude and between 51.5 and 52° N latitude, in the central part of the array crossing the TESZ, and at some stations located in the EEC, east of the TTZ (Fig. 8, see also Fig. 1). SKS phases arriving at stations located along the northwestern rim of the array do not split at all, with the only exception of the small region mentioned above.
Three bands of marked PMs are evaluated from recordings of the BB stations (Fig. 9). Waves propagating from the NW (Figs. 10 and 11) also clearly demonstrate regional variability of the splitting parameters, though for these directions we evaluate a large number of apparent null splits from very narrow PMs in a much larger portion of the PASSEQ array than for waves from the NE. Null splits dominate in the western part of the array beneath the TESZ, between the BM and the TESZ, and beneath a large part of the BM. On the other hand, strong and coherent fast polarizations are evaluated at most stations in the eastern part of the array, as well as at several stations north of the TTZ in the EEC, the latter with less coherent polarization orientations.
At some stations (e.g. CLL, Fig. 12), we evaluate splitting parameters which differ significantly even for data from a narrow band of azimuths, even if only relatively stable solutions are considered. For a subset of the PASSEQ stations, we show how sensitive the results are to the width of the elliptical particle motion. As expected, the wider the PM, the more stable the splitting solution we get (compare the results for stations PC21, MOX and CLL, Fig. 12). Split delay times at CLL attain values from a near-null split (i.e. undefined δt) to δt = 1.2 s, with diffuse fast polarization azimuths. In general, we attribute the differing polarization azimuths to signal distortion due to noise, or to local structure, including a shallow one. The CLL station is located at the boundary between the consistently split shear waves in the BM and the null splits northwest of the BM. The complex structure at the rim of the BM significantly affects the splitting parameters evaluated even from waves arriving from very close directions.
The amount of energy on the T component (see Fig. 3), which determines the width of the PM ellipse, is not the only factor decisive for the reliability of the splitting results. For example, if the Q/T amplitude ratio is ∼10:3, then a signal/noise ratio of ∼4:1 on the T component is the minimum value indicating good reliability of the results (Table 2), in addition to the bootstrap measures (Vecsey et al., 2008) required for a splitting measurement to be classified as "good". Interpreting results at stations which have only few data, and without proper quality checking, could lead to wrong inferences on the upper mantle structure (see also Liu and Gao, 2013).
Discussion
Similarly to other continental regions (e.g.Plomerová and Babuška, 2010), anisotropic signals that originate in the upper mantle vary in different provinces covered by the PASSEQ array.Respective mantle regions seem to be delimited by distinct tectonic features.Two types of changes of apparent polarization parameters, i.e. fast S polarization and time delay δt variations, need to be considered -(1) at individual stations of the array in dependence on direction of wave propagation as well as (2) regional variations for particular directions of propagation.The former leads to 3-D modelling of a structure of individual mantle domains, the second to delimiting approximate domain boundaries.Reliable modelling of anisotropic structures in 3-D requires a good directional coverage, which is impossible in the case of the SKS waves.Nevertheless, a regionalization of the mantle, based on changes of evaluated anisotropic parameters is plausible.
We concentrate on the variable component of the splitting parameters, which we associate with the lithosphere structure. The southern part of the PASSEQ array covers the Bohemian Massif (BM), where detailed and intensive research on the anisotropic structure of the lithosphere has been carried out. Joint inversion of anisotropic parameters of body waves (shear-wave splitting and P-wave travel-time residuals) resulted in the retrieval of several domains of mantle lithosphere with different anisotropic structures forming the massif (see Babuška and Plomerová, 2013, for a review). North of the BM, regional changes of the anisotropic signal are smooth and less distinct.
The anisotropic signal detected in different regions is often interpreted in association with the present-day flow in the asthenosphere. However, the European plate moves very slowly and without a clear direction (e.g. Gripp and Gordon, 2002). Recent geodynamic models of mantle flow (Conrad and Behn, 2010) also give a very slow flow, if any, in the mantle beneath the whole of Europe. We thus cannot expect a substantial contribution from the asthenosphere to the overall anisotropy pattern. Therefore, similarly to the BM lithosphere, we associate a substantial component of the evaluated anisotropy with the mantle lithosphere structure. Though small-scale anisotropic structures are common in the crust, it is generally accepted that only up to ∼0.3 s of the split delay time can be attributed to anisotropy of the heterogeneous crust (e.g. Huang et al., 2011). Moreover, steeply propagating SKS waves do not split in transversely isotropic media with a vertical axis of symmetry (e.g. sedimentary basins).
Lateral changes of splitting parameters and tectonics westward of the TTZ
Complex tectonics of Phanerozoic Europe - westward of the TTZ - is reflected in variations of the PMs and the splitting parameters at stations in this part of the PASSEQ array. The north-south oriented Variscan Front (VF) around ∼16° E, paralleling the Moravian Line (Fig. 1), separates the narrow PMs beneath the Brunovistulian (BV), Upper Silesian (US), Malopolska (MM) and Lysogory (LU) terranes from the strong anisotropic signal within the major part of the Bohemian Massif for waves from the NE (Fig. 9). Similarly, this part of the VF separates the weak anisotropic signals in the BM for waves from the NW and the significant anisotropic signal in the Brunovistulian, US, MM and LU (Fig. 10). This means that the anisotropic structures west and east of this part of the VF differ and that neither of them can be approximated by a simple anisotropic model with a horizontal symmetry axis. Split delay times around 1 s locate the main source of the anisotropy in the upper mantle, and the regional character of the splitting, in correlation with the large-scale tectonics, indicates that a major part of the anisotropic signal most probably originates in the mantle lithosphere. A simple estimate of the depth interval where the source of anisotropy might be located, obtained by considering the Fresnel zones of rays approaching two nearby stations (e.g. Alsina and Snieder, 1995; Chevrot et al., 2004), can be used only in the case of azimuthal anisotropy, i.e. when the mantle fabric can be approximated by anisotropic models with a horizontal symmetry axis. However, this is not generally valid for the complex fabrics of the continental mantle lithosphere (e.g. Babuška and Plomerová, 2006). In particular, there is the issue of the upper limit of the estimated depth interval (the minimum depth) at which the source of anisotropy can be located. Considering anisotropy with an inclined symmetry axis and evaluating the splitting parameters in the QT plane, we get different splitting parameters for waves approaching the station steeply but from opposite azimuths. The resulting splitting (fast S polarization and δt) depends on the direction of propagation, whereas when azimuthal anisotropy is considered (i.e. as a 2-D phenomenon), the fast S polarization is "constant" and the fast S azimuth is generally used in association with the orientation of the symmetry axes. In the case of dipping symmetry axes, we lose information about the minimum depth below which the source of anisotropy might be located (e.g. depth z1 in Alsina and Snieder, 1995) and we cannot associate the fast S polarization azimuth (either average values or polarizations for a particular back azimuth) directly with the symmetry axis, but have to invert for it.
Previous studies of the upper mantle structure beneath the BM, based on data from a series of passive seismic experiments from 1998 to 2009 and using different seismological techniques, model the BM mantle lithosphere as an assemblage of several domains retaining their own fossil fabrics (Plomerová et al., 2007, 2012a; Karousová et al., 2012, 2013; Geissler et al., 2012; Babuška and Plomerová, 2013). Joint analysis and inversion of anisotropic parameters of body waves resulted in 3-D self-consistent anisotropic models of the domains with differently oriented and inclined symmetry axes. Processing data from the dense networks of the BOHEMA II and III passive seismic experiments identified two domains in the Brunovistulian mantle lithosphere. Its southern part underthrust the eastern edge of the BM up to about 100 km westward beneath the Moldanubian (MD) part of the massif (Babuška and Plomerová, 2013). The northern part of the Brunovistulian mantle lithosphere, covered by the US crustal terrane, steeply collides with the Sudetes in the northeastern BM (Plomerová et al., 2012a). The authors suggested that the southern and northern fragments of the Brunovistulian micro-plate, separated by the Elbe Fault Zone (EFZ, dashed line in Fig. 1), might have originally belonged to different plates, i.e. Gondwana and Baltica, respectively. Seismic data from the PASSEQ array, including directional variations of P-wave residuals, suggest a continuation of the northern Brunovistulian anisotropic signal without significant changes towards the TTZ (Vecsey et al., 2013), which thus provides additional support for this idea. Moreover, anisotropic signals in P-spheres in the northern half of the PASSEQ stations (Plomerová et al., 2012b) resemble, in general, those found beneath the southernmost tip of the Baltic Shield (Plomerová et al., 2002; Eken et al., 2010).
In this paper, we mainly concentrate on the region north and northeast of the BM, where anisotropic signal changes significantly.Our shear-wave splitting measurements from PASSEQ data indicate prevailingly smooth changes in mantle fabrics northward of the BM.Null splits or weak anisotropic signals prevail at stations along the Rheic Suture and in the easternmost part of the Rhenohercynian domain that parallels the TESZ .However, within this domain of potential low anisotropy, two relatively small regions with consistent anisotropic signal are detected by waves propagating from the NE.The first one is located between the most bent part of the VF and the Rheic Suture, the second one seems to be linked with crossing of the VF and Moravian Line, in close vicinity to the TTZ.However, apart from the complex tectonics, waveforms at stations in the TESZ suffer from noise due to the thick sedimentary cover of the crystalline basement.Distinct SKS polarizations of waves from the NW in the Brunovistulian domain, as well as delay times between 1 and 2 s, remain almost unchanged across the TESZ towards the EEC (Fig. 10), whereas polarizations of SKS waves arriving from the NE change abruptly at the TTZ (see station line II in Fig. 8).
Lateral changes of splitting parameters and tectonics eastward of the TESZ
Regional variations of the splitting parameters, as well as their back-azimuth dependences, occur also eastward of the TESZ, but groups of stations with similar anisotropic parameters are less coherent than those in the Variscan provinces westward of the TTZ. Linking these variations with the large-scale tectonics of this Precambrian region is also not as straightforward as it is in the Phanerozoic part of Europe, or as is possible for the northern Fennoscandian lithosphere, where Plomerová et al. (2011) relate, e.g., a significant change in mantle fabrics to the Baltic-Bothnia Megashear Zone (BBZ). Nevertheless, the splitting parameters at PASSEQ stations in the EEC, and their sensitivity to the back azimuth of the arriving waves, indicate a domain-like structure in this part of the EEC as well. Unfortunately, an insufficient number of shear waveforms, needed for a detailed analysis and modelling of the upper mantle fabrics, was recorded in this part of the PASSEQ array. In general, both the directional and the lateral variations in the splitting parameters confirm our previous inferences (e.g. Vecsey et al., 2007; Babuška et al., 2008; Plomerová et al., 2012a) that fabrics of the continental mantle lithosphere have to be modelled in 3-D with generally oriented symmetry axes.
In light of the domain-like structure of the continental lithosphere identified in different tectonic provinces (e.g. Babuška and Plomerová, 2006), it is surprising that we do not observe a distinct change of the apparent splitting parameters across the TESZ/TTZ, one of the most prominent tectonic features of the European continent. Instead, we evaluate mainly smooth changes in SKS polarizations, or even a large number of null splits, northward of the BM and further across the TESZ towards the EEC. Such observations indicate less coherent fabrics and a transitional change of mantle structure beneath the surface trace of the TESZ/TTZ.
Changes of splitting parameters and tectonics in the northwestern (Thor, STZ) and central (TTZ) parts of the TESZ
The two sutures in the western part of the TESZ - the Thor Suture and the Sorgenfrei-Tornquist Zone (STZ, see Fig. 1) - sharply delimit domains of the mantle lithosphere of the Baltic Shield, the Danish block (Laurentia), and the North German Platform (Avalonia, see Pharaoh, 1999). The domains, representing fragments of Fennoscandia, Laurentia and Avalonia, differ distinctly in fabric and lithosphere thickness (Plomerová et al., 2002; Cotte et al., 2002; Shomali et al., 2002; Babuška and Plomerová, 2004). On the other hand, a similarly sharp change in lithosphere structure linked with the central part of the TESZ covered by the PASSEQ array, where the TTZ marks the crustal edge of the EEC at the surface, is not evident. An anisotropic signal can be detected if the SKS wave propagates through an anisotropic block whose thickness is comparable with the wavelength (Plomerová et al., 2011). Moreover, from lateral changes of the anisotropic parameters of body waves we can assess the inclination and thickness of boundary zones between the anisotropic domains of the mantle lithosphere. For example, steep boundaries were retrieved in the MC (Babuška et al., 2002), in the BM (Plomerová et al., 2007), and in northern Fennoscandia (Plomerová et al., 2011), whereas an inclined boundary was modelled at the Proterozoic/Archean contact zone in south-central Finland (Vecsey et al., 2007).
In analogy with the previous results, we can deduce that the narrow, near-vertical TTZ in the crust, representing the northeastern boundary of the TESZ (Dadlez et al., 2005), does not have a steep and narrow continuation in the mantle lithosphere. Instead, we suggest a complex transition zone between Precambrian and Phanerozoic Europe, where various lithospheric fragments, possibly originally belonging to the EEC, underthrust the Phanerozoic domains. Berthelsen (1992) suggested that the TESZ crust was formed by an assemblage of suspect terranes adjoining the EEC edge from the southwest. Our measurements of anisotropy indicate a relatively broad transitional zone between the two lithospheric segments of different ages. Depth estimates of the lithosphere-asthenosphere boundary (LAB) place this important "discontinuity" at ∼140 km in the west and down to ∼200 km in the east of the TESZ (Plomerová and Babuška, 2010; Knapmeyer-Endrun et al., 2013). The mantle lithosphere thus seems to be thick enough to accommodate the anisotropic signal detected by the shear-wave splitting analysis. However, considering the SKS wavelength of ∼40 km, which corresponds to the ∼8-10 s dominant periods of teleseismic shear waveforms, the crust thickness of ∼40 km, and a wedge-like structure of the contact with a transition between the blocks, we do not observe a consistent pattern of anisotropic signals in the split shear waves, nor a sharp change of the splitting parameters that would reflect a sharp change of the upper mantle structure.
A note on the geodynamic development of the region around the TESZ
Dadlez et al. (2005) suggested a scenario for the tectonic development of the TESZ involving the detachment of elongated, narrow slivers of the Baltica crust, their northwestward wandering along the anticlockwise-rotated Baltica (Ordovician-Early Silurian; Torsvik et al., 1996) and their later re-accretion to Baltica upon meeting docked Avalonia. Nowadays, these pieces are supposed to form the basement of the TESZ crust in northwestern and central Poland. Grad et al. (2008) interpret the high-velocity lower crust, extending southwestward of the TESZ as far as beneath the Fore-Sudetic block, as the edge of the Baltica crust. Malinowski et al. (2013) revealed a complex pattern of Paleozoic and Alpine accretion at the EEC margin. Based on a deep seismic reflection profile, however, they interpret a westward extent of the EEC lower crust only to the TTZ. Further to the southwest they do not associate the reflective horizon with the top of the EEC crystalline basement, but with a different reflective zone in the uppermost part of the lower BM crust towards the Carpathian Fold-and-Thrust Belt. Our results on the deep lithosphere structure suggest that fragments of the Precambrian mantle lithosphere most probably underthrust the Proterozoic platform west of the TTZ and might even penetrate the mantle southward as far as the EFZ in the eastern BM (the northern part of the Brunovistulian). The complex structure of the upper mantle, as well as the underthrusting of microplate fragments in the TESZ, might contribute to the largest discrepancy between magnetotelluric and seismological LAB depth estimates ever found in the European continent (Jones et al., 2010). Prevailingly smooth changes of the anisotropic signal (including the nulls) across the TESZ contrast with significant changes in the splitting parameters along the TTZ. The most notable change occurs around the intersection of the TTZ with ∼18°E longitude, close to the edge of the LU and MM units (Pharaoh, 1999; see also Fig. 1), which, along with the Brunovistulian domain, are associated with Baltica (Dadlez et al., 2005). NW of this "triple junction", a narrow band of the Avalonian fragment is squeezed in between the TTZ and the VF. Narkiewicz et al. (2011) studied the crustal seismic velocity structure in detail and demonstrate a "preserved memory" of a pre-Devonian terrane accretion at the East European Platform margin. The authors took into consideration geological and potential-field evidence that allowed them to interpret the Upper Silesia, Malopolska and Lysogory blocks as separate crustal units, though without precisely marking the sutures between the particular exotic terranes identified by sharp lateral gradients in the velocity models. This may also lead to discrepancies in the delimitation of units in the tectonic schemes of different authors (e.g. Pharaoh et al., 1999; Dadlez et al., 2005) and to the distinction between some of the units being left as an open question (Narkiewicz et al., 2011). Babuška et al.
(1998) deduced from depth variations of surface-wave radial and azimuthal anisotropy that the lateral extent of the mantle lithosphere of the Precambrian units is larger than the extent of the mapped crustal terranes. Offsets between the mantle and crustal boundaries of tectonic units, attaining several tens of kilometres as a result of lower-crust/mantle decoupling, are often observed (e.g. Babuška et al., 2008). Therefore, based on the characteristics of the anisotropy evaluated from shear-wave splitting, we suggest that the EEC mantle lithosphere may penetrate into the Phanerozoic part of the European plate southwest of the TTZ, beneath the TESZ and probably even farther beneath the Variscan provinces, regardless of which interpretation of the crustal terranes, concerning particularly the extent of the Baltica lower crust, is adopted.
Conclusions
We have analysed the splitting of shear waves (SKS phases) recorded during the PASSEQ passive experiment, which focused on the upper mantle structure across the Trans-European Suture Zone (TESZ). The 1009 pairs of delay times of the slow split shear waves and orientations of the polarized fast shear waves exhibit lateral variations within the array, even when evaluated from the same event. Individual measurements at a station depend on back azimuth as well. Particular attention was paid to tests of the northward orientation of the seismometers to avoid misinterpretations of the mantle structure due to instrument misalignment. We identified seismometer misorientations exceeding 10° not only at several portable stations, but also at some observatories.
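One common way to quantify such a sensor misorientation from horizontal particle motion is sketched below: for an unsplit SKS arrival the dominant polarization direction should coincide with the event back azimuth, so a systematic offset suggests a rotated sensor. This is a minimal illustration with a synthetic signal, not the workflow used in the study; the 12° misorientation and all waveform parameters are assumptions.

```python
# Minimal sketch: estimate apparent sensor misorientation from the principal
# axis of the horizontal SKS particle motion versus the event back azimuth.
import numpy as np

def polarization_azimuth(north, east):
    """Azimuth (deg from north, modulo 180) of the principal axis of the
    horizontal particle motion, from the 2x2 covariance eigenvector."""
    w, v = np.linalg.eigh(np.cov(np.vstack([north, east])))
    n, e = v[:, np.argmax(w)]
    return np.degrees(np.arctan2(e, n)) % 180.0

# Synthetic linearly polarized pulse from back azimuth 250 deg, recorded on a
# sensor rotated by an (unknown) 12 deg.
t = np.linspace(0, 30, 600)
pulse = np.exp(-((t - 15) / 3.0) ** 2)
baz, misorientation = 250.0, 12.0
theta = np.radians(baz - misorientation)
north, east = pulse * np.cos(theta), pulse * np.sin(theta)

estimate = (baz % 180.0) - polarization_azimuth(north, east)
print(f"apparent sensor misorientation ~ {estimate:.1f} deg")
```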
While a distinct regionalization of the mantle lithosphere according to anisotropic structure exists in the Phanerozoic part of Europe, a correlation with the large-scale tectonics around the TESZ and in the East European Craton (EEC) is less evident. No general and abrupt change in the splitting parameters can be related to the TTZ, which marks the edge of the Precambrian province at the surface. A significant change of the mantle lithosphere structure appears at the northern edge of the Variscan Bohemian Massif (BM). Distinct regional variations of anisotropic structure can also be followed along the TESZ/TTZ, whereas changes across the zone are gradual. Based on the geographical variations of shear-wave splitting, we suggest a southwestward continuation of the Precambrian mantle lithosphere beneath the TESZ, and probably even further southwest.
Figure 1. Simplified tectonic sketch of the Trans-European Suture Zone (TESZ) and adjacent areas according to Pharaoh (1999). STZ stands for the Sorgenfrei-Tornquist Zone, TBU for the Teplá-Barrandian Unit included in the Moldanubian Zone of the Bohemian Massif (BM).
Figure 2. Seismic stations of the passive experiment PASSEQ (2006-2008) designed to study the upper mantle structure of the TESZ. Labels are assigned to some of the stations for easier orientation.
Figure 3. Example of the evaluation of SKSac phase splitting at station PA65 in the central part of the PASSEQ array (see Fig. 2) for an earthquake in the Chile-Argentina border region: 2006-08-25_00:44, 24.34°S 67.01°W, 185 km deep, 5.8 Mw. The epicentral distance to the station is 105.2°, the back azimuth 250.0° and the incidence angle 7.5°. For more details of the method see Vecsey et al. (2008).
Figure 4. Fast S polarizations evaluated for synthetics propagating through two blocks with divergently inclined fast symmetry axes. The "arrow style" of presentation shows the domain boundary while the standard (azimuthal) approach does not.
Figure 5. Horizontal shear-wave particle motion (PM) across the PASSEQ array for an event from the NW (left), located in the Guerrero region, documenting the incorrect northward orientation of seismometers at stations GKP and PC23. PMs rotated to the back azimuths and stacked for all events evaluated at stations PC23 and GKP with misoriented seismometers, and at the correctly aligned seismometer at JAVC (right). Only sufficiently large errors (≳10°) in seismometer misorientation can be revealed by this method. Smaller deviations of the PM can be caused by weak anisotropy in the upper mantle.
Figure 6. Shear-wave splitting presented in the standard way, i.e. the fast shear-wave polarization azimuths (Supplement Table S1) as bars with length proportional to the split delay time: (a) averages calculated from all measurements regardless of wave back azimuth and (b) averages calculated separately for waves arriving from the west and from the northeast. The inset shows the epicentre distribution of the 15 events used in this study relative to the PASSEQ array (star).
Figure 7. Fast shear-wave polarizations (ψ, δt) evaluated in the LQT coordinate system, presented at ray-piercing points at a depth of 80 km. The arrows mark the azimuths ϕ of the polarized fast split shear waves and point in down-dip directions. See also Fig. 3 and related text.
Figure 8. Azimuths ϕ of the fast shear-wave polarizations and the split delay times δt evaluated for three events from NE back azimuths. The anisotropic signal dominates in the Bohemian Massif; null splits or small provinces with coherent polarizations exist west and north of the Bohemian Massif. Complementary measurements at stations equipped with 2-3 s seismometers are shown in light grey.
Figure 9. PMs for three events from the NE (the same as in Fig. 8). To emphasize variations of the PM across and along the TESZ, three profiles of the BB stations are marked by coloured bands, whose widths relate to the width of the PM ellipses: orange - three areas of broad PMs (in the BM, TESZ/TTZ and EEC) along profile I; red - broad PMs in the BM, followed by narrow PMs, getting gradually broader in the EEC along profile II; yellow - mostly linear PMs along profile III.
Figure 10. Azimuths ϕ of the fast shear-wave polarizations and delay times δt evaluated for four events from NW back azimuths. Green arrows represent results stacked for two events. Nulls or near-null splitting prevail in the BM and in the western part of the array, whereas stations east of the Moravian Line show a strong anisotropic signal for this back-azimuth interval.
Figure 11. PMs for the same events from the NW as in Fig. 10.
Figure 12. Shear-wave polarizations evaluated at a part of the PASSEQ array from recordings of three events. Splitting parameters evaluated from narrow PMs of waves arriving from very close directions differ at station CLL, while we get identical splitting parameters from the broad PM at, e.g., station PC21. Complex structures can significantly affect the splitting parameters of waves arriving even from very close directions.
| 9,527 | 2014-08-06T00:00:00.000 | ["Geology"] |
A Precise Geoid Model for Africa: AFRgeo2019
In the framework of the IAG African Geoid Project, an attempt towards a precise geoid model for Africa is presented in this investigation. The available gravity data set suffers from significantly large data gaps. These data gaps are filled using the EIGEN-6C4 model on a 15′ × 15′ grid prior to the gravity reduction scheme. The window remove-restore technique (Abd-Elmotaal and Kühtreiber, Phys Chem Earth Pt A 24(1):53-59, 1999; J Geod 77(1-2):77-85, 2003) has been used to generate reduced anomalies having a minimum variance to minimize the interpolation errors, especially at the large data gaps. The EIGEN-6C4 global model, complete to degree and order 2190, has served as the reference model. The reduced anomalies are gridded on a 5′ × 5′ grid employing an unequal-weight least-squares prediction technique. The reduced gravity anomalies are then used to compute their contribution to the geoid undulation employing Stokes' integral with the Meissl (Preparation for the numerical evaluation of second order Molodensky-type formulas. Ohio State University, Department of Geodetic Science and Surveying, Rep 163, 1971) modified kernel for a better combination of the different wavelengths of the earth's gravity field. Finally, the restore step within the window remove-restore technique took place, generating the full gravimetric geoid. In the last step, the computed geoid is fitted to the DIR_R5 GOCE satellite-only model by applying an offset and two tilt parameters. The DIR_R5 model is used because it turned out to be the best available global geopotential model approximating the African gravity field. A comparison between the geoid computed within the current investigation and the existing former geoid model AGP2003 (Merry et al., A window on the future of geodesy. International Association of Geodesy Symposia, vol 128, pp 374-379, 2005) for Africa has been carried out.
Introduction
The geoid, being the natural mathematical figure of the earth, serves as the height reference surface for geodetic, geophysical and many engineering applications. It is directly connected with the theory of equipotential surfaces (Heiskanen and Moritz 1967; Hofmann-Wellenhof and Moritz 2006), and its determination needs sufficient coverage of observation data related to the earth's gravity field, such as gravity anomalies. In this investigation, a geoid model for Africa will be determined. The challenge we face here consists in the available data set, which suffers from significantly large gaps, especially on land. The available data for this investigation is a set of gravity anomalies, both on land and at sea. The geoid is computed using Stokes' integral, which requires interpolating the available data onto a regular grid. In order to reduce the interpolation errors, especially in areas of large data gaps, the window remove-restore technique (Abd-Elmotaal and Kühtreiber 1999, 2003) is used. The window technique doesn't suffer from the double consideration of the topographic-isostatic masses in the neighbourhood of the computational point, and accordingly produces un-biased reduced gravity anomalies with minimum variance.
In order to control the gravity interpolation in the large data gaps, these gaps are filled-in, prior to the interpolation process, with an underlying grid employing the EIGEN-6C4 geopotential model (Förste et al. 2014a,b). Hence the interpolation process took place using the unequal weight least-squares prediction technique (Moritz 1980). Finally, the computed geoid within the current investigation is fitted to the DIR_R5 GOCE satellite-only model by applying an offset and two tilt parameters. This adjustment reduces remaining tilts and a vertical offset in the model. Previous studies (Abd-Elmotaal 2015) have shown that the DIR_R5 GOCE model is best suited for this purpose on the African continent.
The first attempt to compute a geoid model for Africa was made by Merry (2003) and Merry et al. (2005). A 5′ × 5′ mean gravity anomaly grid developed at Leeds University was used to compute that geoid model. We regret that this data set has never become available again since then. For the geoid computed by Merry et al. (2005), the remove-restore method, based on the EGM96 geopotential model (Lemoine et al. 1998), was employed. Another geoid model for Africa, AFRgeo_v1.0, has been computed by Abd-Elmotaal et al. (2019). This geoid model employed the window remove-restore technique with the EGM2008 geopotential model (Pavlis et al. 2012), up to degree and order 2160, and a tailored reference model (computed through an iterative process), up to degree and order 2160, to fill in the data gaps.
Due to problems with a data set in Morocco, used in the former solution AFRgeo_v1.0 (Abd-Elmotaal et al. 2019), the computed geoid has been compared only to the AGP2003 model (Merry et al. 2005) in the present paper.
Gravity Data
The available gravity data set for the current investigation comprises data on land and at sea. The sea data consist of shipborne point data and altimetry-derived gravity anomalies along tracks. The latter data set was derived from the average of 44 repeated cycles of the satellite altimetry mission GEOSAT by the National Geophysical Data Center NGDC (www.ngdc.noaa.gov) (Abd-Elmotaal and Makhloof 2013, 2014). The goal of the African Geoid Project is the calculation of the geoid on the African continent. Data within the data window which are located on the oceans (shipborne and altimetry data) are used to stabilize the solution at the continental margins to avoid the Gibbs phenomenon. The land point gravity data, being the most important data set for the geoid on the continent, have passed a laborious gross-error detection process developed by Abd-Elmotaal and Kühtreiber (2014) using the least-squares prediction technique (Moritz 1980). This gross-error detection process estimates the gravity anomaly at the computational point using the neighbouring points and flags a possible gross error by comparing the estimate to the data value. The process deletes the point from the data set if it proves to be a real gross error after examining its effect on the neighbouring points. Furthermore, a grid-filtering scheme (Abd-Elmotaal and Kühtreiber 2014) on a 1′ × 1′ grid is applied to the land data to improve the behaviour of the empirical covariance function, especially near the origin (Kraiger 1988). The statistics of the land free-air gravity anomalies, after the gross-error detection and grid-filtering, are illustrated in Table 2. Figure 1a shows the distribution of the land gravity data set.
The shipborne and altimetry-derived free-air anomalies have passed a gross-error detection scheme developed by Abd-Elmotaal and Makhloof (2013), also based on the least-squares prediction technique. It estimates the gravity anomaly at the computational point utilizing the neighbouring points, and flags a possible blunder by comparing the estimate to the data value. The gross-error technique works in an iterative scheme until it reaches 1.5 mgal or better for the discrepancy between the estimated and data values. A combination of the shipborne and altimetry data took place (Abd-Elmotaal and Makhloof 2014). Then a grid-filtering process on a 3′ × 3′ grid was applied to the shipborne and altimetry-derived gravity anomalies to decrease their dominating effect on the gravity data set. The statistics of the shipborne and altimetry-derived free-air anomalies, after the gross-error detection and grid-filtering, are listed in Table 2. The distribution of the shipborne and altimetry data is given in Fig. 1b and c, respectively. More details about the used data sets can be found in Abd-Elmotaal et al. (2018).
Digital Height Models
If the computation of the topographic reduction is carried out with software such as the TC-program, a fine DTM for the near zone and a coarse one for the far zone are required. The TC-software originates from Forsberg (1984). In this investigation a program version was used which was modified by Abd-Elmotaal and Kühtreiber (2003). A set of DTMs for Africa covering the window (−42° ≤ φ ≤ 44°; −22° ≤ λ ≤ 62°) is available for the current investigation. The
A Short History of Used Data
The data used to calculate the current geoid solution for Africa have been described in Sects. 2.1 and 2.2. Data acquisition is a continuous tedious task, especially for point gravity values on land. As can be seen in Fig. 1, significant data gaps still need to be closed despite great efforts. In fact, since the first basic calculation of an African geoid by Merry (2003) and Merry et al. (2005), the point gravity data situation is continuously improving, although the original data of Merry et al. (2005) are no longer available. This can be concluded from Table 1. It should be mentioned that no ocean data had been used in the former AGP2003 solution.
GravityReduction
As stated earlier, in order to get un-biased reduced anomalies with minimum variance, the window remove-restore technique is used. The remove step of the window remove-restore technique, when using the EIGEN-6C4 geopotential model (Förste et al. 2014a,b), complete to degree and order 2190, as the reference model, can be expressed as in Abd-Elmotaal and Kühtreiber (1999, 2003) (cf. Fig. 3), where Δg_win-red refers to the window-reduced gravity anomalies, Δg_F refers to the measured free-air gravity anomalies, Δg_EIGEN-6C4 stands for the contribution of the global reference geopotential model, Δg_TI,win is the contribution of the topographic-isostatic masses for the fixed data window, Δg_wincof stands for the contribution of the harmonic coefficients of the topographic-isostatic masses of the same data window, and n_max is the maximum degree (n_max = 2190 is used). For the underlying grid, which is intended to support the boundary values, particularly in areas of data gaps, the free-air gravity anomalies are computed by Eq. (2) on a 15′ × 15′ grid. This is three times the resolution of the output grid. To avoid identical grid points between the underlying grid and the output grid, the underlying grid is shifted by 2.5′ relative to the output grid. Therefore both grids are called unregistered.
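The remove-step equation (Eq. 1) referred to above was lost in extraction. Judging from the terms defined in the text and from the cited window remove-restore papers, it presumably has the following form; the grouping and signs are our reconstruction rather than a quotation:

```latex
\Delta g_{\text{win-red}}
   = \Delta g_{F}
   - \Delta g^{\text{win}}_{\mathrm{TI}}
   - \Delta g_{\text{EIGEN-6C4}}\Big|_{n=2}^{\,n_{\max}}
   + \Delta g_{\text{wincof}}\Big|_{n=2}^{\,n_{\max}},
   \qquad n_{\max}=2190 .
```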
The contribution of the topographic-isostatic masses Δg_TI,win for the fixed data window (−42° ≤ φ ≤ 44°; −22° ≤ λ ≤ 62°) is computed using the TC-program (Forsberg 1984) with a density contrast Δρ = 0.40 g/cm³, where T₀ is the normal crustal thickness, ρ₀ is the density of the topography and Δρ is the density contrast between the crust and the mantle. The contribution of the involved harmonic models is computed by the technique developed by Abd-Elmotaal (1998). Alternative techniques can be found, for example, in Rapp (1982) or Tscherning et al. (1994). The potential harmonic coefficients of the topographic-isostatic masses for the data window are computed using rigorous expressions developed in earlier work. Table 2 illustrates the statistics of the free-air and reduced anomalies for each data category. The great reduction effect of the window remove-restore technique, in terms of both the mean and the standard deviation, is obvious for all data categories. Very remarkable is the dramatic drop of the standard deviation of the most important data source, the land gravity data, by about 84%. This indicates that the used reduction technique works quite well. Table 2 also shows that the underlying grid has a statistical behaviour compatible with the other data categories, which is needed for the interpolation process.
Interpolation Technique
An unequal-weight least-squares interpolation technique (Moritz 1980) on a 5′ × 5′ grid covering the African window (40°S ≤ φ ≤ 42°N, 20°W ≤ λ ≤ 60°E) was applied to generate the gridded window-reduced gravity anomalies Δg^G_win-red from the pointwise window-reduced gravity anomalies Δg_win-red. Standard deviations for the individual data categories have been fixed after some preparatory investigations (Eq. 4); for the underlying grid, a standard deviation of 20 mgal was adopted. The generalized covariance model of Hirvonen has been used, for which the estimation of the parameter p (related to the curvature of the covariance function near the origin) has been made through fitting of the empirically determined covariance function, employing a least-squares regression algorithm developed by Abd-Elmotaal and Kühtreiber (2016). A value of p = 0.364 has been estimated. Figure 4 shows the excellent fit of the empirically determined covariance function achieved by the above-described process. Figure 5 illustrates the 5′ × 5′ interpolated window-reduced anomalies Δg^G_win-red generated using the unequal-weight least-squares interpolation technique employing the relative standard deviations described above and specified in Eq. (4). In most areas, the anomalies are less than 10 mgal, indicating that the modelling in the remove step is appropriate. This is particularly evident in the regions on the African mainland, especially in the areas with large data gaps. Thus it can be concluded that the reduction and interpolation methods, developed especially for this data situation, have not led to any irregularities in the reduced anomalies. The efficiency of the used reduction and interpolation method has been validated by Abd-Elmotaal and Kühtreiber (2019), employing independent point gravity data not used in the interpolation process; this validation proved an external precision of about 7 mgal over various test areas on the African continent, indicating the good feasibility of the applied approach.
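The interpolation step described above can be illustrated with a minimal least-squares prediction (collocation) sketch. The covariance amplitude C0, the correlation length d, the noise standard deviations other than the 20 mgal quoted for the underlying grid, and the synthetic data are all illustrative assumptions; the Hirvonen form used here is one common parametrization of the generalized model mentioned in the text.

```python
# Minimal sketch of unequal-weight least-squares prediction of gridded
# anomalies from scattered reduced anomalies with a Hirvonen-type covariance.
import numpy as np

def hirvonen(dist_km, c0=100.0, d=50.0, p=0.364):
    """Generalized Hirvonen covariance C(s) = C0 / (1 + (s/d)^2)^p."""
    return c0 / (1.0 + (dist_km / d) ** 2) ** p

def predict(pts, vals, noise_std, grid_pts):
    """Least-squares prediction  s_hat = C_pg (C_gg + D)^-1 vals."""
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = hirvonen(dists) + np.diag(noise_std ** 2)          # signal + noise
    cross = hirvonen(np.linalg.norm(grid_pts[:, None, :] - pts[None, :, :], axis=-1))
    return cross @ np.linalg.solve(C, vals)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 200, (300, 2))                        # scattered points, km
vals = rng.normal(0, 8, 300)                               # reduced anomalies, mgal
noise = np.full(300, 2.0)                                  # e.g. high-weight land data
noise[200:] = 20.0                                         # e.g. low-weight fill-in grid
xg, yg = np.meshgrid(np.linspace(0, 200, 21), np.linspace(0, 200, 21))
grid = np.column_stack([xg.ravel(), yg.ravel()])
print(predict(pts, vals, noise, grid).shape)               # (441,) predicted values
```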
Geoid Determination
For a better combination of the different wavelengths of the earth's gravity field (e.g., Featherstone et al. 1998; Abd-Elmotaal and Kühtreiber 2008), the contribution of the reduced gridded gravity anomalies Δg^G_win-red to the geoid, N_win-red, is determined on a 5′ × 5′ grid covering the African window using Stokes' integral with the Meissl (1971) modified kernel K_M(ψ). A cap size ψ₀ = 3° has been used; S(ψ) denotes the original Stokes function. The choice of the Meissl modified kernel has been made because it proved to give good results (cf. Featherstone et al. 1998; Abd-Elmotaal and Kühtreiber 2008). The full geoid restore expression for the window technique (Abd-Elmotaal and Kühtreiber 1999, 2003) involves N_TI,win, the contribution of the topographic-isostatic masses (the indirect effect) for the same fixed data window as used in the remove step; ζ_EIGEN-6C4, the contribution of the EIGEN-6C4 geopotential model; ζ_wincof, the contribution of the dimensionless harmonic coefficients of the topographic-isostatic masses of the data window; and (N−ζ)_win, the conversion from quasi-geoid to geoid for the terms related to the quasi-geoid, i.e., ζ_EIGEN-6C4 and ζ_wincof. The term (N−ζ)_win can be determined by applying the quasi-geoid to geoid conversion given by Heiskanen and Moritz (1967, p. 327) (see also Eq. (11)), where Δg_EIGEN-6C4 and Δg_wincof are the free-air gravity anomaly contributions of the EIGEN-6C4 geopotential model and of the harmonic coefficients of the topographic-isostatic masses of the data window, respectively, and γ̄ is a mean value of the normal gravity.
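The integral and kernel expressions (Eqs. 6-8) and the quasi-geoid to geoid conversion (Eq. 11) did not survive extraction. A hedged reconstruction, based on the standard Meissl modification and on the term definitions given above, could read as follows; the exact grouping and signs of the restore terms are our interpretation of the cited window remove-restore papers rather than a quotation:

```latex
N_{\Delta g} \;=\; \frac{R}{4\pi\bar\gamma}\iint_{\psi\le\psi_0}
      \Delta g^{G}_{\text{win-red}}\, K_M(\psi)\, d\sigma,
\qquad
K_M(\psi)=\begin{cases} S(\psi)-S(\psi_0), & 0<\psi\le\psi_0,\\[2pt]
                         0, & \psi>\psi_0,\end{cases}
\qquad \psi_0=3^\circ,
\\[6pt]
N \;=\; N_{\Delta g} \;+\; N^{\text{win}}_{\mathrm{TI}}
    \;+\; \zeta_{\text{EIGEN-6C4}} \;-\; \zeta_{\text{wincof}}
    \;+\; (N-\zeta)_{\text{win}},
\qquad
(N-\zeta) \;\approx\; \frac{\Delta g_F - 2\pi G\delta H}{\bar\gamma}\, H .
```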
In order to fit the gravimetric geoid model for Africa to the individual height systems of the African countries, one needs some GNSS stations with known orthometric heights covering the continental area. Unfortunately, despite our hard efforts, such data are still not available to the authors. As an alternative, the computed geoid is fitted to the GOCE DIR_R5 satellite-only model, which is complete to degree and order 300. It represents the best available global geopotential model approximating the gravity field in Africa, as has been investigated by Abd-Elmotaal (2015). In the present application, the DIR_R5 model was evaluated up to d/o 280, since the signal-to-noise ratio for higher degrees is greater than one, and thus the coefficients of higher degrees are not considered. The general discrepancies between the GOCE DIR_R5 geoid and our calculated geoid solution have been represented by a trend model consisting of a vertical offset and two tilt parameters. These parameters have been estimated through a least-squares regression from the residuals between the two geoid solutions. This parametric model has been used to remove the trend which may be present in the geoid computed within the current investigation. This trend may be caused by errors in the long-wavelength components of the used reference model EIGEN-6C4 or of the point gravity data. The DIR_R5 geoid undulations N_Dir_R5 are computed from ζ_Dir_R5, the contribution of the DIR_R5 geopotential model, plus the quasi-geoid to geoid conversion term (N−ζ) of Heiskanen and Moritz (1967, p. 327), where Δg_Dir_R5 refers to the free-air gravity anomalies computed from the DIR_R5 geopotential model, G is Newton's gravitational constant, and δ is the density of the topography given by Eq. (3).
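The de-trending step described above amounts to a simple three-parameter plane fit. The sketch below illustrates it with synthetic numbers; the grid spacing, the stand-in geoid values and the imposed trend are assumptions used only to make the example self-contained.

```python
# Minimal sketch of fitting an offset and two tilt parameters to the
# differences between the gravimetric geoid and the DIR_R5 geoid, then
# removing the fitted plane from the gravimetric solution.
import numpy as np

def detrend(lon, lat, n_grav, n_dirr5):
    """Fit d = a0 + a1*lon + a2*lat to (n_grav - n_dirr5) and remove it."""
    d = (n_grav - n_dirr5).ravel()
    A = np.column_stack([np.ones(d.size), lon.ravel(), lat.ravel()])
    coeff, *_ = np.linalg.lstsq(A, d, rcond=None)          # offset + two tilts
    trend = (A @ coeff).reshape(n_grav.shape)
    return n_grav - trend, coeff

lon, lat = np.meshgrid(np.arange(-20, 60.5, 0.5), np.arange(-40, 42.5, 0.5))
rng = np.random.default_rng(1)
n_dirr5 = rng.normal(12, 20, lon.shape)                    # stand-in geoid values, m
n_grav = n_dirr5 + 0.4 + 0.01 * lon - 0.02 * lat + rng.normal(0, 0.3, lon.shape)
n_fit, coeff = detrend(lon, lat, n_grav, n_dirr5)
print("offset, tilt_lon, tilt_lat:", np.round(coeff, 3))   # ~ [0.4, 0.01, -0.02]
```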
Figure 6 shows the de-trended AFRgeo2019 African geoid as described above. The values of the AFRgeo2019 African geoid range between −55.34 and 57.34 m, with an average of 11.73 m.
Geoid Comparison
As stated earlier, the first attempt to determine a geoid model for Africa "AGP2003" has been carried out by Merry (2003) and Merry et al. (2005). Since then, the data base has been further enhanced. In particular, the calculation method, statistical combination of the various types of gravity anomalies, has been revised and further developed. This has led to a significant improvement of the African geoid model. Figure 7 shows the difference between the de-trended AFRgeo2019 and the AGP2003 geoid models. The light yellow pattern in Fig. 7 indicates differences below 1 m in magnitude. Figure 7 shows that the differences between the two geoids amount to several meters in the continental area, especially in East Africa. The large differences over the Atlantic Ocean arise from the fact that the AGP2003 didn't include ocean data in the solution. Figure 7 shows some edge effects, which are again a direct consequence of using no data outside the African continent in the AGP2003 solution.
As the AFRGDB_v1.0, which was the basis for computing the AFRgeo_v1.0, was greatly influenced by an erroneous data set in Morocco (cf. Abd-Elmotaal et al. 2019), it has been decided to skip the comparison between AFRgeo_v1.0 and the current geoid model.
Conclusions
In this paper, we successfully computed an updated version of the African geoid model. The computed geoid model is based on the window remove-restore technique (Abd-Elmotaal and Kühtreiber 2003), which gives very small and smooth reduced gravity anomalies. This helped to minimize the interpolation errors, especially in the areas of large data gaps. Filling these data gaps with synthesized gravity anomalies from the EIGEN-6C4 geopotential model, complete to degree and order 2190, has stabilized the interpolation process in the data gaps. The reduced gravity anomalies employed for the AFRgeo2019 geoid model show a very good statistical behaviour (especially on land) because they are centered, smooth and have a relatively small range (cf. Fig. 5 and Table 2). The smoothness of the residuals indicates that the interpolation technique proposed by Abd-Elmotaal and Kühtreiber (2019) did not induce aliasing effects, especially in the areas with point data gaps. Hence, they give smaller interpolation errors, especially in the large gravity data gaps. The reduced gravity data were interpolated using an unequal-weight least-squares interpolation technique, giving the land data the highest precision, the sea data a moderate precision and the underlying grid the lowest precision. In order to optimally combine the spectral components in the remove-compute-restore technique, the Stokes function in the Stokes integral is replaced by a modified kernel function. In the geoid solution presented, the modification according to Meissl (1971) was used. Alternative modifications have been discussed by Wong and Gore (1969), Jekeli (1980), Wenzel (1982), Heck and Grüninger (1982), Featherstone et al. (1998) and Sjöberg (2003).
Finally, the computed geoid model for Africa has been de-trended by the use of the DIR_R5 GOCE model. In comparison with the previous model AGP2003, the progress made in determining the African height reference surface becomes visible.
Unfortunately, despite strong efforts, extended precise GNSS positioning data over the African continent have not been made available to the authors. Thus, a rigorous comparison of the presented geoid model with an independent data set can only be made with further international efforts.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons. org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
| 4,797.8 | 2020-01-01T00:00:00.000 | ["Geology", "Physics"] |
An Acetyltransferase Conferring Tolerance to Toxic Aromatic Amine Chemicals
Aromatic amines (AA) are a major class of environmental pollutants that have been shown to have genotoxic and cytotoxic potentials toward most living organisms. Fungi are able to tolerate a diverse range of chemical compounds including certain AA and have long been used as models to understand general biological processes. Deciphering the mechanisms underlying this tolerance may improve our understanding of the adaptation of organisms to stressful environments and pave the way for novel pharmaceutical and/or biotechnological applications. We have identified and characterized two arylamine N-acetyltransferase (NAT) enzymes (PaNAT1 and PaNAT2) from the model fungus Podospora anserina that acetylate a wide range of AA. Targeted gene disruption experiments revealed that PaNAT2 was required for the growth and survival of the fungus in the presence of toxic AA. Functional studies using the knock-out strains and chemically acetylated AA indicated that tolerance of P. anserina to toxic AA was due to the N-acetylation of these chemicals by PaNAT2. Moreover, we provide proof-of-concept remediation experiments where P. anserina, through its PaNAT2 enzyme, is able to detoxify the highly toxic pesticide residue 3,4-dichloroaniline in experimentally contaminated soil samples. Overall, our data show that a single xenobiotic-metabolizing enzyme can mediate tolerance to a major class of pollutants in a eukaryotic species. These findings expand the understanding of the role of xenobiotic-metabolizing enzyme and in particular of NATs in the adaptation of organisms to their chemical environment and provide a basis for new systems for the bioremediation of contaminated soils.
Aromatic amines (AA) 3 represent one of the most important classes of occupational or environmental pollutants. Many AA are toxic to most living organisms due to their genotoxic or cytotoxic properties (1). AA account for 12% of the 415 chemicals that are either known or strongly suspected to be carcinogenic in humans (2). AA are common by-products of chemical manufacturing (pesticides, dyestuffs, rubbers, or pharmaceuticals), coal and gasoline combustion, or pyrolysis reactions (3). Moreover, the presence of AA in groundwater or soil samples subject to industrial, agricultural, or urban pollution is of increasing concern, particularly for persistent toxic AA contaminants, such as pesticide-derived anilines (4).
The identification of mechanisms by which living organisms can tolerate harmful chemicals, such as AA, is of prime importance to understand their adaptation to stressful environments. In addition, deciphering the molecular mechanisms underlying this tolerance may lead to novel biotechnological and pharmaceutical applications.
Fungi are environmentally ubiquitous and are found with great diversity in both terrestrial and aquatic environments. Fungi are known to tolerate a large range of chemicals of natural or anthropogenic origin by developing mechanisms to act on xenobiotic and natural compounds (5,6). Fungi are therefore good models to identify and to understand tolerance mechanisms to xenobiotics (7,8). Moreover, characterization of the mechanisms by which fungi tolerate certain toxic xenobiotics can potentially lead to the identification of new targets for the treatment of fungal infections in vertebrates (7,8) or plants and to the development of new bioremediation tools for cleaning up contaminated environments (5,9).
Using the common ascomycete Podospora anserina as a model, we provide here the demonstration that a single enzyme can mediate tolerance to toxic AA chemicals in a eukaryotic species. This enzyme was identified and characterized as an arylamine N-acetyltransferase (NAT), a xenobiotic-metabolizing enzyme that acetylates efficiently several toxic AA. Targeted disruption of this NAT gene led to the complete loss of tolerance to AA, thus confirming that this enzyme enables the fungus to detoxify AA that would otherwise prove toxic. These findings will help to understand the enzymatic mechanisms contributing to adaptation of living organisms to their environment. In particular, our data demonstrate that the NAT-dependent detoxification mechanisms may provide a eukaryotic organism with tolerance to toxic AA. Moreover, we provide proof-of-principle experiments, using soils contaminated with the highly toxic pesticide residue 3,4-dichloroaniline, proving that the fungal NAT-dependent detoxification pathway may represent a novel model with reasonable cost and a low environmental impact for the bioremediation of AA-contaminated environments.
EXPERIMENTAL PROCEDURES
Strains, Culture Conditions, Basic Protocols, and Phenotypic Characterization-The S strain of P. anserina was used for all experiments. The culture conditions for this organism have been described elsewhere (10). The methods currently used for genetic analysis, the extraction of nucleic acids and proteins, and genetic transformation are available from the Podospora anserina Genome Project. P. anserina was grown in the well defined M2 synthetic medium providing the wild-type (WT) strain with optimal growth conditions. Phenotypic analyses were performed on WT and mutant (ΔPaNat1, ΔPaNat2 single mutants, and ΔPaNat1/2 double mutants) P. anserina strains. The vegetative part of the life cycle was investigated in laboratory conditions by assessing growth rate and mycelial morphology: that is, the presence or absence of aerial hyphae and accumulation of pigments, as described for other mutants (11). Senescence and other types of cell degeneration were evaluated by measuring life span and investigating "crippled growth", as described previously (12). Hyphal interference was evaluated by placing P. anserina mycelia in the presence of Penicillium chrysogenum and Coprinopsis cinerea (13). Completion of the sexual cycle was investigated by measuring male and female fertility, perithecium maturation and content, and the timing of ascospore germination (11).
Chemical Synthesis of N-Acetylated AA-The acetylated forms of 3,4-dichloroaniline (3,4-DCA), 2-aminofluorene (2-AF), and 4-butoxyaniline (4-BOA) were synthesized from 3,4-DCA, 2-AF, and 4-BOA (Sigma) using acetyl chloride in the presence of triethylamine (base). Column chromatography purification was carried out on silica gel 60 (70-230 mesh American Society for Testing and Materials, Merck). The structure of all compounds was confirmed by IR and NMR spectra. IR spectra were obtained in paraffin oil with an ATI Mattson Genesis Series Fourier transform infrared spectroscopy spectrometer; ¹H and ¹³C NMR spectra were recorded at 200 and 50 MHz, respectively, in CDCl₃ on a BRUKER AC 200 spectrometer using hexamethyldisiloxane as an internal standard. Yields of acetylated products were around 80%.
Production and Purification of Recombinant Proteins-Escherichia coli C41 (DE3) cells containing pET-28A-based plasmids encoding PaNat1 or PaNat2 were used to produce and purify His₆-tagged recombinant proteins. The purified PaNat1 and PaNat2 enzymes were reduced with 10 mM dithiothreitol before dialysis against 25 mM Tris-HCl, pH 7.5, 1 mM EDTA.
Yields of recombinant proteins were in general 15-20 mg/liter of bacterial culture.
SDS-PAGE and Western Blot Analysis-Protein samples were separated by SDS-PAGE and transferred to a nitrocellulose membrane for Western blot analysis. An antibody raised against the Salmonella typhimurium NAT enzyme was used to detect the purified recombinant PaNat1 and PaNat2 proteins (14).
Deletion of PaNat1 and PaNat2 and Complementation of the ΔPaNat2 Mutation by the PaNat2⁺ Gene-The null ΔPaNat1 and ΔPaNat2 alleles were constructed separately, using the approaches indicated in supplemental Fig. 1. The native PaNat1 and PaNat2 genes were replaced by recombinant defective alleles in which all or most of the coding sequences were replaced by the hygromycin B- and phleomycin-selectable markers, respectively. Tests carried out with the hygromycin B- and phleomycin-resistant strains obtained after transformation of the WT with the marker genes showed that neither of these genes interfered with resistance/sensitivity to AA. Independent hygromycin B- or phleomycin-resistant transformants were tested for homologous integration of the defective alleles at the PaNat1 and PaNat2 loci by appropriate PCR and Southern blot analyses (see supplemental Fig. 1). ΔPaNat1/2 double mutants were recovered from the progenies of crosses between ΔPaNat1 mutants and ΔPaNat2 mutants of opposite mating types. The ΔPaNat2 mutation was complemented by cotransforming the ΔPaNat2 mutant with a PCR-amplified DNA fragment encompassing the PaNat2 gene and the pBC-hygro vector (15). Six of the 16 transformants tested were resistant to AA, whereas none was resistant in the control transformation carried out with the vector only, in the absence of the PaNat2 gene. Genetic analysis of four randomly picked transformants showed that resistance to AA cosegregated with the hygromycin B resistance marker, confirming that the restoration of AA resistance was due to the introduced PaNat2 gene.
Extract Preparation and Enzyme Assays-Fungi were grown for 2 days at 27°C in M2 liquid medium supplemented with 2.5 mg/ml yeast extract. The fungal mass was then harvested under sterile conditions by filtration. Typically ~1.5 g of fungal dry mass was obtained from each 100-ml culture. Total extracts were prepared by grinding fungal pellets in 25 mM Tris-HCl, pH 7.5, supplemented with 0.1% Triton X-100, 1 mM dithiothreitol, and protease inhibitors. The resulting suspensions were subjected to ultracentrifugation at 100,000 × g for 1 h. The supernatants were removed and used for enzyme assays.
Remediation Studies-WT or ΔPaNat1/2 mutant P. anserina (1.5 g of fungal dry mass) was grown in 25 ml of M2 medium broth in the presence or absence of DCA (400 μM) for 2 days at 27°C. The medium was centrifuged, filtered (0.22-μm pores) and used for the preparation of agar plates (50% conditioned medium). Sterilized Lactuca sativa seeds were germinated on agar plates and photographed after 7 days (room temperature, illumination for 12 h/day). For the remediation of 3,4-DCA (25 mg/kg) contamination in soil (Terre Végétale NF-U44-551, Truffaut, Paris), soil samples (3 g) were inoculated with WT or ΔPaNat1/2 mutant P. anserina (0.3 or 0.9 g of fungal dry mass) and incubated for 2 days at 25°C. Aliquots (0.5 g) of soil were mixed with absolute ethanol at different time points for the quantification of 3,4-DCA and acetyl-3,4-DCA by HPLC. For the germination and growth of L. sativa seeds in soil, contaminated soil samples (20 g/pot, 80 mg/kg of 3,4-DCA) were inoculated three times (every 24 h) with 0.5 g of WT or ΔPaNat1/2 strains and incubated for 72 h at 25°C. Seeds (20) were then sown in the soil samples and allowed to germinate and grow at 25°C for 8 days (illumination for 12 h/day). Controls were carried out with acetyl-3,4-DCA (80 mg/kg) and H₂O.
Enzyme Assays and Detection of Acetylated Aromatic Amines by HPLC-NAT activity was measured with the 5,5′-dithiobis(2-nitrobenzoic acid) assay, as described previously (16). Recombinant enzymes and aromatic amine substrates (500 μM final concentration) in assay buffer (25 mM Tris-HCl, pH 7.5) were incubated for 5 min at 37°C in a 96-well plate. AcCoA (400 μM final concentration) was added, and the plate was incubated at 37°C (for up to 30 min). The reaction (100 μl total volume) was quenched with 25 μl of guanidine hydrochloride solution (6.4 M guanidine/HCl, 0.1 M Tris-HCl, pH 7.5) supplemented with 5 mM 5,5′-dithiobis(2-nitrobenzoic acid), and absorbance was measured at 405 nm. Kinetic analyses were performed by varying the aromatic amine substrate concentrations. Kinetic constants were determined by non-linear regression analysis with the Kaleidagraph program (Synergy Software, Reading, PA). The rate of acetylation of aromatic amines by fungal extracts was measured by HPLC, as described previously (17). N-Acetylated aromatic amines were detected in samples (M2 liquid medium) or ethanol eluates of soil by HPLC on a C18 column, using 60% sodium perchlorate (20 mM, pH 3) and 40% acetonitrile as the mobile phase. Chemically acetylated AAs (acetylated 3,4-DCA, acetylated 2-AF, and acetylated 4-BOA) were synthesized as reported above and used as HPLC standards to identify enzymatically acetylated AAs in the samples.
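The non-linear regression mentioned above is a standard Michaelis-Menten fit. The sketch below illustrates it with made-up rate data; the paper used the Kaleidagraph program, and scipy is used here purely for illustration. The enzyme molar mass and all measured values are assumptions, not data from this study.

```python
# Minimal sketch: fit v = Vmax*[S]/(Km + [S]) to acetylation rates measured at
# varying aromatic-amine concentrations, then derive a rough kcat/Km.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

conc_uM = np.array([10, 25, 50, 100, 250, 500, 1000], dtype=float)   # [S], uM
rate = np.array([55, 110, 175, 240, 300, 330, 350], dtype=float)     # nmol/min/mg

(vmax, km), _ = curve_fit(michaelis_menten, conc_uM, rate, p0=(350, 100))

enzyme_mw_kda = 35.0                        # assumed molar mass of the enzyme
# nmol/min/mg -> mol/min/g -> per-enzyme turnover (1/min) -> 1/s
kcat = vmax * 1e-9 * 1e3 * enzyme_mw_kda * 1e3 / 60.0
print(f"Vmax ~ {vmax:.0f} nmol/min/mg, Km ~ {km:.0f} uM, "
      f"kcat/Km ~ {kcat / (km * 1e-6):.2e} M^-1 s^-1")
```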
Statistical Analysis-Data are expressed as the mean ± S.D. of three independent experiments, with quadruplicate assays for each experiment. Student's t test was used to determine the statistical significance of differences between means. Significance was defined as p ≤ 0.05.
Identification and Characterization of P. anserina NAT Enzymes and Their Activity toward Aromatic Amine
Substrates-AA are an important class of chemicals that are used in the manufacturing of pesticides, dyes, and pharmaceuticals. AA are also found in tobacco smoke and food pyrolysis products (18). Many AA are toxic for living organisms, and several AA are recognized as carcinogens. It has been suggested that certain soil fungi can detoxify and tolerate toxic AA (19). Therefore, fungal models are of interest to identify and understand mechanisms responsible for tolerance to toxic AA in eukaryotic organisms (7,8).
Four well-known fungal species (Fusarium graminearum, Phycomyces blakesleeanus, P. anserina, and Rhizopus oryzae) from different ecosystems, for which complete genome sequences were available, were screened for radial growth in the presence of three toxic AA: 3,4-DCA (pesticide residue), 2-AF (carcinogen), and 4-BOA (chemical intermediate) (14). The growth of R. oryzae and P. blakesleeanus was almost completely abolished by these three AA at concentrations of 100-250 μM in standard minimal growth medium. Under the same conditions, little effect on growth was observed in the other two species studied (P. anserina and F. graminearum) (data not shown), suggesting that these two species have mechanisms of tolerance to AA. We aimed to decipher the mechanisms underlying fungal survival in AA-contaminated environments.
Tolerance to potentially toxic xenobiotics often depends on specific biotransformation pathways catalyzed by endogenous enzymes, and in particular, xenobiotic-metabolizing enzyme, such as cytochrome P-450 varieties or glutathione S-transferases. AA may be biotransformed via several routes involving different types of xenobiotic-metabolizing enzyme, in particular, arylamine NATs (20). In eukaryotes, NAT enzymes have long been known to biotransform AA (20). However, it is still unclear what role NAT enzymes actually play in either preventing or enhancing toxic response to AA (21). So far, the studies on the knock-out NAT mouse models (21-23) have not demonstrated clearly the relevance of this pathway to AA tolerance in living organisms.
BLAST analysis indicated that P. anserina had two putative NAT enzymes and that F. graminearum had three such enzymes, whereas the two sensitive fungi had no genes encoding NAT enzymes (see supplemental Table 1 and supplemental Fig. 2). A broader BLAST screening of the complete genome sequences of eumycete fungi showed that many of these fungi had NAT genes (see supplemental Table 1 and supplemental Fig. 2). We investigated the possible role of these genes in AA tolerance, focusing on the filamentous ascomycete P. anserina, which is highly tolerant to 2-AF, 3,4-DCA, and 4-BOA and amenable to reverse genetics techniques. The two NAT genes present in the genome of P. anserina (24) encode two putative NAT enzymes, PaNat1 and PaNat2 (Coding Sequence Regions (CDS) numbers: Pa_2_13150 and Pa_4_4860, respectively). These genes are expressed as an expressed sequence tag was identified for each of these genes in the data base (Podospora anserina Genome Project). The P. anserina PaNat1 and PaNat2 genes encode two polypeptides, which, at 333 and 303 amino acids in length, are larger than any previously described NAT enzyme (20). The P. anserina NATs contain all the known NAT-specific functional motifs (25). Sequence analyses (see supplemental Fig. 2) showed the percentage of identity between the P. anserina NAT isoforms (32%) to be lower than that between NAT isoforms in any other eukaryotic species (67-94%). This unusually low level of identity between two paralogous eukaryotic NAT enzymes may reflect functional divergence (26). The percentage of identity between the two P. anserina NAT and the other predicted fungal NAT proteins were found to range from 15% (with Batrachochytrium dendrobatidis NAT) to 55% (with Chaetomium globosum NAT1 and NAT2) (supplemental Fig. 2). PaNAT1 and PaNAT2 were found to share around 30% identity with a newly characterized NAT isoform (called FDB2) from Fusarium verticilloides, which is involved in benzoxazolinone metabolism (27). When compared with characterized mammalian NAT enzymes such as human NAT1 and NAT2, identities were found to be around 25-30% (data not shown). Protein sequence identities between PaNAT1 and PaNAT2 and known bacterial NAT enzymes such as the Mycobacterium smegmatis or S. typhimurium NAT isoforms were around 15-20% (data not shown). Predicted fungal NAT enzymes and the mammalian and bacterial isoforms also differ in their protein sequence lengths (supplemental Fig. 2). Almost all mammalian and bacterial NAT enzymes identified so far are less than 295 amino acids long (28). On the contrary, all fungal NAT identified in this study (including PaNAT1 and PaNAT2) range between 303 and 387 amino acids (supplemental Fig. 2). So far, no NAT enzyme has been identified in plants.
We purified recombinant P. anserina NAT isoforms and showed that they were readily detected with an anti-NAT antibody (supplemental Fig. 3). We further characterized the purified P. anserina NATs by investigating whether they catalyzed the AcCoA-dependent acetylation of several AA (Fig. 1a and Table 1). We tested a series of different substrates, including a carcinogen (2-AF), drugs (SMX, SMZ, 4-AS, 5-AS, INH, HDZ; definitions are available in Table 1), industrial chemical intermediates (4-BOA, 4-EOA, 4-PD, 4-ANS, 4-AMV; definitions are available in Table 1), and pesticide residues (4-BA, 4-IA, 3,4-DCA; definitions are available in Table 1). Kinetic parameters (V_m, k_cat, and K_m) were estimated for aromatic NAT substrates (14) (Fig. 1, b and c). P. anserina NAT enzymes were highly active against most of the AA substrates tested, with PaNat2 systematically more active than PaNat1 (Table 1 and Fig. 1, b and c). The catalytic efficiency (k_cat/K_m) of PaNat2 was up to 80 times higher than that of PaNat1 (Fig. 1c). The catalytic efficiency of PaNat1 with 3,4-DCA was 2.5 times higher, and that of PaNat2 five times higher, than that of the Pseudomonas aeruginosa NAT, which is currently considered to be the most efficient NAT enzyme ever described, particularly with 3,4-DCA (29). When compared with the two NAT isoforms from the plant symbiotic bacterium Mesorhizobium loti, the catalytic efficiency of PaNAT2 toward 3,4-DCA (k_cat/K_m = 17,400 M⁻¹ s⁻¹) was 220 and 120 times higher than that of M. loti NAT1 and M. loti NAT2, respectively (14).
Table 1. N-Acetylation of known aromatic NAT substrates by purified recombinant P. anserina enzymes (PaNAT1 and PaNAT2). The rate of hydrolysis of acetyl-CoA (AcCoA) (nmol·min⁻¹·mg⁻¹ of NAT) was measured in the presence of purified PaNAT1 or PaNAT2, AcCoA (400 μM), and aromatic substrate (500 μM).
Effects of the Targeted Disruption of NAT Enzymes on the Tolerance of P. anserina to Toxic AA-We investigated the functions of the proteins encoded by the PaNat1 and PaNat2 genes in P. anserina, focusing particularly on their contribution to AA tolerance, by carrying out targeted gene disruption (see "Experimental Procedures" and supplemental Fig. 1). The phenotypes of mutants lacking either one (ΔPaNat1 or ΔPaNat2) or both (ΔPaNat1/2) NAT genes were compared with that of the WT strain (11). No obvious differences in key biological features, including growth, differentiation, defense against competitors, aging, sexual reproduction, and ascospore germination, were observed (data not shown). We then evaluated the AA tolerance of P. anserina WT and mutant strains. We first assessed AA tolerance in a minimal medium to which selected aromatic compounds (3,4-DCA, 2-AF, 4-BOA) were added. All the fungal isolates (WT and mutants) were screened by assessing radial growth for 3 days. The growth of strains lacking PaNat2 was strongly impaired in the presence of 2-AF, 3,4-DCA, or 4-BOA, whereas strains lacking PaNat1 were less affected and grew similarly to the WT (Fig. 2a). Sensitivity to AA cosegregated in crosses with the phleomycin resistance marker gene used to inactivate PaNat2, and the introduction of a PaNat2 gene by cotransformation with a hygromycin B resistance marker restored WT levels of growth in the ΔPaNat2 mutants. Hypersensitivity to AA therefore resulted from PaNat2 deletion. The WT and mutant strains grew similarly in the presence of chemically synthesized N-acetylated forms of 2-AF, 3,4-DCA, and 4-BOA (Fig. 2a). The N-acetylation of these three AA by PaNat2 therefore seems to be a key mechanism underlying tolerance to these toxic aromatic chemicals. Measurements of N-acetylation activity in various fungal extracts confirmed that 3,4-DCA, 2-AF, and 4-BOA were N-acetylated principally by PaNat2 (Fig. 2b). A low level of AA acetylation was detected with the ΔPaNat1/2 extracts (Figs. 2b and 3a). This is likely due to background acetylation by non-NAT acetyltransferases present in P. anserina.
Detoxification of the Highly Toxic Pesticide Residue 3,4-DCA and Bioremediation Applications Using Experimentally Polluted Soils-We characterized the role of PaNat2 in tolerance to AA in more detail, focusing on 3,4-DCA, a highly toxic pesticide-derived AA persistent in soil, surface water, and groundwater. This compound is the major breakdown product of the phenylamide herbicides diuron, linuron, and propanil (30). The N-acetylated form of 3,4-DCA has been shown to be much less toxic than the parental compound (19). We therefore investigated whether 3,4-DCA was acetylated in vivo by WT and mutant P. anserina strains. Similar amounts of the N-acetylated form of 3,4-DCA were detected by HPLC in the liquid medium of WT and ΔPaNat1 strains grown in the presence of a toxic dose of 3,4-DCA (250 μM; Fig. 3a). By contrast, very low levels of acetyl-3,4-DCA were found in the media of strains lacking PaNat2, mainly due to the very poor growth of these strains in the presence of 3,4-DCA. Acetyl-3,4-DCA generation was time-dependent. After 3 days of incubation, 45% of the 3,4-DCA had been biotransformed into its acetylated product, 3,4-dichloroacetanilide, in WT cultures, versus 38% in ΔPaNat1 cultures and only 5% in ΔPaNat2 cultures (no significant differences were found between ΔPaNat2 and ΔPaNat1/2; Fig. 3a). No other 3,4-DCA metabolite was detected, suggesting that the PaNat2-dependent N-acetylation of 3,4-DCA was the main biotransformation pathway in vivo. Lettuce (L. sativa) seeds have been shown to be highly sensitive to 3,4-DCA (at concentrations >10 mg/kg of soil), with the complete abolition of germination and growth (31). After 7 days (Fig. 3b), no seed germination was observed in 3,4-DCA-contaminated medium (200 μM) or in 3,4-DCA-contaminated medium previously incubated with the ΔPaNat1/2 strain. Conversely, seeds were able to germinate and grow in contaminated medium previously incubated with WT P. anserina. Thus, PaNat2 is sufficient for the detoxification of 3,4-DCA in vivo. These data also suggest that no toxic (at least for the seeds) fungal compound was released by P. anserina strains.
P. anserina is found on herbivore dung in nature but can grow in soil in the presence of plant debris. This species reproduces by sexual means only, with mating occurring only between partners of opposite mating types (32). The spread of this non-pathogenic fungus is therefore easy to control, making it an attractive candidate for safe bioremediation. As proof of principle, we assessed the capacity of the NAT-dependent acetylation pathway of P. anserina to N-acetylate 3,4-DCA present in soil samples. For this purpose, we inoculated soils highly contaminated with 3,4-DCA (final concentration 25 mg/kg of soil) with WT or ΔPaNat1/2 P. anserina strains and incubated the mixtures at 25°C for 2 days. We then extracted 3,4-DCA and its acetylated metabolites for detection by HPLC (Fig. 4a). Acetylated 3,4-DCA was readily detected in soil samples incubated with WT P. anserina. The amount of acetyl-3,4-DCA was found to depend on the amount of fungus used for soil inoculation (Fig. 4a, black and dotted lines). We found that 40% of the 3,4-DCA present could be N-acetylated within 48 h (Fig. 4b). In the same conditions, no acetylation of 3,4-DCA was detected with ΔPaNat1/2 P. anserina (Fig. 4a, dashed line, and 4b). We analyzed L. sativa seed germination and growth in 3,4-DCA-contaminated (80 mg/kg of soil) soils after incubation with WT or ΔPaNat1/2 P. anserina strains for 72 h (Fig. 4c). L. sativa seed germination and early growth were completely abolished in soils contaminated with 3,4-DCA. No seed germination or growth was observed with 3,4-DCA-contaminated soil inoculated with the ΔPaNat1/2 strain (Fig. 4c). Conversely, in contaminated soil treated with WT P. anserina, seed germination and growth were restored to the levels observed with soil treated with chemically acetylated 3,4-DCA (Fig. 4c). Thus, the inoculation of a 3,4-DCA-contaminated soil with a P. anserina strain harboring a functional NAT-dependent acetylation pathway leads, in situ, to significant detoxification of this toxic compound, making possible the germination of L. sativa seeds and the early growth of the seedlings. Concentrations around 100 μg/kg of 3,4-DCA have been reported in contaminated soils (33). Our study shows that even at higher 3,4-DCA concentrations (25 mg/kg), P. anserina mediates efficient bioremediation of this compound in soil.
Figure 3 legend (excerpt): At different time points, acetyl-3,4-DCA and 3,4-DCA were detected in the growth medium (absorbance at 254 nm) and quantified by HPLC (data were normalized to take into account the molar absorbance at 254 nm of acetyl-3,4-DCA being 2.6 times higher than that of 3,4-DCA). b, WT, ΔPaNat1, ΔPaNat2, and ΔPaNat1/2 strains (1.5 g of fungal dry mass) were grown in M2 medium in the presence of 400 μM 3,4-DCA for 2 days at 27°C. M2 media were filtered and used for the preparation of agar plates (50% v/v conditioned and normal M2 medium). Sterilized L. sativa seeds (n = 10) were allowed to germinate and grow for 7 days at room temperature. The data shown are representative of three independent experiments.
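The figure legend above mentions that HPLC peak areas at 254 nm were normalized for the molar absorbance of acetyl-3,4-DCA being about 2.6 times that of 3,4-DCA. A minimal sketch of that correction and of the resulting conversion fraction is shown below; the peak areas and time points are invented for illustration only.

```python
# Minimal sketch of converting HPLC peak areas at 254 nm into a mole fraction of
# 3,4-DCA converted to its N-acetylated form, correcting for the ~2.6x difference
# in molar absorbance noted in the figure legend. Peak areas are hypothetical.

ABSORBANCE_RATIO = 2.6  # acetyl-3,4-DCA vs. 3,4-DCA at 254 nm

def fraction_acetylated(area_dca: float, area_acetyl_dca: float) -> float:
    """Mole fraction of 3,4-DCA converted to its N-acetylated form."""
    mol_dca = area_dca                         # arbitrary molar units
    mol_acetyl = area_acetyl_dca / ABSORBANCE_RATIO
    return mol_acetyl / (mol_dca + mol_acetyl)

# Hypothetical peak areas (3,4-DCA, acetyl-3,4-DCA) for a culture sampled over 3 days.
time_points_h = [24, 48, 72]
areas = [(900.0, 260.0), (620.0, 700.0), (480.0, 1020.0)]

for t, (a_dca, a_ac) in zip(time_points_h, areas):
    print(f"{t:3d} h: {100 * fraction_acetylated(a_dca, a_ac):.0f}% acetylated")
```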
These results pave the way for use of the fungal NAT metabolic pathway in the bioremediation of AA pollution in soils. The potential of fungal metabolic pathways for the bioremediation of AA pollution, particularly in soils contaminated with aniline pesticide residues such as 3,4-DCA, has been little studied. Certain fungi and bacteria have nonetheless been shown to biotransform 3,4-DCA to its acetylated form (19,34). In the bacterium M. loti, a NAT isoform has been shown to acetylate 3,4-DCA (14). In plants, glucosylation of 3,4-DCA has been described, but this pathway is considered ineffective at detoxifying this AA (35). Our results should facilitate prospective studies of AA bioremediation and the rationalization of future strategies based on the fungal NAT pathway for AA detoxification. In addition, our findings further emphasize that certain well-characterized fungi may constitute a more efficient alternative model, with a reasonable cost and a low environmental impact.
Studies on model fungi have provided much of our understanding of biological processes. Such models also help to understand the general mechanisms by which living organisms protect themselves against potentially toxic effects of natural products or xenobiotics present in their environment (7,8). The NAT-dependent xenobiotic biotransformation pathway is found in many organisms ranging from bacteria to humans (26). However, the relevance of this pathway to AA tolerance in living organisms has remained unclear. So far, studies done on knock-out NAT mouse models have not demonstrated a role for NAT enzymes in preventing AA toxicity (21). Our P. anserina model provides the first clear molecular and functional evidence indicating that the NAT-dependent xenobiotic-biotransformation pathway can afford complete tolerance toward toxic AA in a eukaryotic organism. In addition to the existing knock-out mouse model, our P. anserina model should be helpful to uncover the potential endogenous functions of NAT enzymes. Overall, our data underline the role of certain xenobiotic-metabolizing enzymes in the adaptation of organisms to their chemical environment and emphasize the potential biotechnological applications of such enzymatic pathways.
Figure 4 legend: Remediation of soils contaminated with 3,4-DCA. a, WT or ΔPaNat1/2 (0.3 and 0.9 g of fungal dry mass) was used to inoculate soils (3 g) contaminated with 3,4-DCA (25 mg/kg of soil), which were then incubated for 2 days at 25°C. At various time points, 0.5 g of soil was sampled, and 3,4-DCA and acetyl-3,4-DCA were extracted with absolute ethanol and quantified by HPLC. The dashed line corresponds to ethanol extractions from soil incubated with ΔPaNat1/2 (0.9 g); the solid and dotted lines correspond to ethanol extractions from soil incubated with 0.3 and 0.9 g of WT P. anserina, respectively. b, time course of the disappearance of 3,4-DCA (conversion to its acetylated form) from contaminated soil treated with ΔPaNat1/2 (0.9 g) (triangles) or WT P. anserina (0.9 g) (squares). Error bars indicate S.E. c, 3,4-DCA-contaminated soils (20 g/pot, 80 mg of 3,4-DCA/kg of soil) were inoculated every 24 h (over a period of 72 h) with 0.5 g of WT or ΔPaNat1/2 strains and incubated for 72 h at 25°C. L. sativa seeds (20/pot) were sown and allowed to germinate and grow for 8 days at 25°C. Controls were set up with acetyl-3,4-DCA (80 mg/kg of soil) and H2O.
"Biology",
"Chemistry",
"Environmental Science"
] |
Smart automatic petrol pump system based on internet of things
ABSTRACT
INTRODUCTION
Petrol pumps are operated manually, making fuel dispensing and filling a time-consuming procedure [1]. Technological progress has improved the safety and security of people and their property and has made life increasingly convenient [2][3][4]. In particular, one of the most significant benefits of modernizing gas stations is offering services to customers, especially reliability and security [5][6][7]. A smart gas station has advanced systems through which some of its equipment, such as lighting, can be controlled; it can also rationalize energy consumption and perform other functions by using sensors [8].
In general, designing a smart fuel pump is one way to assess the level of automation of a gas station system [9,10]. The concept of a smart petrol pump includes three basic characteristics. Firstly, monitoring through sensor networks is essential to obtain data or information concerning the station. Secondly, control mechanisms use communication between devices to allow automation and remote access. Finally, user interfaces such as smartphones and PCs allow users to assign priorities and present information about these priorities to people [5,[11][12][13][14].
In recent years, various approaches have been implemented to design smart petrol pumps. This paper aims to review current approaches for designing low-cost smart petrol pump systems that are controlled and monitored using microcontrollers and that exploit modern technology, such as mobile phones, to control numerous devices based on IoT [15]. The remainder of this paper is organized as follows. Section 2 gives the background of internet of things (IoT) and radio frequency identification (RFID) concepts, structure, and components. Section 3 presents the smart petrol pump using RFID and IoT. Section 4 discusses the drawbacks of state-of-the-art works, and Section 5 presents conclusions and future work.
IOT USING RFID TECHNOLOGY ARCHITECTURE
In this section, an overview of internet of things (IoT) architecture and its application to smart petrol pumps is presented based on prior research.
Overview of internet of things (IoT) architecture
It is widely accepted that the next-generation web will be the IoT, which allows people and things to be connected anytime, anywhere, with anything and anyone, ideally using any path/network and any service [16][17][18], as shown in Figure 1. Radio frequency identification technology is generally seen as a key enabler of the internet of things, thanks to its ability to help machines or computers identify objects and track an enormous number of uniquely identifiable items, and to its low cost, small size, and low-maintenance features [19,20]. For example, the electronic product code class-1 generation-2 (EPC C1 G2) ultra high frequency (UHF) passive RFID protocol allows low-cost tags and has been widely used in supply chains [20]. Networked RFID systems connected through servers are required because of the limited reading range and the simple structure of tags, and they ordinarily have a hierarchical structure consisting of three principal layers: an upper network, a middle network, and a lower network, as shown in Figure 2. There are three primary kinds of components: servers, readers, and tags, which are connected in a three-level hierarchy [20]. In the upper network, servers connect to reader clusters via wired connections (such as universal serial bus (USB), recommended standard-232 (RS-232), Ethernet, etc.) or wireless connections such as wireless fidelity (WiFi), global system for mobile communications/general packet radio services (GSM/GPRS), global positioning system (GPS), Zigbee, and worldwide interoperability for microwave access (WiMAX). One reader cluster may have one reader or multiple readers [21,22]. Servers (database servers and remote control servers) can be ordinary computers [23]. They store and process all data reported from readers. Servers are also gateways to the external Internet and can be accessed by mobile phones and other computers via online services [24]. In the middle network, readers within a reader cluster can be connected ad hoc. The aim of the middle network is to provide efficient cooperation of readers and to deal with reader collisions. In the lower network, passive RFID protocols, such as the EPC C1 G2 protocol, are adopted to mitigate tag collisions and collect sensing information from tags. The IoT system architecture is generally divided into three layers, from the field data acquisition layer (sensing layer) at the bottom to the application layer at the top [25], as shown in Figure 3. a. The sensing layer collects physical parameters such as temperature, humidity, and air composition. It comprises data acquisition devices that gather data and information using various data collection technologies such as RFID, UWB, near field communication (NFC), cameras, and sensors.
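A minimal sketch of the tag to reader cluster to server flow described above is given below, using plain Python data classes as stand-ins for the three network levels. The tag IDs, reader names, and the in-memory dictionary acting as the server database are all hypothetical; a real deployment would use an actual reader driver and a persistent database.

```python
# Minimal sketch of the layered flow: sensing (tag readings) -> middle network
# (reader cluster) -> upper network / application (server). All names are placeholders.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TagReading:            # sensing layer: data captured by a reader from a tag
    tag_id: str
    reader_id: str
    rssi_dbm: float

@dataclass
class ReaderCluster:         # middle network: groups readers, forwards their readings
    readers: List[str]
    def collect(self, readings: List[TagReading]) -> List[TagReading]:
        return [r for r in readings if r.reader_id in self.readers]

@dataclass
class Server:                # upper network / application layer: stores and serves data
    db: Dict[str, List[TagReading]] = field(default_factory=dict)
    def ingest(self, readings: List[TagReading]) -> None:
        for r in readings:
            self.db.setdefault(r.tag_id, []).append(r)
    def history(self, tag_id: str) -> List[TagReading]:
        return self.db.get(tag_id, [])

cluster = ReaderCluster(readers=["reader-1", "reader-2"])
server = Server()
raw = [TagReading("E200-3412", "reader-1", -48.0), TagReading("E200-9981", "reader-2", -61.5)]
server.ingest(cluster.collect(raw))
print(server.history("E200-3412"))
```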
Overview of RFID system architecture
RFID is an automatic identification technology that uses wireless radio communications to uniquely identify objects or people, and it is one of the fastest growing automatic data collection (ADC) technologies, alongside barcodes, magnetic inks, optical character recognition, voice recognition, contact memory, smart cards, biometrics, and so on [29][30][31][32]. This technology is not new; in fact, it is already being used in numerous applications throughout the world. It was originally implemented during World War II to identify and verify allied aircraft, in an identification system known as identification friend or foe, which is still being used today for similar purposes [33]. RFID technology, in comparison with barcodes, overcomes certain limitations found in some applications, since it has a longer read range, supports larger memory, and does not require line of sight. RFID also transmits data wirelessly and is a read/write technology, so the data encoded in the tag can be updated or changed during the next cycle [16,34]. A basic RFID system is shown in Figure 4; it includes at least one reader and RF tags, where the reader sends data, power, and the clock to tags. The tags respond to the commands of the reader using the backscattering method. Readers contain serial data interfaces such as RS232 or RS485 to an RFID server, which captures data from readers and then intelligently filters and routes the data to the local access point and web servers [22,35,36].
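As an illustration of the serial link between a reader and the RFID server mentioned above, the following sketch polls a reader over a serial port using the pyserial package. The port name, baud rate, and the assumption that the reader emits one newline-terminated tag ID per read are hypothetical and depend entirely on the specific reader module used.

```python
# Minimal sketch of polling an RFID reader over a serial link (RS-232/RS-485 via a
# USB adapter). Port name, baud rate, and frame format are assumed, not vendor-specific.
import serial  # pyserial

def poll_tags(port: str = "/dev/ttyUSB0", baud: int = 9600, reads: int = 5):
    tags = []
    ser = serial.Serial(port, baud, timeout=1)
    try:
        for _ in range(reads):
            line = ser.readline().decode(errors="ignore").strip()
            if line:                       # e.g. an EPC tag ID string, if the reader sends one per line
                tags.append(line)
    finally:
        ser.close()
    return tags

if __name__ == "__main__":
    for tag in poll_tags():
        print("tag seen:", tag)
```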
RFID tag
The tag contains the data that is transmitted to the reader when the tag is interrogated by the reader [37]. The most common tags today incorporate an antenna and an integrated circuit (IC) with memory, which is essentially a microchip. Tag types are typically grouped according to their powering mode: active, semi-passive, and passive tags. The principal properties of the three different tag types are shown in Table 1.
RFID reader
The reader is a device designed to detect and read tags in order to obtain the data stored in them, and it is also responsible for communicating that data to a server [16]. In the case of passive and semi-passive tags, the reader provides the energy required to activate or power the tag within the reader's electromagnetic field. The range of this field is generally determined by the size of the antenna on both sides and by the power of the reader. The size of the antenna is generally defined by application requirements. However, the power of the reader, which determines the strength and reach of the electromagnetic field created, is generally restricted by regulations. Every country has its own set of rules and regulations relating to the amount of power emitted at different frequencies [29].
RFID server
The RFID server is a host computer fundamentally used for data storage and information processing. The data management system can be simple local software merged into the RFID operating module [38]. In brief, the tag and reader are responsible for identifying and capturing data, while the RFID server is responsible for managing and manipulating the data transmitted.
SMART PETROL PUMP USING RFID AND IoT
Smart petrol pump system implementations in different countries can be found in much of the literature; all of these projects used an RFID card as the payment card. In [5], the paper presents the design and implementation of a smart petrol pump in which the fuel level in the gas station is measured and reported to a central server. If the fuel level is low, the central server will arrange a fuel supply for that station. Their aim is to create a website which takes the fuel level as input from the fuel station where the hardware is installed and then returns it to the site, which is accessible by the administrator and by users. The administrator can change and update the data; users cannot change the data, only view it. An ultrasonic sensor is used to measure the fuel level. In [39], the paper deals with the automation of a fuel station retail outlet; the system gives the sales and stock report to the owner every hour. In this proposal the client can access the current status of the fuel station as well as the stock maintenance through a web application. In this way, time and labor can be used effectively, and the proposed system is also designed to include security for the electronic adjustment unit in the fuel pump.
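The fuel-level monitoring idea of [5] can be sketched as follows: an ultrasonic sensor measures the distance to the fuel surface, the remaining volume is estimated from the tank geometry, and the reading is reported to a central server. The tank dimensions, threshold, station ID, and server URL below are placeholders, not values from the cited work.

```python
# Minimal sketch of ultrasonic fuel-level monitoring with reporting to a central server.
# Tank geometry, threshold, station ID, and the endpoint URL are hypothetical.
import requests

TANK_HEIGHT_CM = 200.0
TANK_AREA_CM2 = 40_000.0                    # cross-sectional area of a box-shaped tank
LOW_LEVEL_THRESHOLD_L = 2_000.0
SERVER_URL = "http://example.com/api/fuel-level"   # placeholder endpoint

def fuel_volume_litres(distance_to_surface_cm: float) -> float:
    fuel_height = max(0.0, TANK_HEIGHT_CM - distance_to_surface_cm)
    return fuel_height * TANK_AREA_CM2 / 1000.0     # cm^3 -> litres

def report(station_id: str, distance_cm: float) -> None:
    volume = fuel_volume_litres(distance_cm)
    payload = {"station": station_id, "litres": volume, "low": volume < LOW_LEVEL_THRESHOLD_L}
    requests.post(SERVER_URL, json=payload, timeout=5)

if __name__ == "__main__":
    report("station-01", distance_cm=135.0)         # ~65 cm of fuel -> 2600 L
```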
In [6], each customer holds a smart card in which a magnetic number is embedded. The reader circuit generates a signal to read this magnetic number; when the customer presents the card to the reader, the reader reads the number and passes the corresponding signal to the microcontroller. In [40], the paper uses a biotelemetry system to authenticate the individual customer with their petro card; the biotelemetry device used here is a fingerprint sensor. Drunk-driving accidents are reduced by using an alcohol sensor at petrol stations: the sensor detects the alcohol concentration in the blood from the customer's breath. In [41], the paper developed an automated fuel station management system which can overcome the drawbacks of the present system; an automatic fuel filling system was implemented using GSM and an ATmega328. This system can improve the fuelling procedure so as to make it simpler, reliable, and secure.
In [7], the aim is to implement a security system for filling petrol at petrol stations while avoiding the involvement of attendants. The RFID smart card avoids the risk of carrying cash every time and also provides a prepaid recharge feature. In [42], the straightforward and appropriate use of microcontroller and GSM technology provides complete security and automation in the distribution of fuel. It has a simple mobile-phone-operated system and a graphical user interface (GUI), and it interfaces with a fast fuel dispenser that is convenient for the consumer to operate. In this system a password is given to the client, who must enter it on the LCD provided at the fuel station; this allows the petrol company to authenticate the client, and fuel cannot be dispensed until the client is verified against the database. In [43], the fundamental aim of the project is to design a system capable of automatically deducting the amount of petrol dispensed from the customer card based on RFID technology; the paper proposes the use of RFID technology in controlling fuel dispensing for Indian cities. In [44], the proposed petrol pump station uses an Arduino Mega as the main controller for its pumping hardware, and a Texas Instruments CC3000 Wi-Fi shield is used to establish a wireless connection between the petrol pump and the control system. The petrol pump control system is built using HTML to monitor and control the pump station in terms of sales, security, and petrol quantity from up to a thousand miles away. In addition, the control system software also provides a few other functions, for example setting the petrol price and the dispensing limit. These monitoring and control procedures are accomplished by sending and receiving data between the CC3000 Wi-Fi shield and the local server. In [45], the paper implements the automation of fuel filling at a petrol station using RFID and GSM technology. The transactions are made customer friendly, i.e., the customer can operate the system easily with their smartphone [46]. An automated petrol pump was realized using GSM and RFID: in this system, every client has a dedicated RFID card which can be recharged at a few centers. The gas station is equipped with a smart card reader which identifies the amount stored on the card along with all the security details and displays it on the LCD. In [47], the paper implements the mentioned system with the aid of modern software (LabVIEW, Proteus, PIC C Compiler) to achieve highly accurate control of the required parameters, in addition to using SCADA for supervising and controlling the whole system to avoid any unexpected faults, such as a fire catastrophe, and to make the system work with high accuracy. Table 2 shows a number of designs that have already been suggested for a smart fuel pump.
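Common to several of the systems reviewed above is a prepaid RFID card whose stored balance is checked and debited before fuel is dispensed. The following minimal sketch illustrates that logic with an in-memory balance table; card IDs, balances, and the fuel price are illustrative values only.

```python
# Minimal sketch of prepaid RFID-card dispensing logic: check the stored balance,
# deduct the cost of the requested fuel, and dispense only what the balance allows.
# Card IDs, balances, and the price are hypothetical placeholders.

PRICE_PER_LITRE = 1.45
balances = {"CARD-0001": 50.00, "CARD-0002": 4.20}   # stand-in for the server database

def dispense(card_id: str, litres_requested: float) -> float:
    """Return litres actually dispensed after checking and deducting the balance."""
    balance = balances.get(card_id)
    if balance is None:
        raise KeyError(f"unknown card {card_id}")
    cost = litres_requested * PRICE_PER_LITRE
    if cost > balance:                       # cap the purchase at the remaining balance
        litres_requested = balance / PRICE_PER_LITRE
        cost = balance
    balances[card_id] = round(balance - cost, 2)
    return round(litres_requested, 2)

print(dispense("CARD-0001", 20.0))   # 20.0 litres, balance drops to 21.00
print(dispense("CARD-0002", 20.0))   # only ~2.9 litres, card is emptied
```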
RESULTS AND DISCUSSIONS
For the past few years, there has been quite a lot of research focusing on the design of prototype petrol pump systems based on RFID and IoT. However, these prototype architectures, which consist of specific feature descriptors and components, are problem-dependent. In other words, those prototype features only apply to specific problems; they cannot be reused for other problems, which limits their applicability in real scenarios. Apart from that, the smart petrol pump architectures used in these designs could only provide low-level security, and capturing high-level secure information remains very challenging [48]. Thus, advances in technology can lead to a higher level of security in the design of petrol pump systems when another tool is suggested as the core controller together with hybrid communication techniques between the devices of these pump systems and the user interfaces. Since smart petrol pumps based on RFID and IoT are still a new and growing research field, only a few studies were found, and most of them focus on low-cost features [49]. However, the main drawback of a prototype with high-level security is its expense, meaning that gaining secure features automatically increases the cost of the project [50]. Besides, elevating the performance level of the prototype model requires high-performance GPUs [51]. However, recent advancements in technology and the growth in big data have helped to overcome these issues.
Previously, an Arduino combined with a Raspberry Pi as the core of the prototype, which is one common approach, has shown success in image recognition [52]. However, in the analysis of video input data, researchers face many challenges because video sequences evolve dynamically with time, and it is difficult to extract temporal information. With continued study of video analysis using an Arduino with a Raspberry Pi, these approaches have become able to extract temporal information from video input data. Some research works have combined RFID as a payment tool with remote control of the petrol pump system, using hybrid cores that extract spatio-temporal information to raise the security level. But only a few works were found on the extraction and transmission of high-level secure information. Despite the impressive performance of smart petrol systems based on RFID and IoT, the advances achieved in image classification and low cost have not yet been matched in certain areas such as video classification and increased security. This is still an open issue in smart-petrol-pump research which many researchers are trying to solve, and it is ongoing research work.
CONCLUSION
This paper contributes a comprehensive survey on the design of prototype petrol pump systems by reviewing smart petrol pump approaches based on RFID and IoT. In summary, the smart petrol pump with a hybrid core approach has overcome the limitations encountered by traditional methods in activity recognition for video analysis and raises the security level for transmitting information and controlling the pump remotely. However, only a little research has focused on video analysis and hybrid techniques. Therefore, in future studies, researchers can focus on the extraction of high-level secure information in the design of smart petrol pumps, which will allow customers to save time and obtain reliable and more secure performance. Moreover, future research should also concentrate more on hybrid communication between devices to raise the security level of smart petrol pump designs.
"Engineering",
"Computer Science",
"Environmental Science"
] |
Melting holographic mesons by applying a magnetic field
In the present letter we use holographic methods to show that a very intense magnetic field lowers the temperature at which the mesons melt and decreases the mass gap of the spectrum along with their masses. Consequently, there is a range of temperatures for which mesons can be melted by applying a magnetic field instead of increasing the temperature. We term this effect Magnetic Meson Melting (MMM), and we are able to observe it by constructing a configuration that makes it possible to apply gauge/gravity methods to study fundamental degrees of freedom in a quark-gluon plasma subject to a magnetic field as intense as that expected in high energy collisions. This is achieved by the confection of a ten-dimensional background that is dual to the magnetized plasma and nonetheless permits the embedding of D7-branes in it. For such a background to exist, a scalar field has to be present and hence a scalar operator of dimension 2 appears in the gauge theory. We present here the details of the background and of the embedding of flavor D7-branes in it. Since our results are obtained from the gravity dual of the gauge theory, the analysis is also interesting from the gravitational perspective.
INTRODUCTION AND MAIN RESULTS
It has become increasingly accepted that an intense magnetic field is produced in high energy collisions and that understanding its effects is relevant to properly analyze experimental observations [1][2][3][4]. A tool that has proven to be very useful to study the quark-gluon plasma produced in these collisions is the gauge/gravity correspondence [5]. In this context, adding N_f massive flavor degrees of freedom to a gauge theory corresponds to embedding N_f D7-branes in its gravitational dual [6]. The incorporation of the magnetic field in this setup was done in [7,8], where said field was introduced as an excitation of the D7-branes, and the latter were considered to be a probe in a fixed ten-dimensional background. The results obtained in [8] showed that the effect of the magnetic field was to increase the dissociation temperature of the mesons along with their masses, thus providing a holographic realization of magnetic catalysis (MC). However, it is also well known from lattice calculations [9] and linear sigma models [10] that for some mesons, such as the neutral pion, the magnetic field has the opposite effect on both its mass and its dissociation temperature. It then becomes necessary to find a way within the gauge/gravity correspondence to reproduce this inverse magnetic catalysis (IMC) for meson dissociation (IMC for other physical phenomena has been reported in various holographic models [11][12][13]).
A different approach to incorporate the effects of a strong magnetic field in the correspondence was followed in [14], where the plasma contained massless matter in the adjoint representation. Trying to implement the embedding of the flavor D7-branes in the ten-dimensional uplift of the five-dimensional geometry employed in [14] turns out to be highly complicated because the compact part of the resulting geometry warps in a way that prevents an easy identification of the right 3-cycle that the D7-brane must wrap [15]. Given this complication, our approach is to construct a family of solutions to ten-dimensional type IIB supergravity that accommodates a magnetic field and has a compact five-dimensional space that factors as a warped 3-sphere, a 1-cycle, and an angular coordinate θ that determines the volume of these two spaces. By keeping the 1-cycle and the θ direction perpendicular to the rest of the spacetime, we will be able to proceed as in previous approaches [16], and have the D7-brane naturally wrapping the 3-cycle. We will see below that the construction of such a family requires the excitation of a scalar field ϕ dual to a scalar operator of dimension 2 in the gauge theory, which hence saturates the BF bound [17].
In [18] we adopted a five-dimensional effective perspective to study this family of backgrounds, and found a critical intensity b_c for the magnetic field above which the solutions become unstable.
The principal purpose of this letter is to show how, in the holographic setup that we study, a magnetic field lowers the temperature at which the meson melting transition happens, implying that for certain temperatures such a magnetic field can be used to cause this transition. Adjacent to this result, by studying mesons that are dual to perturbations on the embedding of the brane, we find that their masses decrease with the intensity of the magnetic field, as does the mass gap of the spectrum. Thus, we provide a holographic realization of IMC for meson dissociation.
Not less important, regarding the gauge/gravity correspondence, is our presentation of the ten-dimensional family of backgrounds and the embedding of D7-branes into it, both of which we constructed numerically, since analytic solutions eluded our treatment.
Numerical details of what we present here, along with further information about physical quantities affected by the magnetic field in this context, will be available in [19].
THE GRAVITATIONAL BACKGROUND
What we need is a solution to ten-dimensional type IIB supergravity with a metric that asymptotically approaches AdS₅ × S⁵, and accommodates a deformation that encodes the dual of a magnetic field in the gauge theory, while still permitting us to factor out the compact part of the metric in the way described in the introduction.
As it turns out, a general line element that allows such a solution is given by Eq. (1), where X = e^{ϕ(r)/√6}, Δ = X cos²θ + X⁻² sin²θ, the line element ds₅² of the non-compact subspace has the form of Eq. (2), and that of the 3-cycle is given by Eq. (3). The coordinate r measures a radial distance, and we expect (1) to approach AdS₅ × S⁵ as r → ∞, making the directions t, x, y, and z dual to those in which the gauge theory lives.
We see that the 1-form A parametrizes an infinitesimal rotation involving a periodic direction of the compactifying manifold that, in turn, codifies the internal degrees of freedom of the dual gauge theory. If we keep A and its exterior derivative in the cotangent space to the directions dual to those of the gauge theory, it will represent a U(1) vector potential that allows the introduction of the desired magnetic field in the latter. In the family of solutions that will provide the background for our current calculation we set A = b x dy, automatically satisfying Maxwell equations [18] and introducing a constant magnetic field F = bdx ∧ dy in the gauge theory.
Given that the scalar ϕ and the metric potentials U, V, and W that we numerically constructed in [18] are solutions to 5D gauged supergravity, their substitution in the expressions above guarantees that (1) solves the equations of motion of type IIB supergravity in 10D [20], as long as the 5-form field strength proper to this theory is given by the expression also included in [18]. This is the family of backgrounds that we will use, all elements of which possess a regular horizon at some finite r_h, providing the gauge theory with a finite temperature T = 3r_h/(2π), while the non-compact part of (1) asymptotes to AdS₅ for large r. Since the 5-form does not couple to the D7-brane, it will not appear in our current calculations and will be omitted in this letter.
EMBEDDING OF THE D7-BRANE
To find how a D7-brane is embedded in our background we must extremize the Dirac-Born-Infeld action given by Eq. (4), where T_D7 is the tension of the D7-brane, g₇ the metric induced on it, and the integration is to be performed over its world volume. An extremum of (4) can be consistently found at fixed φ, given that (1) does not depend on this coordinate and because the direction that φ represents remains orthogonal to the rest of the spacetime. Notice that achieving this orthogonality is what made the introduction of ϕ necessary.
Concerning the 3-cycle in (1), we notice that its volume depends on the position θ and, regardless of the value of b, this cycle becomes maximal at θ = 0, while for b = 0 it reduces to S³. For non-vanishing b, the 3-cycle gets tilted towards the five-dimensional non-compact part of the spacetime in a manner that is volume preserving within the eight dimensions of these two spaces together.
We see then that a D7-brane can consistently extend along the directions of ds₅² and dΣ₃², and since the volume of this subspace depends solely on θ and r, an embedding that extremizes (4) can be found by setting φ to a constant and determining the right profile for θ(r). This embedding becomes supersymmetric at zero temperature when we turn off ϕ and A.
The expressions to follow simplify significantly if they are written in terms of χ(r) ≡ sin θ(r), so that the line element induced on the D7-brane takes the form of Eq. (5), where the wrapping factor simplifies to Δ = X + χ²(X⁻² − X). We find the embedding by varying (4) with respect to χ after substitution of (5) and solving the resulting equation.
The behavior of the embedding close to the boundary is given by χ = M̄π/(√2 r) + ..., where M̄ is related to the quark mass by M_q = M̄√λ/2, with λ the 't Hooft coupling. For values of M̄ not too large in comparison to the temperature, the embedding at any r remains close enough to the equator, χ(r) = 0, and the brane falls through the horizon as r → r_h, receiving the name of black hole embedding. On the contrary, for large enough values of M̄ the embedding stays distant from the equator and the brane does not touch the horizon, so it is referred to as a Minkowski embedding. There is an intermediate range of M̄ values for which there are both types of embeddings, and a thermodynamic analysis is necessary to determine which one is favored.
Once a D7-brane has been introduced in this background, open strings with both ends on it can exist. The low energy states of these strings are dual to mesons, i.e., quark-antiquark bound states, in the gauge theory. These states are codified as excitations of the D7-brane governed by the DBI action, and their spectrum can be determined by finding the stable vibrational and U(1) perturbations of the brane. It turns out [16,21] that for Minkowski embeddings the spectrum is discrete and has a non-vanishing mass gap, while for black hole embeddings it is continuous and gapless. From this it is concluded that Minkowski embeddings are dual to a phase of the gauge theory where stable mesons exist, while black hole embeddings correspond to a different phase in which the mesons have dissociated or melted. When the ratio M̄/T is considerably above the value at which the system transits from one phase to the other, M̄ is related to the mass gap of the meson spectrum [22][23][24][25], so we see that the transition is governed by how this physical quantity compares to the temperature.
If the transition between embeddings is thought of at fixed M̄ but changing temperature, we see that at low temperatures, represented by small r_h, the D7-brane does not fall through the horizon, and stable mesons exist. As the temperature increases the system gets to a point in which the brane does fall through the horizon and the mesons melt.
MAGNETIC MESON MELTING (MMM)
To visualize our main result, in Fig. 1 we display several profiles at the same temperature T = 3/(4π), grouped by their value of M̄. Our first finding is that for stronger magnetic fields the brane bends closer to the horizon, lowering the value of T/M̄ at which the mesons melt. A second observation is that there is a range of M̄, 0.38 serving as an example, for which the intensity of the magnetic field can change the embedding from Minkowski type to black hole. This means we can keep T/M̄ fixed and use the magnetic field to drive the transition, which will happen at a certain critical intensity b_mmm. Importantly, since the bound b_c on the intensity of the magnetic field is inherited from the analysis in [18], this melting can only occur for a certain range in T/M̄. To study this transition more carefully we compute the free energy associated with a number of embeddings of both types, covering those close to the transition. The free energy is given by the product F = T S_D7 of the temperature T characterizing the background and S_D7, which is the renormalized Euclidean continuation of (4) evaluated on the corresponding solution χ(r).
The details of the renormalization of S_D7, which will be available in [19], indicate the existence of some freedom in choosing the renormalization scheme. Since all the qualitative results are scheme independent, in this letter we work in a fixed scheme whose particulars are not relevant, and leave other instances for [19].
In Fig. 2 we fix T = 3/(4π) and show the behavior of F as a function of b/T² for three values of M̄/T that permit both phases for different intensities of the magnetic field. These plots explicitly show that the phase transition can be driven by the magnetic field and demonstrate our main result. The inset shows the difference in free energy between the black hole and Minkowski embeddings close to the melting point of the mesons for the case M̄/T = 1.53. In this example b_mmm/T² = 2.52.
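A minimal numerical sketch of how a critical intensity such as b_mmm can be read off from free-energy data like that of Fig. 2 is given below: the free energies of the two branches are tabulated as functions of b/T² and the crossing point is located where their difference changes sign. The sample branch values are invented; only the procedure is illustrated.

```python
# Minimal sketch: locate the magnetic-field intensity at which the free energies of
# the Minkowski and black-hole branches cross. Branch values are hypothetical.
import numpy as np

b_over_T2 = np.linspace(2.0, 3.0, 11)
F_minkowski = -1.00 - 0.05 * b_over_T2           # hypothetical smooth branches
F_blackhole = -1.21 + 0.03 * b_over_T2

diff = F_minkowski - F_blackhole                 # transition where this changes sign
idx = np.where(np.diff(np.sign(diff)))[0][0]

# Linear interpolation between the two bracketing grid points.
b0, b1 = b_over_T2[idx], b_over_T2[idx + 1]
d0, d1 = diff[idx], diff[idx + 1]
b_mmm = b0 - d0 * (b1 - b0) / (d1 - d0)
print(f"b_mmm / T^2 ~= {b_mmm:.2f}")
```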
Extending the thermodynamic analysis of the transition requires finding the temperature dependence of the free energy at fixed magnetic field, the entropy density computed from it, and the specific heat C_b. This will be presented in [19].
MESON SPECTRUM
The spectrum of the mesons in the gauge theory can be determined by finding the stable normal modes of the Minkowski embeddings [22][23][24][25][26] for excitations of either vibrational modes or those of the world volume U(1) field. To demonstrate the effect that the magnetic field has over the meson spectrum, in this letter we will only study those that correspond to perturbations of the embedding in directions perpendicular to it and leave other sectors to be presented in [19].
A general excitation of this kind can be implemented by writing χ(X) = χ₀(r) + δ χ₁(X) and φ(X) = φ₀ + δ φ₁(X), where the functions with subindex 0 are the solutions discussed earlier and the perturbations, denoted by a subindex 1, can depend on any coordinate of the ten-dimensional space. The equations of motion for these perturbations show that χ₁(X) and φ₁(X) decouple from each other, and furthermore, their dependence on the 3-cycle coordinates, on r, and on the gauge theory directions can be factored. Once reduced over the 3-cycle, the dependence of the perturbations on its coordinates is related to the R-charges of the states in a Kaluza-Klein tower. For concreteness we will focus on perturbations χ₁(X) that do not depend on the coordinates of the 3-cycle and leave other examples for [19].
Even if the magnetic field makes our gauge theory anisotropic, it remains invariant under translations, so we can write χ₁(X) = e^{i(ωt − k_μ x^μ)} χ_r(r) in the bulk. Lorentz invariance is broken by the non-vanishing temperature, so what is understood as the mass of a meson is frame specific. Our choice follows [16], and it is to consider the mass to be defined in the rest frame of the mesons, which is then given by ω with vanishing three-momentum k_μ = 0. The allowed values of ω are those for which χ₁ remains normalizable near the boundary. By following this procedure we obtain a discrete spectrum of frequencies from which we compute the corresponding masses m_n, which have a non-vanishing value m₀ for the ground state, establishing the existence of a mass gap.
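The normal-mode search just described is essentially a shooting problem: for a trial ω the linearized equation is integrated outward and ω is adjusted until the boundary condition selecting the normalizable solution is satisfied. The sketch below illustrates this with a schematic radial equation (a simple potential-well stand-in), not the actual D7-brane fluctuation equation, which depends on the numerical background of [18].

```python
# Minimal sketch of a shooting method for normal-mode frequencies. The radial
# equation is a schematic stand-in, not the D7-brane fluctuation equation.
from scipy.integrate import solve_ivp

R_MAX = 6.0

def rhs(r, y, omega):
    chi, dchi = y
    V = r**2                                  # schematic confining "potential"
    return [dchi, (V - omega**2) * chi]

def boundary_value(omega):
    # Integrate outward from a regular initial condition; for a normal mode the
    # non-normalizable component, probed by chi at R_MAX, must vanish.
    sol = solve_ivp(rhs, (0.0, R_MAX), [0.0, 1.0], args=(omega,), rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

def find_mode(lo, hi, tol=1e-8):
    flo = boundary_value(lo)
    while hi - lo > tol:                      # bisection on the sign change of chi(R_MAX)
        mid = 0.5 * (lo + hi)
        fmid = boundary_value(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

for bracket in [(1.5, 2.0), (2.5, 2.8), (3.2, 3.5)]:   # brackets from a coarse scan
    print(f"omega_n ~ {find_mode(*bracket):.4f}")
```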
To display the impact of the magnetic field on the spectrum, in Fig. 3 we present a plot of its first three masses as a function of b/T² at M̄/T = 1.53.
DISCUSSION
We have exhibited that the presence of an intense magnetic field lowers the temperature at which mesons melt, and demonstrated that this transition can be alternatively driven by applying a magnetic field.
It is important to remember that what we study here is neither the confinement/deconfinement transition, since the existence of a horizon implies that the adjoint degrees of freedom are deconfined, nor the chiral symmetry breaking, since in both phases of our system the expectation value of the chiral condensate is different from zero. It is expected that the transition we study does not coincide with any of the former two, since lattice calculations [27][28][29] show that the temperature at which the mesons dissociate is indeed higher than that at which deconfinement happens. So, what we explicitly report in this letter is an inverse magnetic catalysis (IMC) for meson dissociation.
IMC for the confinement/deconfinement transition has been studied using holographic methods in [12,13], while IMC for chiral symmetry restoration has been reported using different approaches [30][31][32][33][34], including the holographic Sakai-Sugimoto model at nonzero chemical potential [11]. We mention this because the geometric reading in [11] is very similar to ours, since applying a magnetic field makes the D8 and anti-D8 branes bend closer to the horizon, causing an early restoration of the chiral symmetry. So, even if the chiral and meson melting IMC are different processes, it seems that their dual gravitational descriptions are of a similar nature.
The inset in Fig. 2 shows the typical pattern of a first order phase transition, making it clear that a more extensive thermodynamic analysis would be enriching, but since this goal is beyond the scope of the present letter, it will be left for future research.
Another result of significant physical relevance is the decrease of the masses in the meson spectrum, including its mass gap, that the magnetic field causes, as can be appreciated in Fig. 3. If this effect extrapolates to QCD, it implies that the masses at which some resonances are detected experimentally in non-central collisions are shifted by the influence of the magnetic field produced in them, requiring some experimental explorations and conclusions to be adapted accordingly. Fig. 3 also exhibits a value of b/T² at which the mass gap vanishes, providing further evidence that the magnetic field dissociates mesons.
Part of our results provide a non-perturbative confirmation of those found in [10], where a one-loop calculation is done in a linear sigma model to find that the mass of the neutral pion is diminished by the application of a magnetic field, and are in complete agreement with the lattice results in [9] for the same pion. The spectrum of the η′ meson was already computed in the holographic context [35], but as the authors conclude, their calculation, which indicates an increase of the mass with the magnetic field, should be corrected by taking back-reaction effects into account. The same corrections should be applied to [36], where the melting transition is studied out of equilibrium. Our results provide such a correction and show that the effect of the field is indeed inverse to the one reported in [35].
Consistently with previous results, the values that the plots in Fig. 3 approach at b/T² = 0 coincide with those in [16] for M̄/T = 1.53.
Among the novel results that we will present in [19] is the existence of a conformal anomaly in the gauge theory with fundamental degrees of freedom. This anomaly is not only responsible for the renormalization scheme dependence of some physical quantities, but will also permit the study of the enhancement of direct photon production that a magnetic field, in the presence of such an anomaly, has been speculated to cause [3].
The holographic study of a hot quark-gluon plasma with massive fundamental degrees of freedom subject to an intense magnetic field is accessible through the supergravity construction presented here, so the lines of research that can be followed using it, and the results of this letter, are numerous. The dynamical nature of many of these processes puts them out of reach for lattice calculations, making the use of our holographic construction particularly appealing. One example is the aforementioned impact of a magnetic field on the luminous spectrum of this system, which we are currently investigating as an extension to the results in [37,38]. Other examples are the effect of an intense magnetic field on jet quenching or the drag force over quarks on the plasma. It is not our intention to present a comprehensive list of the projects that we are pursuing using the construction that we presented, but we would like to finish by mentioning that we expect many results to derive from our current study.
It is our pleasure to thank Gary Horowitz and Alberto Güijosa for very useful discussions, and Francisco Nettel for careful proofreading of this manuscript. We also acknowledge partial financial support from PAPIIT IN113618, UNAM.
Investigating the Chemically Homogeneous Evolution Channel and its Role in the Formation of the Enigmatic Binary Black Hole Progenitor Candidate HD 5980
Chemically homogeneous evolution (CHE) is a promising channel for forming massive binary black holes. The enigmatic, massive Wolf-Rayet (WR) binary HD 5980 A&B has been proposed to have formed through this channel. We investigate this claim by comparing its observed parameters with CHE models. Using MESA, we simulate grids of close massive binaries then use a Bayesian approach to compare them with the stars' observed orbital period, masses, luminosities, and hydrogen surface abundances. The most probable models, given the observational data, have initial periods ~3 days, widening to the present-day ~20 day orbit as a result of mass loss -- correspondingly, they have very high initial stellar masses ($\gtrsim$150 M$_\odot$). We explore variations in stellar wind-mass loss and internal mixing efficiency, and find that models assuming enhanced mass-loss are greatly favored to explain HD 5980, while enhanced mixing is only slightly favoured over our fiducial assumptions. Our most probable models slightly underpredict the hydrogen surface abundances. Regardless of its prior history, this system is a likely binary black hole progenitor. We model its further evolution under our fiducial and enhanced wind assumptions, finding that both stars produce black holes with masses ~19-37 M$_\odot$. The projected final orbit is too wide to merge within a Hubble time through gravitational waves alone. However, the system is thought to be part of a 2+2 hierarchical multiple. We speculate that secular effects with the (possible) third and fourth companions may drive the system to promptly become a gravitational-wave source.
INTRODUCTION
Detections of gravitational waves resulting from binary black holes and neutron star mergers have begun to reveal the properties of the population of double compact objects (Abbott et al. 2023). How these double compact objects form and how they end up in orbits tight enough to induce a gravitational wave inspiral are still unknown. Various promising scenarios have been proposed (see reviews by Mapelli 2020; Mandel & Broekgaarden 2022), and it seems likely that several of them contribute to the population of double compact objects (e.g., Wong et al. 2021; Zevin et al. 2021; Bouffanais et al. 2021; Stevenson & Clarke 2022; Godfrey et al. 2023). To advance our understanding and constrain their relative contributions, it is necessary to investigate the progenitors and side products of these scenarios, which may be detectable with conventional electromagnetic (EM) observations. In this work, we investigate the constraints derived from EM observations for a very massive binary system, which has been suggested to be the result of chemically homogeneous evolution (CHE). The CHE channel is one of the promising contenders for the formation of massive binary black holes (de Mink et al. 2009a; Mandel & de Mink 2016; de Mink & Mandel 2016; du Buisson et al. 2020; Riley et al. 2021).
CHE is a hypothesized mode of evolution where stars experience enhanced mixing. This allows hydrogen-rich material from the outer layers of the star to reach the central regions, where it is used to power core hydrogen burning. At the same time, helium produced in the central regions is mixed throughout the star, including the outer layers. The result is a star with a chemically homogeneous composition. Such stars stay compact and result in very massive helium stars. Eventually, after completing the subsequent burning stages, the star will most likely collapse and form a black hole (e.g., Maeder & Meynet 1987; Yoon et al. 2006).
de Mink et al. (2009a) proposed that CHE may occur in very close binary systems, where enhanced mixing may be expected as a result of tidal deformation and tidal spin up (cf. Song et al. 2013; Hastings et al. 2020). As the stars remain very compact, this type of evolution can prevent the two stars from merging and produce two very massive helium stars. This makes the CHE pathway of interest as a formation pathway for gravitational wave sources, as initially suggested independently by Mandel & de Mink (2016) and Marchant et al. (2016). Since then, several studies have further explored this pathway (de Mink & Mandel 2016; Marchant et al. 2017; du Buisson et al. 2020; Riley et al. 2021; Ghodla et al. 2023; Dorozsmai et al. 2024). Chemically homogeneous evolution has also been considered in the case of very rapidly rotating single stars, where mixing processes resulting from rotation may be responsible for keeping the stars homogeneous (Maeder & Meynet 1987; Yoon & Langer 2005; Yoon et al. 2006; Brott et al. 2011). In wider binaries, CHE may also occur for the mass-gaining star that is spun up as a result of accretion (e.g., Cantiello et al. 2007; de Mink et al. 2013; Ghodla et al. 2023). In our work, we will refer to these as "birth-spin CHE" and "accretion-induced CHE" respectively, to distinguish them from "tidal CHE" as described in the previous paragraph.
In this work, we study the evolution of a very massive binary system that is arguably the most promising candidate for CHE: the massive WR binary HD 5980 (e.g., Koenigsberger et al. 2014; Shenar et al. 2016; cf. a review of further candidates for CHE in Appendix A). Our aims are (1) to investigate the claim made by Marchant et al. (2016) that this system can be explained as a system that evolved through tidal CHE, (2) to explore the effect of variations in the uncertain mass loss and internal mixing on the channel, and their effect on our ability to match the observed properties of the system, (3) to infer its initial parameters and current evolutionary state under the assumption that it formed through tidal CHE, and (4) to speculate on its future evolution and final fate as a possible binary black hole system.
Our paper is organized as follows. In Section 2, we provide an extensive description of HD 5980 and summarize earlier work that considered a chemically homogeneous origin for this system. In Section 3, we describe the stellar evolutionary code and choices for the physical assumptions. In Section 4, we present our model grids, then discuss the window for chemically homogeneous evolution and how it depends on our assumptions for mixing and mass loss. In Section 5, we discuss our Bayesian fitting procedure for comparing HD 5980 with our models, and we present the posteriors for the physical parameters. In Section 6, we speculate on the further evolution of the system. We argue that it is a likely progenitor of a binary black hole system, and we discuss the possible role of the third and fourth companion. In Section 7, we provide a critical evaluation of the limitations of our models and discuss alternative evolutionary explanations. In Section 8, we provide a brief summary and conclusion.
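As a rough illustration of the grid-based Bayesian comparison outlined for Section 5, the sketch below assigns posterior weights to a small set of hypothetical binary-evolution models by evaluating a Gaussian likelihood of their predicted observables against assumed measurements and uncertainties, with flat priors over the grid. Neither the grid nor the "observed" numbers correspond to the actual MESA models or the measured parameters of HD 5980.

```python
# Minimal sketch of grid-based Bayesian model comparison with a Gaussian likelihood.
# The model grid and the "observed" values are placeholders, not real data.
import numpy as np

# Observables: (orbital period [d], mass A [Msun], log L_A [Lsun], surface X_H of A)
observed = np.array([19.3, 61.0, 6.35, 0.25])
sigma    = np.array([ 0.1, 10.0, 0.10, 0.05])

# Hypothetical grid: columns = (M_init, P_init) and predicted (P_now, M_now, logL, X_H).
grid = np.array([[120.0, 2.5, 14.8, 52.0, 6.20, 0.18],
                 [150.0, 3.0, 19.0, 60.5, 6.33, 0.22],
                 [180.0, 3.5, 23.5, 68.0, 6.45, 0.28],
                 [150.0, 4.0, 26.0, 63.0, 6.36, 0.30]])

predictions = grid[:, 2:]
chi2 = np.sum(((predictions - observed) / sigma) ** 2, axis=1)
log_like = -0.5 * chi2
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()                 # flat prior over grid points

for (m0, p0), w in zip(grid[:, :2], posterior):
    print(f"M_init={m0:5.0f} Msun, P_init={p0:3.1f} d -> posterior weight {w:.3f}")
```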
2. THE CASE OF HD 5980
2.1. Observational constraints
HD 5980 is a remarkable multiple system in the SMC, which contains a massive WR binary system (with component masses of about 60 M⊙, Koenigsberger et al. 2014; Shenar et al. 2016). Table 1 provides an overview of the observed properties. Despite having been observed spectroscopically since the 1960s (Feast et al. 1960), its evolutionary origin remains a puzzle to this day. The system contains a binary pair in a 19.3 day orbit and a third light source which itself is in a ∼97 day orbit around an undetermined companion (Koenigsberger et al. 2014). We will refer to the inner WR binary as HD 5980 A and B and the third (light source) and fourth (unseen) objects as HD 5980 C and D, respectively. We note that it has not been confirmed that the two binaries are gravitationally bound to each other, nor is this orbit characterized, but the literature classifies them under these naming conventions.
The inner binary, HD 5980 A&B, has an eccentric orbit, e = 0.27, and a 19.3 day period (Koenigsberger et al. 2014). Both stars exhibit high surface He-abundance (see Table 1). The inferred post-eruption effective temperature of star A lies in the range 40 − 45 kK (Georgiev et al. 2011; Shenar et al. 2016; Hillier et al. 2019). However, given the strong mass-loss rates of HD 5980, the stellar surface is effectively embedded within an extended, optically thick wind region, causing the observed value of T eff to be significantly lower than the surface temperatures predicted by stellar structure models. For example, MESA models typically define the photosphere and T eff at τ = 2/3, without modelling any optically thick winds. Because of this, we do not constrain our study based upon T eff (see also the discussion in Section 7).
HD 5980 A has been observed to undergo eruptive outburst events, with two events observed in 1993 and 1994, each lasting less than a year (Bateson & Jones 1994;Barba & Niemela 1995;Barba et al. 1995;Koenigsberger et al. 1995;Sterken & Breysacher 1997), and one inferred to have occurred in the 1960s (Koenigsberger et al. 2010).Georgiev et al. (2011) note that these bursts could be driven by the hot and cool iron opacity bumps (the bi-stability bump, Lamers et al. 1995;Vink et al. 1999).Koenigsberger et al. (2014) further argue that these luminous blue variable (LBV)-type outbursts might be related to a super-Eddington layer that turns unstable (Humphreys & Davidson 1994;Stothers & Chin 1996).Alternatively, Foellmi et al. (2008) speculate that this eruptive behavior could be linked to gravitational perturbations from the periastron passage between the A+B and C+D binaries.The cause of the outburst and the system's unusual properties are not well understood, making HD 5980 a focal point of numerous studies (see Breysacher & Perrier 1980;Koenigsberger 2004;Georgiev et al. 2011;Shenar et al. 2016;Hillier et al. 2019;Nazé et al. 2018;Ko laczek-Szymański et al. 2021).The outburst also affected the observed properties, like the mass-loss rate and temperature of HD 5980 (e.g., Koenigsberger et al. 1995;Georgiev et al. 2011).For the purposes of this paper, we consider only the quiescent, non-outburst parameters for star A.
A Chemically Homogeneous Origin
HD 5980 was first suggested to be evolving chemically homogeneously by Koenigsberger et al. (2014).They argue that the progenitor stars required to explain the massive components of HD 5980 (about 120 M ⊙ at the zero-age main sequence, ZAMS, Koenigsberger 2004) would not fit within its current Roche-lobe radius (of about 57 R ⊙ ).They further argue that HD 5980 is difficult to explain with stable binary mass transfer given the large He surface abundance and rather evolved appearance of both stars.This leaves CHE as one of the most viable solutions.They use the Binary Evolution Code (Heger et al. 2000;Petrovic et al. 2005;Yoon et al. 2006) to evolve a single system with P init = 12 d, ZAMS masses of 90 M ⊙ and 80 M ⊙ , and initial rotational velocities of 500 km s −1 , and find that this birth-spin CHE can adequately explain many of the observed parameters of star A. Schootemeijer & Langer (2018) reinforce the conclusions from Koenigsberger et al. (2014).They compute two grids of stellar models at SMC metallicity using MESA, one with low initial rotation velocities and one where the stars rotate rapidly at birth and hence evolve in part or completely chemically homogeneously.They conclude that HD 5980 cannot be explained well by their models which experience stable mass transfer.Instead, it is best explained by models with high initial rotational velocity (about 520-540 kms −1 ) that evolve chemically homogeneously (birth-spin CHE).They find corresponding initial component masses of 70 − 80 M ⊙ , depending on if the components of HD 5980 are currently core-hydrogen or core-helium burning.
Lastly, Marchant et al. (2016) present an extensive model grid of near-and overcontact binary systems that lead to tidal CHE.Based on this grid, they suggest that HD 5980 may be explained by tidal CHE, beginning as a close contact system and widening to its present-day state due to stellar wind mass-loss.An in-depth analysis of this scenario for HD 5980 was outside the scope of their work.In this work we consider the last scenario: tidal CHE as the evolutionary explanation for HD 5980.
METHOD
We use the open source Modules for Experiments in Stellar Astrophysics, MESA (Paxton et al. 2010, 2011, 2013, 2015, 2018, 2019), version 12778, to model the evolution of massive, initially closely orbiting binary star systems that evolve chemically homogeneously, closely following the approach of Marchant et al. (2016). Below we describe the physical choices and setup of our grid.
Physical Assumptions
We adopt an initial metallicity of Z = Z ⊙ /4 = 0.00425, using Z ⊙ = 0.017 and solar-scaled abundances as in Grevesse et al. (1996).We use the basic nuclear network to follow H-and He-burning, which is sufficient for our purposes.For convection, we use the Ledoux criterion and the standard mixing-length theory (Böhm-Vitense 1958) with a mixing length parameter of α = 1.5.We model semiconvection as in Langer et al. (1983), and set the efficiency parameter α sc = 1.0.Core overshooting during core hydrogen burning is incorporated as in Brott et al. (2011).
For the effect of tides on the orbital evolution, we adopt Hut (1981) for stars with a radiative envelope (see Paxton et al. 2015, for details). We initiate our models such that the orbit is circular and the spins of the stars are synchronized to the orbit at ZAMS. This is a reasonable assumption because the synchronization timescale is only a very small fraction of the main-sequence lifetime (see e.g., de Mink et al. 2009b), even when assuming less efficient tides (such as e.g., Yoon et al. 2010).

Table 1 (excerpt). Spectral types: WN6h, WN6-7, O; log 10 L: 6.35 ± 0.10, 6.25 ± 0.15, 5.85 ± 0.10; M orb [ M ⊙ ]: 61, ...
Where applicable, we compute the contact binary phase following the approach of Marchant et al. (2016), including the treatment of L2 overflow. We ignore possible heat exchange during overcontact phases, which is appropriate when modelling equal mass twin stars (Fabry et al. 2022, 2023). For the effect of magnetic fields on the transport of angular momentum, we assume the Spruit-Tayler dynamo (Spruit 2002), implemented as described in Petrovic et al. (2005). The effect of the centrifugal acceleration on the stellar structure equations is accounted for following Kippenhahn & Thomas (1970) and Endal & Sofia (1976), as described in Paxton et al. (2019).
Chemical mixing -We account for mixing of chemical elements and the transport of angular momentum by rotational instabilities.Among the relevant processes, Eddington-Sweet circulation is the dominant process, which we treat as a diffusive process.We also allow for secular (Endal & Sofia 1978) and dynamical (Zahn 1974) shear instabilities and the Goldreich-Schubert-Fricke instability (Goldreich & Schubert 1967;Fricke 1968), using f c = 1/30 following Endal & Sofia (1976) and Pinsonneault et al. (1989) as described in Heger & Langer (2000).To account for the effects of tidal deformation on mixing in very close binary systems, we follow Hastings et al. (2020) and increase the diffusion coefficient for Eddington-Sweet to D ES = 2.1 for our fiducial model grid.
The efficiency of chemical mixing is generally uncertain.We therefore explore two variations of D ES : creating one model grid to represent decreased mixing, adopting D ES = 1.0 (labeled ∼×1/2 mixing), and one model grid that represents increased mixing, where we set D ES = 4.0 (∼×2 mixing).
Stellar winds -For stellar winds, we follow the implementation in Brott et al. (2011) of Yoon et al. (2006).For hydrogen-rich stars with surface helium abundance Y s < 0.4, the recipe is taken from Vink et al. (2001), while for hydrogen-poor stars with Y s > 0.7, the prescription from Hamann et al. (1995), divided by a factor of ten, is used.For surface helium abundances between 0.4 and 0.7 (0.4 < Y s < 0.7), the wind mass-loss rate is interpolated between the calculated values for Vink et al. (2001) and Hamann et al. (1995).These recipes account for the reduction of the mass-loss rate at the metallicity of the SMC.Winds are enhanced due to rotation as in Heger & Langer (2000).
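As an illustration of this switching logic, consider the following Python sketch; the function and the quantities mdot_vink and mdot_hamann (the rates returned by the two prescriptions, before any rotational enhancement) are our own stand-ins, and the linear blend shown is one simple reading of the interpolation, not the actual MESA implementation.

def wind_mass_loss_rate(Ys, mdot_vink, mdot_hamann):
    """Blend two wind prescriptions based on the surface helium mass fraction Ys.

    mdot_vink and mdot_hamann are mass-loss rates (Msun/yr) from the
    Vink et al. (2001) and Hamann et al. (1995) prescriptions; the
    latter is divided by ten before use, as described in the text.
    """
    mdot_wr = mdot_hamann / 10.0
    if Ys < 0.4:          # hydrogen-rich surface
        return mdot_vink
    if Ys > 0.7:          # hydrogen-poor (WR-like) surface
        return mdot_wr
    # linear interpolation in Ys between the two regimes
    frac = (Ys - 0.4) / (0.7 - 0.4)
    return (1.0 - frac) * mdot_vink + frac * mdot_wr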
Stellar wind mass-loss remains one of the main uncertainties in massive star evolution, and, in particular, the winds of very massive stars (with masses above ∼ 100 M ⊙ ) are poorly constrained by observations.
Several works that focus on the optically thin winds of OB stars (e.g., Lucy 2007; Krtička & Kubát 2017; Sundqvist et al. 2019) report much lower mass-loss rates than those of Vink et al. (2000). Bestenlehner (2020) finds that the prescription from Vink et al. (2000) underpredicts the observed rates from WN5h stars by a factor of a few, while heavily overpredicting the wind mass-loss rates derived for O8V stars. This is also in line with results from Gräfener (2021) and Sabhahit et al. (2022), who suggest that the wind mass-loss rates of very massive H-rich stars are higher than those predicted by Vink et al. (2001). Additionally, for H-poor stars, Sander & Vink (2020) and Sander et al. (2020) argue that the prescription from Hamann et al. (1995) will over-predict the wind mass-loss rate for lower mass stars, but might under-predict the rate for the highest mass stars.
In order to explore the effects of uncertainties in the wind mass-loss, we consider two variations with respect to our fiducial input physics assumption: a decreased winds variation, where we reduce all wind mass-loss rates by a factor three (labeled ×1/3 Winds), and similarly an increased winds variation where we increase all wind mass-loss rates by a factor three (×3 Winds).
We account for the effect of wind mass-loss on the orbit as in Paxton et al. (2015).For the shortest period stages of evolution, this potentially underestimates angular momentum losses due to tidal coupling between the wind and the binary system (MacLeod & Loeb 2020).However, due to the uncertainties on the strength of this effect, we ignore it in our evolution models.Additionally, as discussed in Section 1, HD 5980 has been observed to experience eruptive mass-loss episodes in the past.Lacking a physical model for these outbursts, we assume that all mass is lost through steady winds (see also the discussion in Section 7).
Model Grid Setup
We evolve binary star models starting from the zero-age main sequence (ZAMS) for several two-dimensional model grids in initial period and initial component mass. For each grid, we assume that both stars in the system have equal initial masses (q = 1). The range of initial periods is 1.4 days to 5.0 days, with linear steps of 0.2 days. Initial component masses range from 1.9 to 2.45 in log(M/M ⊙ ) (≈ 79 M ⊙ to 281 M ⊙ ), with steps of 0.025 in log-space. The five computed model grids differ from each other by the value of D ES and the wind mass-loss rate, as described in the previous section.
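For concreteness, the grid axes described above can be generated as follows (a small Python sketch; the variable names are ours):

import numpy as np

# Initial orbital periods: 1.4 to 5.0 days in linear steps of 0.2 days
periods_days = np.arange(1.4, 5.0 + 1e-9, 0.2)        # 19 values

# Initial component masses: log10(M/Msun) from 1.90 to 2.45 in steps of 0.025
log_masses = np.arange(1.90, 2.45 + 1e-9, 0.025)       # 23 values
masses_msun = 10.0 ** log_masses                       # ~79 to ~281 Msun

# Every grid point is an equal-mass (q = 1) binary model
grid = [(p, m) for p in periods_days for m in masses_msun]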
We allow models to run through core helium burning. However, due to difficulties with modeling such high-mass systems, most models terminated just before or during core helium ignition. Thus, in our analysis, we cut the models' evolution at the beginning of core He-burning (which we define as the time when the central helium mass fraction is decreasing and falls below 0.994). We do not follow the evolution of models that are not evolving chemically homogeneously. That is, we stop evolving systems when the central and surface helium mass fractions differ by more than 0.2 (as in Marchant et al. 2016).
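The two stopping conditions above can be summarized schematically as follows (a sketch with our own function names; the thresholds are those quoted in the text):

def reached_core_he_burning(center_he_now, center_he_prev):
    # Core He burning begins once the central helium mass fraction
    # is decreasing and has dropped below 0.994.
    return center_he_now < center_he_prev and center_he_now < 0.994

def still_chemically_homogeneous(center_he, surface_he):
    # Following Marchant et al. (2016): abandon the system once the
    # central and surface helium mass fractions differ by more than 0.2.
    return abs(center_he - surface_he) <= 0.2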
THE WINDOW FOR CHEMICALLY HOMOGENEOUS EVOLUTION
For each input physics variation explored, we map out which initial masses and initial periods lead to chemically homogeneous evolution.We show the resulting 'window of chemically homogeneous evolution' for each model grid in Figure 1.Models that evolve chemically homogeneously are shown in blue.Different shades of blue indicate if and when the model first experienced RLOF through the L1 Lagrangian point.Nonchemically homogeneously evolving models are colored in grey, while failed models are left white.The center panel shows our fiducial input physics assumption.The black line shows the upper boundary for CHE from the fiducial model grid.For comparison, we show the fiducial boundary over each of the model grid panels.
It is interesting to know if our models experience any mass transfer.The shades of blue indicate the moment of first mass transfer; the darkest blue indicates RLOF at ZAMS, medium blue indicates RLOF post-ZAMS, and the lightest blue shows models that do not experience any RLOF before core-helium ignition (when we end our simulations).The radii of pre-main sequence stars are generally larger than ZAMS radii in single-star models, which could lead to the merger of the two objects before even reaching the main sequence.However, it is not known whether stars that form in a close binary have the same properties as single stars, so whether they would merge is an open question.
The left-most and right-most panels show the models with decreased and increased mixing efficiencies, respectively. We see that increasing the mixing efficiency expands the window for CHE: i.e., at constant initial mass, larger initial orbital periods still result in CHE. Similarly, decreasing mixing closes the window, and only systems with shorter initial periods exhibit CHE. We also find that more/fewer systems experience mass transfer in the decreased/increased mixing grid. We attribute this to more efficient mixing keeping the stars more compact. Additionally, the increased mixing grid produces CHE at larger orbital separations, causing those systems to be less likely to undergo mass transfer.
Our reduced and fiducial mixing models correspond to the standard and enhanced grid from Hastings et al. (2020), respectively. However, our models extend to much longer periods (5 days instead of 2 days) and higher masses (log(M/M ⊙ ) = 2.45, or M = 281 M ⊙ , instead of log(M/M ⊙ ) = 2, or M = 100 M ⊙ ). We can therefore only compare the bottom left of our left-most and center panels to the first column of their Fig. 7. We find that the window for CHE is slightly larger in our study. E.g., for log m 0 = 1.9, all models up to initial periods of 2 days evolve homogeneously, while Hastings et al. (2020) find that in their standard grid (our reduced mixing grid) CHE ends at periods wider than ∼ 1.75 days. We believe this to be a result of different MESA versions (Hastings et al. 2020 use MESA r10398, which predates the changed treatment of rotating stars introduced in r11532; see Paxton et al. 2019).

Figure 1. The window for chemically homogeneous evolution in initial periods and masses; every coloured square represents one MESA model. Blue coloring corresponds to models that evolve chemically homogeneously. Shades of blue indicate when the models experienced RLOF: dark blue indicates RLOF at ZAMS, which occurs through the L1 point (L1OF); medium blue shows models which experience L1OF post-ZAMS; the lightest blue shows models which do not experience any RLOF during the model run. We find that our models do not experience L2OF. Models that do not evolve chemically homogeneously are colored grey. White squares correspond to models that failed to converge. Each panel represents a variation in input physics, described in Section 3. The middle panel shows our fiducial physics assumptions; the bottom and top panels show variations with respectively decreased and increased wind mass-loss rates; and the left and right panels show variations with decreased (D ES = 1) and increased (D ES = 4) mixing efficiency, respectively. In each panel, a black line shows the edge of the chemically homogeneous window for the fiducial input physics assumption.
In the upper panel of Figure 1, we see the effect of increasing the stellar winds by a factor of three. Comparing this panel to the fiducial, we see that increasing the mass-loss rate widens the window for CHE drastically for higher mass systems. For these systems, the mass-loss rates are so high that the remaining hydrogen-rich envelope is stripped off, leaving a homogeneous core (e.g., Brott et al. 2011; Köhler et al. 2015). For initial masses m 0 > 160 M ⊙ , we find that all models appear chemically homogeneous, irrespective of their initial orbital period. In contrast, for lower mass systems we find that increasing the wind mass-loss rates reduces the window for CHE. This can be understood as a result of the increased mass loss causing the binary to widen and the stars to spin down. This spin-down reduces Eddington-Sweet circulation and hence prevents CHE.
In the bottom panel, we see the effect of decreasing the wind mass-loss rates by a factor of three.The chemically homogeneous window does not significantly change from the fiducial, although we observe a slight widening around m 0 ≈ 150 M ⊙ .Since models with lower wind mass-loss rates widen less, the tidal effects imposing synchronous rotation cause them to remain fast-spinning.Additionally, more/fewer models experience mass transfer in the decreased/increased wind mass-loss grid.This is a combined effect of the wind widening and envelope stripping.The decreased wind mass-loss models widen much less, and, so, if any given star increases in radius, it is more likely to overflow in a tight binary as opposed to a wide one.Additionally, if stars are stripped less, as in the decreased mass loss grid, then they have more mass and correspondingly larger radii.
FINDING THE BEST FITTING MODEL FOR
HD 5980
Bayesian Fit Procedure
Bayes' Theorem states that the probability of a model given the data, P(M|D), is equal to the probability of the data given the model, P(D|M), times the probability of that model (also known as the prior), P(M), divided by the probability of the data, P(D):

P(M|D) = P(D|M) P(M) / P(D).
Probability of the data given the model -We estimate P(D|M) using the sum of the squared differences between the observed data and the corresponding model predictions, scaled by the allowed error of each parameter squared. We define

χ 2 = Σ i (D i − M i ) 2 / σ i 2 ,

where D i , M i , and σ i respectively represent each observed and corresponding predicted parameter i and its allowed error. We consider seven observables, namely, the orbital period P orb , the masses of both components, M A and M B , their luminosities, L A and L B , and the surface hydrogen mass fractions, X HA and X HB . We do not use temperature, as we believe the observed value of temperature is a lower bound (see Section 7). We adopt the standard deviation of each parameter for its allowed error, σ, except for the case of the orbital period. The observed orbital period is determined very precisely (Sterken & Breysacher 1997; Koenigsberger et al. 2014, see Table 1). The error in period will therefore be dominated by the coarse spacing of our model grid. With this in mind, we take the uncertainty on the orbital period to be 2 days, corresponding to the period spacing of models at termination. This is approximately 10% of the measured period, which is consistent with the order of magnitude of the uncertainties on the other observables.
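A minimal sketch of how such a snapshot likelihood could be evaluated, assuming a Gaussian form P(D|M) ∝ exp(−χ²/2) (the function and the ordering of the seven observables are our own choices):

import numpy as np

def log_likelihood(observed, predicted, sigma):
    """chi^2-style log-likelihood (up to a constant) for one model snapshot.

    observed, predicted, sigma: arrays over the seven observables,
    e.g. (P_orb, M_A, M_B, L_A, L_B, X_HA, X_HB).
    """
    resid = (np.asarray(observed) - np.asarray(predicted)) / np.asarray(sigma)
    return -0.5 * np.sum(resid ** 2)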
Prior probability -For P(M), we combine the individual prior probabilities of our free parameters: the initial mass, m 0 , initial orbital period, p orb,0 , and age, t.
We assume a Salpeter Initial Mass Function (IMF, Salpeter 1955), P(m 0 ) ∝ m 0 −2.35 . To get P(m 0 ), we integrate between the logarithmic halfway points between each consecutive initial mass in our model grid. We assume systems form following Öpik's Law in period ( Öpik 1924), P(p orb,0 ) ∝ 1/p orb,0 . To get P(p orb,0 ), we integrate between the linear halfway points between each consecutive initial period. We further assume a constant rate of star formation, such that P(t) ∝ ∆t. Thus, P(t) behaves such that each step in the simulation is given a weight equal to the length of its timestep.
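A sketch of how these prior weights can be evaluated on a discrete grid (our own illustration; the outermost grid points are simply truncated at the grid edges):

import numpy as np

def salpeter_weights(masses):
    """P(m0): integrate m^-2.35 between the logarithmic halfway points."""
    logm = np.log10(np.sort(masses))
    mid = 0.5 * (logm[1:] + logm[:-1])
    edges = 10.0 ** np.concatenate(([logm[0]], mid, [logm[-1]]))
    lo, hi = edges[:-1], edges[1:]
    return (lo ** -1.35 - hi ** -1.35) / 1.35   # integral of m^-2.35 dm

def opik_weights(periods):
    """P(p0): integrate 1/P (flat in log P) between the linear halfway points."""
    p = np.sort(periods)
    mid = 0.5 * (p[1:] + p[:-1])
    edges = np.concatenate(([p[0]], mid, [p[-1]]))
    return np.log(edges[1:] / edges[:-1])

def age_weights(timestep_lengths):
    """P(t): constant star-formation rate, so weight each snapshot by its timestep."""
    return np.asarray(timestep_lengths, dtype=float)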
Probability of the data -Lastly, P(D) is a constant for all models.We treat it as a normalization constant obtained by calculating the value from marginalizing across all model timesteps in all five of the model grids.
For each evolutionary track (specified by the initial period and initial component mass) and for each time step on that track (specified by the age), MESA gives us a prediction for the observed parameters we wish to compare to.We determine P(M snapshot |D), which refers to the probability given the data for a specific timestep.We then marginalize over each model run by summing over all P(M snapshot |D) in an evolutionary track to get P(M track |D), the total probability of the evolutionary track given the data.
Finally, we marginalize over each model grid by summing over all evolutionary tracks within the grid to calculate its total (relative) likelihood, L. We use this quantity to compute the Bayes factor (see Table 2) by dividing this quantity for each variation of the input physics by the quantity from the fiducial grid.
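Schematically, the marginalization and the Bayes factors can then be assembled as follows (a sketch with our own function names; the constant P(D) cancels in the ratios):

import numpy as np

def track_probability(snapshot_log_likes, snapshot_prior_weights):
    """P(M_track|D), up to the normalization P(D): sum of the prior-weighted
    snapshot probabilities along one evolutionary track."""
    return np.sum(np.exp(snapshot_log_likes) * snapshot_prior_weights)

def grid_likelihood(track_probabilities):
    """Relative likelihood L of one model grid: sum over its tracks."""
    return np.sum(track_probabilities)

def bayes_factor(grid_like, fiducial_grid_like):
    """Bayes factor of an input-physics variation relative to the fiducial grid."""
    return grid_like / fiducial_grid_like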
Result Obtained from the Fiducial Model Grid
Using the fit procedure explained in Section 5.1, we compare our models against the observed data of HD 5980.We compute P(M track |D) for each evolutionary track of a binary system with specific initial parameters.The results for our fiducial model grid are shown in Figure 2. The white star indicates the evolutionary track that has the highest probability given the data, or, simply put, our best fitting model in this grid.
We draw attention to the diagonal trend in Figure 2. The best fitting models follow a trend whereby more massive systems lead to better solutions at shorter orbital periods.This is due to the requirement to match the present-day orbital period of about 19 days and mass of about 60 M ⊙ .When starting with a shorter orbital period, more mass loss is needed to widen the orbit sufficiently to match that of the present-day; the initial mass must thus be correspondingly higher to result in the present-day mass.
To illustrate the behavior of the evolutionary tracks, we select five example tracks marked with gray crosses and a white star in Figure 2 and plot their periods, luminosities, and hydrogen abundances as a function of component mass in Figure 3. Here, we use the component mass as a tracer of the evolution, and the passage of time is tied to evolution from higher to lower masses.See also Appendix B for a comparison of evolutionary tracks between all variations of the input physics.
The upper panel shows how the orbit widens as the mass decreases. This is expected for mass loss in the form of fast stellar winds. To show that the wind mass-loss can explain the bulk of the orbital widening, we show the analytical solution for orbital widening in dashed lines (the so-called Jeans mode approximation, see, e.g., Soberman et al. 1997). Using conservation of angular momentum and Kepler's law, one can derive that the factor by which the orbital period P orb widens depends on the factor by which the total mass M tot changes as

P orb,final / P orb,initial = ( M tot,initial / M tot,final ) 2 .    (7)

We see that our MESA evolutionary tracks closely follow the analytical solution for Jeans mode mass loss. The remaining discrepancy can be explained by stellar spins and tides, which are not included in Eq. 7.
We find that the best fitting solutions are near the end of the evolutionary tracks. Since our initial models typically start with periods of 2-4 days, we need an increase in period by a factor between 5-10 to match the present-day orbital period of about 20 days. Eq. 7 explains why the initial masses of our best fitting models are thus in the range of 2 − 3× higher than the present-day observed mass, e.g. 120 − 180 M ⊙ .
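As a quick numerical check of this statement, applying Eq. 7 with illustrative present-day values:

# Jeans-mode widening: P_final / P_initial = (M_initial / M_final)**2   (Eq. 7)
P_initial, P_final = 3.0, 19.3     # days (illustrative initial and present-day periods)
M_final = 60.0                     # Msun, approximate present-day component mass
M_initial = M_final * (P_final / P_initial) ** 0.5
print(M_initial)                   # ~152 Msun, i.e. ~2.5x the present-day mass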
In the middle panel of Figure 3 we show how luminosity evolves as a function of star mass.For each track, we initially observe an increase in the luminosity as the system evolves.This is because of the increased helium content as the star burns hydrogen.Later in the evolution, we see a turnover where the luminosity begins to decrease.This is the result of high mass loss.The best fitting models tend to be somewhat over luminous, although they are still well within 2-σ of the observations.
Finally, in the bottom panel, we show the evolution of the surface hydrogen abundance.We note that the best fitting snapshots tend to predict lower surface hydrogen abundances (X ≲ 0.1) than observed (which are ≈ 0.25).We further discuss this discrepancy in Section 7.
Posteriors for the Model Parameters
In Figure 4 we show the posterior probability density functions for the initial parameters of the binary systems and the age.The orange-shaded regions correspond to the intervals bounded by the 5th and 95th percentiles.
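The medians and percentile bounds quoted below are read off this weighted posterior; a minimal sketch of such a weighted-percentile computation (our own helper, independent of the stellar models):

import numpy as np

def weighted_percentiles(values, weights, q=(5, 50, 95)):
    """Percentiles of a posterior represented by discrete samples
    (e.g. grid snapshots) carrying unequal posterior weights."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(np.asarray(q) / 100.0, cdf, v)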
The median (and 5th/95th percentiles) of the posterior distribution of the initial mass is 172 +36 −20 M ⊙ , as can be seen in the top panel.This is high compared to most known massive stars, but stars with such masses have been claimed to exist in the Large Magellanic Cloud.
In particular, such massive stars exist in the center of the massive star cluster R136 (e.g.Crowther et al. 2010Crowther et al. , 2016;;Brands et al. 2022) and also nearby (Bestenlehner et al. 2011;Renzo et al. 2019), although we note that HD 5980 has been observed on the outskirts of its host cluster, NGC 346.We stress that the stellar evolutionary models for such high mass stars are still very uncertain and this result should be taken with appropriate caution.
The median of the initial orbital period posterior distribution is 3.3 +0.4 −0.7 days (middle panel). Such short periods are typical for early type stars (e.g., Sana et al. 2012). This period is also reminiscent of the massive galactic binary WR20a, which contains two hydrogen-rich WR stars in an orbit with a period of 3.7 days (see Appendix A and Rauw et al. 2005; Bonanos et al. 2010).
For the age of the system we infer a median value of 2.8 +0.1 −0.1 Myr (bottom panel). This is consistent with the age derived for nearby components of NGC 346, which is approximately 3 ± 1 Myr (see Table 1).

Figure 4. Posterior probability distributions of our model parameters: initial component mass, initial orbital period, and the age. The median value is displayed above the curve, and the region bounded by the 5th and 95th percentiles is indicated by shading under the curve. In the upper and middle panels, the width of the bins corresponds to the spacing in our model grid. In the bottom panel, bins are chosen by eye to be wide enough such that discrete features from our grid do not show up.
Variations in Wind Mass-Loss and Chemical Mixing
In Figure 5 we show the probability maps for each input physics variation.We see the same diagonal band in each panel as for our fiducial grid, indicating a degeneracy between increasing initial mass and decreasing orbital period.The decreased winds panel appears to be the exception, but in this case the probabilities are too low to see the variation due to the chosen colorbar (and the trend still exists).
Most striking in this figure are the drastically increased probabilities found within the model grid for increased stellar winds, shown in the top panel.These high mass-loss evolutionary tracks are the best at recreating the observed parameters of HD 5980, especially in luminosity and surface hydrogen abundance (see Figure 6).For each of the explored input physics variations, we show the evolutionary behaviour in Appendix B.
In the bottom panel, where we show models with decreased mass loss, the probabilities are much lower than in the other panels and they fall below our chosen lower bound of the colorbar.We see that with decreased mass loss, it becomes very hard to explain the properties of HD 5980.As explained above, this is because significant widening is required to match the present-day orbital period of 19.3 days, while the reduced mass loss rates prevent such widening from occurring.These models cannot simultaneously match the masses and periods observed for HD 5980.Thus, the most likely evolutionary track in this grid is the one with the lowest initial mass and highest initial orbital period.
Comparing the variations in mixing, shown on the left and right panel of Fig. 5, we see that increased mixing generally increases P(M track |D).This effect is much less significant than the effect of the winds.
For most of the grids, the best fitting evolutionary track (marked with a white star symbol) can be found near the edge of the chemically homogeneous window.These models have the largest initial periods with the lowest allowed initial mass while still experiencing tidal CHE.Wider periods and lower masses are favored by our priors on the initial distributions of orbital periods and stellar masses.
For an overview of all the posterior distributions of the parameters from our analysis, see Figure 8 in Appendix C.
Bayes Factors Between Model Variations
To compare our different model variations, we compute their Bayes factor relative to the fiducial model grid. In Table 2, we provide an overview of the Bayes factors together with the median and 5th/95th percentiles for the initial model parameters. We see that the model grid with increased mixing is somewhat preferred over our fiducial model grid. However, the increased winds model grid is strongly favored over the fiducial.
The preference for this model can be largely attributed to the orbital widening that follows from wind mass loss. Since our models have the short initial orbital periods necessary for tidal locking (2-4 days), a large amount of mass loss is needed to reach the present-day observed period of ∼ 19 days (see also the top panel of Figure 3 and the corresponding discussion in Section 5.2). For this same reason, the model grid with reduced wind mass-loss is very strongly disfavored: these models fail to sufficiently widen the binary orbit to the observed value. As mentioned in Section 3, winds remain one of the main uncertainties in massive-star models. Our models imply that in order to explain HD 5980 with tidal CHE, the winds of these very massive stars need to be significantly larger than the optically thin winds of OB stars as described by Vink et al. (2001). In summary, we find that enhanced mixing and, especially, enhanced wind mass-loss help to explain the observed properties of HD 5980. It is worth noting that within the enhanced wind mass-loss grid, the stellar behavior tends to resemble that of wind-stripped stars rather than stars undergoing homogeneous evolution due to rotational mixing. While the initial homogeneous evolution is driven by tidal interactions, the stellar winds pick up as we switch to the Hamann et al. (1995) prescription when the surface He abundance increases to Y s > 0.7. Consequently, further homogeneous evolution occurs, at least in part, because stellar winds strip away the star's envelope; these outcomes rely heavily on the adopted wind mass-loss prescription.
FURTHER EVOLUTION AND FINAL FATE
So far, we have focused on uncovering the possible progenitor system of the present-day HD 5980 A&B binary.Equally interesting is its possible future evolution.In this section, we investigate the future fate of HD 5980, both under the enhanced wind mass-loss and fiducial assumptions.
Binary Black Hole Formation and Potential Pair Instability Pulsations
We use MESA to further evolve our best fitting models from the enhanced wind mass-loss grid (m 0 ≈ 188 M ⊙ , p 0 = 2.8 days) and the fiducial grid (m 0 ≈ 170 M ⊙ , p 0 = 3.6 days) until core collapse.For these calculations we apply the method of Marchant et al. (2019), Farmer et al. (2019), and Renzo et al. (2020).
After completing their advanced burning stages, the stars lose most of their mass via stellar winds, leaving behind massive CO cores with a thin helium layer on top.
For the preferred, best-fitting enhanced wind massloss model, we find the final CO core mass at C-ignition to be about 19 M ⊙ , for both components.A CO core with this mass is expected to experience core collapse directly into a black hole, without losing further mass due to pair production.Thus, the final fate of this model is a binary black hole with masses of about 19 M ⊙ .
For our fiducial model, we find the CO core masses at C-ignition to be about 37 M ⊙ , which places them just above the lower bound to experience a pair-pulsational supernova (PPISN, Fowler & Hoyle 1964;Barkat et al. 1967;Rakavy & Shaviv 1967).The CO-core loses approximately 0.1 M ⊙ in pulses before finally collapsing into a black hole.
This would result in a black hole binary with two black holes of about 36.5 M ⊙ (ignoring any further mass loss during the collapse to a black hole, e.g., Marchant et al. 2019). This implies a chirp mass of about 32 M ⊙ , which is slightly above the location of the observed high-mass feature in the mass distribution of merging binary black holes as detected by LIGO and Virgo (which is currently constrained to lie at an approximate chirp mass of 27.9 +1.9 −1.8 M ⊙ , Abbott et al. 2023). This feature has often been attributed to the pulsational pair-instability pile-up (e.g., Talbot & Thrane 2018; Farr et al. 2019; Stevenson et al. 2019), which would agree with our result presented here. However, recent updates of the tabulated 12 C(α, γ) 16 O reaction rates have pushed the lower boundary of pair-pulsational supernovae, and the corresponding expected location of a pile-up in the remnant mass distribution, to higher masses (Farmer et al. 2020; Mehta et al. 2022; Farag et al. 2022; Shen et al. 2023; Hendriks et al. 2023). More updated prescriptions would therefore not result in a PPISN, but in a direct core-collapse supernova. Given the low mass loss during the pulses, our final CO-core mass would still lead to a comparable black hole mass via core collapse, and nonetheless lie interestingly close to this observed feature.
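The quoted chirp mass follows from the standard definition, M_chirp = (m1 m2)^(3/5) / (m1 + m2)^(1/5); for the fiducial outcome:

# Chirp mass of an equal-mass binary black hole
m1 = m2 = 36.5                                   # Msun (fiducial-model outcome)
m_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
print(m_chirp)                                   # ~31.8 Msun, i.e. about 32 Msun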
In summary, we expect that HD 5980 A&B will result in CO-core masses ∼ 19 − 37 M ⊙ , producing a binary black hole with approximately those same masses.Our preferred model from the enhanced wind mass-loss grid falls within the regime for core collapse.When we instead follow the fiducial model, we find CO-core masses that are either on the edge, or just too low-mass, to experience PPISN; and which produce binary black holes with masses that coincide with the high-mass feature in the mass distribution of merging binary black holes (chirp mass ∼ 32 M ⊙ ).
The enhanced wind mass-loss and fiducial models widen from their "present-day" state due to wind mass loss in their later evolutionary stages.We find final orbital periods of about 220 days and 55 days, respectively.These systems are both too wide to merge within the age of the universe due to gravitational wave driven inspiraling (Peters 1964); however, we discuss a possible merger scenario in the following section.
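As an order-of-magnitude check of this statement, the Peters (1964) merger time for a circular orbit can be evaluated as follows (a sketch; eccentricity and any further orbital evolution are ignored):

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30        # SI units
day, yr = 86400.0, 3.156e7

def merger_time_yr(m1_msun, m2_msun, period_days):
    """Peters (1964) gravitational-wave merger time for a circular orbit."""
    m1, m2 = m1_msun * Msun, m2_msun * Msun
    mtot = m1 + m2
    a = (G * mtot * (period_days * day) ** 2 / (4.0 * np.pi ** 2)) ** (1.0 / 3.0)
    t = 5.0 * c ** 5 * a ** 4 / (256.0 * G ** 3 * m1 * m2 * mtot)
    return t / yr

print(merger_time_yr(19.0, 19.0, 220.0))   # ~8e14 yr
print(merger_time_yr(36.5, 36.5, 55.0))    # ~6e12 yr; both far exceed a Hubble time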
Speculation on the Role of the Third and Fourth Companions
HD 5980 A&B is thought to be part of a hierarchical 2+2 quadruple system.This may have interesting implications for the final fate of the binary black hole (see, e.g., Antonini et al. 2017;Fragione & Kocsis 2019;Vynatheya & Hamers 2022;Vigna-Gómez et al. 2022).In this section, we speculate on the ultimate fate of HD 5980 A&B.
Little is known about the nature of the triple and quadruple components and the parameters that characterize their orbits. Koenigsberger et al. (2014) quote an orbital period of 96.56 days and an eccentricity (e = 0.82) for the third star, star C, and its unseen fourth companion, which we refer to as component D. Star C is estimated to have a mass of about 30 M ⊙ (Shenar et al. 2016). The mass and nature of component D are not known. Neither is the mutual orbit of the A&B and C&D binaries known, and it may in fact be simply a chance alignment. However, given that massive stars often occur in higher order multiple systems (e.g., Sana et al. 2014; Moe & Di Stefano 2017; Offner et al. 2023), we will speculate on possible consequences of a quadruple nature in the remainder of this section.
If system A&B and system C&D indeed form a bound 2+2 quadruple system, we can estimate the minimum mutual orbital period (i.e., the outer orbital period) required for dynamical stability. Throughout the rest of this section, we will assume a plausible range of masses of 3-40 M ⊙ for the unseen companion D (spanning the parameter space from a lower-mass star to a high-mass compact object). With this, we use Eq. 90 from Mardling & Aarseth (2001) on the present-day parameters to estimate that the mutual orbital period has to be at least ∼ 500 days, as shorter periods would cause the system to enter the regime for chaotic interactions, likely leading to a (partial) break up of the quadruple.
Both inner binaries in a 2+2 quadruple system may experience secular evolution due to long-term gravitational torques, leading to eccentricity oscillations (see Naoz 2016 for a general review on secular evolution in triple systems, though quadruple systems are generally more complicated).The inner binary system with the longer period (containing Star C and its companion D, with a period of 96.56 days) would experience secular eccentricity oscillations on a shorter time scale than the inner binary with the shorter period (containing stars A and B, with a period of 19.3 days), since the secular time-scale generally scales as P 2 out /P in .This difference may explain why the C&D system has a higher eccentricity (e = 0.82), and thus possibly hint that the system is indeed presently undergoing secular evolution.We note also that secular evolution, in addition to eruptive mass loss episodes, may have contributed to the eccentricity of the A&B system.
Generally, hierarchical quadruples are susceptible to chaotic secular evolution, meaning the eccentricity evolution can be highly complex and much higher eccentricities can be reached for more orthogonal inclinations (e.g., Pejcha et al. 2013; Hamers & Lai 2017; Grishin et al. 2018). Whether or not this type of chaotic secular evolution occurs can be characterized by the ratio of the secular timescales of both orbit pairs in the 2+2 system. In Hamers & Lai (2017, Eq. 32), this ratio is defined as β, where β values close to 1 experience significant eccentricity boosts. Based on the current state of the HD 5980 system, reasonable values of β (here defined as the ratio of the secular time-scale of the A&B orbit to the outer orbit, to the secular time-scale of the C&D orbit to the outer orbit) are in the range 5 ≲ β ≲ 15. This means that the current system is probably not in the regime for chaotic secular interaction. However, if the A&B components evolve to 37 M ⊙ BHs in a 55-day orbit (as we suggest in Section 6.1 for our fiducial model), then the β ratios get much closer to unity, namely 0.9 ≲ β ≲ 5.1. If the components instead evolve to 19 M ⊙ BHs in a 220-day orbit (as suggested for our enhanced winds model), then the β ratios are even closer to unity, falling in the range 0.7 ≲ β ≲ 3.1. In the latter case, the inner period of 220 days may even be wide enough relative to the (unknown) outer orbital period that the system is brought into the chaotic regime.
This could suggest that eccentricity excitation in A&B's orbit, if not efficient right now, could become very significant through secular chaotic evolution by the time the BHs form, significantly accelerating their GW-driven merger. This argument depends, however, on short-range forces not quenching the secular evolution. The relativistic apsidal motion timescale in the BH A&B orbit is calculated to be ∼ 10 5 − 10 6 yr. The secular time-scale in orbit A&B is similar to or shorter than the apsidal-motion timescale for outer orbits shorter than approximately 50 yr, which is plausible given that the minimum current outer orbital period for stability is ∼ 500 days, or 1.4 yr. Therefore, relativistic quenching should be unimportant and not impede the coalescence of the BHs, unless the outer orbit is fairly wide.
Above, we assumed that the components in the C&D orbit do not merge before the A&B stars successfully evolve to BHs.It is possible, however, that the C&D components merge at an earlier time due to secularlydriven high eccentricity.In this scenario and depending on the outer orbit, the merger remnant could potentially fill its Roche-lobe around the A&B BH binary system as it evolves (see, e.g. de Vries et al. 2014;Glanz & Perets 2021;Hamers et al. 2022;Merle et al. 2022).This could in turn lead to either stable mass transfer or common envelope evolution; in the latter case, the BHs could merge during the process in a gas-rich environment, and potentially produce EM counterpart observations.Regardless of the uncertain final fate of the HD 5980 system itself, we conclude that systems like it can be very interesting as progenitors for gravitational wave events, which may even be prompt and associated with an EM counterpart.
MODEL LIMITATIONS AND ALTERNATIVE EVOLUTIONARY EXPLANATIONS
We have explored a possible explanation for the HD 5980 A&B system through a pathway of chemically homogeneous evolution starting from a tight binary system (tidal CHE).In general, the solution we found involves very high initial masses, for which the stellar evolution is highly uncertain.Moreover, although we can satisfactorily explain the present-day period, masses, and luminosity, we were unable to explain several other observed properties of HD 5980.
Firstly, our fiducial, reduced, and enhanced mixing models are in tension with the observed hydrogen abundances (see Figures 6 and 8). Our enhanced mass-loss models do reproduce the surface hydrogen abundance within two standard deviations for the primary and one standard deviation for the secondary.
Second, HD 5980 shows evidence for eruptive mass loss episodes (see, e.g., Bateson & Jones 1994;Barba et al. 1995;Koenigsberger et al. 1995;Sterken & Breysacher 1997).We do not account for this in our models, given that the nature of these eruptions is still unknown.We have also not addressed the eccentricity, though we note it is possible the effect of secular evolution may amplify any small eccentricity initially contained within, or later introduced to the system, e.g. through significant mass-eruptions.It is possible that these mass eruptions themselves induce eccentricity, similarly to the Blaauw kick from spherically symmetric supernova ejecta.However, we expect the latter effect to be small, assuming that these previous eruptions were similar to the 1994 event in which approximately 10 −3 M ⊙ was ejected in total (which is much smaller than is typically ejected in a supernova event).
We have not fit our models to observed constraints on the surface temperature due to significant uncertainties in its value. HD 5980 A&B are experiencing high rates of mass-loss, effectively embedding the stars within an optically thick wind. We believe the observationally derived value of T eff measures the temperature of the wind and is thus significantly lower than the temperature of the photosphere as computed by MESA. Indeed, when modelling HD 5980, Georgiev et al. (2011) define the photosphere at the sonic point radius R s , and find T s (R s ) = 60 kK. The effective temperature of their model is defined as the temperature at the radius where τ = 2/3, and this point lies above the photosphere in the optically thick wind region. They find T eff = 43 kK, which is consistent with the findings from, e.g., Shenar et al. (2016) of T eff ≈ 45 kK. Because the sonic point (R s = 19.6 R ⊙ ) lies closer to the hydrostatic stellar radius (e.g., Gräfener & Vink 2013), we expect T (R s ) from Georgiev et al. (2011) to be closer to the photospheric temperature from our MESA models. We do indeed find, as can be seen in Figure 9, that our best fitting models overpredict the observed T eff = 43 kK, but are consistent with the inferred sonic-point temperature from Georgiev et al. (2011). This is particularly encouraging given that we did not fit for the temperature.
The results obtained in this study furthermore rely crucially on two assumptions: 1) the mixing is dominated by meridional circulation, which depends strongly on the rotational velocity, and 2) synchronous rotation is imposed throughout the time over which the star is evolved.Synchronous rotation implies that the star rotates as a rigid body.Thus, there is no differential rotation and no shear instabilities can be triggered.Standard models predict that the core contracts as hydrogen is depleted and, due to the conservation of angular momentum, it speeds up.At the same time, as the outer layers expand, they slow down.Hence, classically, a significant differential rotation structure develops before RLOF occurs.This can lead to significantly more mixing than if synchronous rotation is imposed at all times (Song et al. 2013).Furthermore, true synchronous rotation can only occur in a circular orbit, and HD 5980 has significant eccentricity.Thus, its interior layers are continuously subjected to shearing motions which could lead to enhanced mixing.
Also important is the effect of tides, which are highly uncertain.Both arguments for less efficient tides (e.g., Yoon et al. 2010;Qin et al. 2018;Townsend & Sun 2023;Sciarini et al. 2024), and more-efficient tides (Witte & Savonije 1999;Ma & Fuller 2023) have been made.Since the stars in our CHE models are on such close orbits, slightly less efficient tides will not significantly impact their main-sequence evolution (since they would still synchronize before most of the mixing takes place).Hence, we do not expect the window of CHE to be significantly affected.Additionally, tides are important in deciding whether the stars remain locked as they evolve into Wolf-Rayet stars, and thus affect the spin of the final remnants (see e.g., Qin et al. 2018;Bavera et al. 2020).
Alternative Evolutionary Explanations
In this work, we have focused on the chemically homogeneous evolutionary pathway to HD 5980 due to tidal interactions from ZAMS.However, we recognize that in both our fiducial and enhanced mixing grids, the maximum of the posterior distribution lies at the boundary of the chemically homogeneous window.This could imply that the 'best' solution lies beyond our definition of CHE.Below, we discuss some of the plausible alternative formation pathways.
Post mass-transferring binary -HD 5980 A&B could have started as a wider binary system that experienced a mass transfer phase, causing the orbit to shrink to its current separation. As mentioned in Section 1, this is thought to be unlikely because both stars look relatively evolved (Koenigsberger et al. 2014). However, the unusual properties of this system (e.g., massive stars, high eccentricity, LBV-like outbursts, etc.) may open the door to uncertainties which allow for an unusual evolution in the standard pathway. Koenigsberger et al. (2021) suggested that the mixing efficiency can be enhanced in binary systems that rotate asynchronously; this would allow for relatively lower, and more believable, initial masses for the system. Additionally, the eccentricity of the system implies the presence of steep velocity gradients in the shearing flows around the time of periastron passage, which may lead to enhanced mass-loss rates around periastron passage (Moreno et al. 2011). It is not clear how such periastron passage events may impact the overall mass-loss properties, which is further complicated by the LBV-type outbursts; hence the evolutionary path is subject to uncertainty.
It is non-trivial to obtain converged models of high mass stars that evolve through mass transfer, but we note that including such peculiar effects into the postmass transfer scenario might constitute one of the most interesting cases for follow-up studies.
Accretion-induced CHE -HD 5980 A&B may have started as a wider binary that experienced a mass transfer phase, causing the orbit to shrink and spinning up the accreting star, which would then evolve chemically homogeneously. Recent work by Ghodla et al. (2023), building on the work done by, e.g., Wilson & Stothers (1975), Packet (1981), and Cantiello et al. (2007), proposed that the majority of chemically homogeneously evolving stars may be derived from this channel. This pathway could explain the relatively evolved appearance of both stars in HD 5980, assuming that we observe the system post-mass transfer. It would require a stable mass transfer phase from a relatively unequal mass ratio system (q ZAMS ≈ 0.5), and inefficient mass transfer (less than 10% of the donated mass accreted). Both these conditions are plausible, and we highlight this as a possible, but unexplored, pathway in the context of HD 5980.
Birth-spin CHE -This is the evolutionary pathway suggested by Koenigsberger et al. (2014) and Schootemeijer & Langer (2018) (see Section 2). This pathway can explain the relatively wide observed orbital period of HD 5980 without the need for heavy mass loss (as is required for the tidal CHE models explored in this work). However, the birth-spin distribution of massive stars is highly uncertain (see, e.g., de Mink et al. 2013, and references therein).
CONCLUSION & SUMMARY
We have explored the hypothesis that the massive binary star system HD 5980 A&B formed through tidally induced chemically homogeneous evolution starting from a (near-)contact binary system. We show that the tidal CHE scenario can explain the short orbital period, present-day masses, luminosities, and sonic-point temperatures. Under this interpretation, we find that the progenitors were at least 150 M ⊙ at zero age and orbiting each other with a period of about 3 days. Our increased wind mass-loss models are most favored to explain HD 5980, since a lot of mass loss from winds is needed to bring the binary from a tidally locked configuration at birth (P < 4 d) to the observed orbital period of ∼ 20 d. Our models imply that, if the tidal CHE scenario correctly describes HD 5980, the winds of very massive binary stars are different (much higher) than the commonly assumed optically thin winds of OB stars from Vink et al. (2001). Perhaps this may be explained, in part, by periods of enhanced mass loss corresponding to the observed LBV-like behavior of HD 5980 A. We further find only slightly improved fits if we consider enhanced internal mixing. This can be taken as mild support for more efficient mixing within HD 5980-like CHE stars than is assumed in Hastings et al. (2020), which would be in line with the enhanced mixing proposed by Koenigsberger et al. (2021).
We discuss the implications of the CHE interpretation. We find that HD 5980-like models have nearly completed their hydrogen-burning phase. The present-day masses of about 60 M ⊙ are thus roughly equal to the masses of their helium cores. Under fiducial assumptions, these models are candidates for pair(-pulsational) instability supernovae; they result in a binary black hole system with component masses of about 37 M ⊙ , which closely matches the observed feature in binary black hole masses at 35 M ⊙ (as reported by the LVK collaboration, Abbott et al. 2023). Under our preferred enhanced wind mass-loss assumptions, these models instead experience core collapse, producing a binary black hole system with masses of about 19 M ⊙ . Although the binary black holes produced via forward modelling would be too wide to inspiral due to gravitational wave emission alone, we note that HD 5980 has been claimed to be a member of a quadruple system; dynamical interaction with the system's third and fourth components may drive the binary black hole to a merger within a Hubble time. Depending on the behavior of the third and fourth components, this merger may even have an EM counterpart.
We stress that the evolution of such very massive stars is still poorly understood, and hence our models are sensitive to uncertain physical assumptions. Although the CHE interpretation is promising, we note that our current models do not reproduce the observed hydrogen and helium abundances well. We do not exclude alternative formation pathways for HD 5980. Nonetheless, we find this system is a likely binary black hole progenitor, which provides important clues for understanding the evolution of massive binary stars and the formation of possible gravitational-wave sources.
Software: This research made use of MESA: Modules for Experiments in Stellar Astrophysics, r12778 (http://mesa.sourceforge.net, Paxton et al. 2010, 2011, 2013, 2015, 2018, 2019).

We are very grateful to Adrian Hamers for discussions on the consequences of the possible quadruple nature of the system. We also thank Ruggero Valli for a helpful discussion on tides, and Tomer Shenar, Pavan Vynatheya, and Jay Gallagher for their valuable comments and insights. Lastly, we would like to acknowledge the anonymous reviewer, whose comments helped to substantially improve the text.

A. FURTHER CANDIDATES FOR CHE

VFTS 352 -A more recent (2019) detailed atmospheric analysis of VFTS 352 concluded that even CHE models cannot provide a good fit for this system. Although a tidal CHE model provides an explanation for its location in the Hertzsprung-Russell diagram, it simultaneously over-predicts the N and He enrichment with respect to observations.

SMC AB 8 -is an SMC binary containing a WR star (19 +3 −8 M ⊙ ) and a companion O-type star (61 +14 −25 M ⊙ ) on a 16.6 day period. Shenar et al. (2016) performed spectral analysis of all multiple WR systems in the SMC known at the time. They compare their observed properties to BPASS (Eldridge et al. 2008) models assuming either no mixing or CHE, and found that AB 8 was consistent with the (birth-spin) CHE models. They conclude that CHE can explain its evolutionary state, although it is not a necessary assumption.

R 145 -is an eccentric (e = 0.79), wide (P orb ≈ 160 days) WR binary in the Large Magellanic Cloud (LMC) with masses 53 +40 −20 M ⊙ and 54 +40 −20 M ⊙ , studied in Shenar et al. (2017). They argue that non-homogeneous evolution would have led to Roche-lobe overflow and mass transfer resulting in mass ratios inconsistent with the q ∼ 1 they infer. They conclude that CHE induced by birth spin best describes the system.

R 144 -is a WR binary in the LMC very similar to R 145, but with a shorter orbital period (P orb ≈ 74 days) and less extreme eccentricity (e ≈ 0.5); it is also more massive, having component masses of 74 ± 4 M ⊙ and 69 ± 4 M ⊙ (Shenar et al. 2021). Shenar et al. (2021) compare the components to evolutionary tracks from Brott et al. (2011) and Köhler et al. (2015), and find that high initial rotation rates (birth-spin CHE) well reproduce the observables except for the present-day rotational velocities.
WR20a -is a very massive Milky Way WR binary, with component masses ∼ 80 M ⊙ (primary 82.7 ± 5.5 M ⊙ and secondary 81.9 ± 5.5 M ⊙ ) and an orbital period of 3.686 ± 0.01 days (Bonanos et al. 2004; Rauw et al. 2005; Bonanos et al. 2010). Rauw et al. (2005) note that the spectrum of the system shows enhanced nitrogen and depleted carbon abundances at the stars' surfaces. They also estimate the rotational velocity to be ∼ 256 km s −1 from the current orbital period (assuming synchronous rotation). This system has also been considered as a candidate for CHE (de Mink et al. 2008, 2009b).
M33 X-7 -is a massive O-star + BH binary (with O star 70.0 ± 6.9 M ⊙ and black hole 15.65 ± 1.45 M ⊙ Orosz et al. 2007), on an orbital period of 3.45 days, yet the O star remains within its Roche-lobe.de Mink et al. (2009a) suggested CHE as an explanation, as this would allow the system to avoid Roche-lobe overflow (at least until the end of core helium burning).Alternatively, as proposed by Valsecchi et al. (2010), this system may result from a phase of Roche lobe overflow where the main-sequence primary transfers part of its envelope to the secondary and loses the rest in a wind.
B. COMPARISON OF EVOLUTIONARY TRACKS AMONG THE DIFFERENT INPUT PHYSICS VARIATIONS
In Figure 6 we provide example evolutionary tracks for all five input physics variations for more insight into their evolutionary differences.To allow for a direct comparison, we fix the initial parameters to initial component masses of 180 M ⊙ and an initial orbital period of 2.8 days.Note that the tracks shown here are thus not the best fitting models of their respective input physics variations grids.However, for this choice we are able to compare chemically homogeneously evolving models for all input physics variations.
As can be seen in the upper panel, all tracks follow the same relation for the orbital widening as the system loses mass. However, the final orbital periods differ. The model with reduced mass loss fails to widen enough to explain the present-day orbital period of HD 5980. This also explains the lack of good fits in the grid with reduced winds (see also Figure 5). The other models (reduced mixing, fiducial, and increased mixing) all widen to approximately HD 5980's present-day orbital period just before they terminate (at the beginning of core He burning). The increased mass-loss model widens far beyond the observed period.
In the middle panel we see the evolution of the stars' luminosities. The models with reduced and enhanced mass loss are significantly brighter and dimmer than the fiducial model, respectively. The models with different mixing variations are very similar to the fiducial model.
In the bottom panel we see the evolution of the surface hydrogen abundance. The models with increased/reduced mass loss show a shallower/steeper decrease of the surface hydrogen abundance as the star loses mass, respectively. This can be understood as being the result of two mechanisms that are responsible for enriching the surface. The first mechanism is stripping by mass loss, which exposes deeper, chemically processed layers of the star. This process dominates for the model with increased mass loss. The second mechanism is rotational mixing. This process dominates for the reduced mass loss model, which can be seen from the significant reduction of the hydrogen surface abundance down to 0.4 while losing only 10 M⊙.
The reason that mixing is more efficient in the reduced mass loss model is its rotation, which is a consequence of the difference in orbital evolution. Reduced mass loss reduces the orbital widening, so tides force the star to co-rotate at a shorter orbital period. The stars thus spend a larger fraction of their lifetime spinning at a significant fraction of their critical velocity (see Figure 7).
C. POSTERIOR DISTRIBUTIONS FOR THE DIFFERENT INPUT PHYSICS GRIDS
In Figure 8, we show marginalized posterior probability distributions for all five variations of the input physics for each of the observed parameters. For most of these parameters, the peaks of the distributions for all input physics variations fall within 1-σ of the observed values. The exception is the surface hydrogen mass fraction, which can only be explained within 2-σ: only the reduced and enhanced wind mass-loss variations are consistent with the surface abundances of H1 in HD 5980 A and B to within 1- or 2-σ.
Comparing among the input physics variations, the enhanced mass loss distribution is always the best or second-best fit to the observed parameters, and thus the best overall. As indicated by the Bayes factors in Section 5.3.1, enhanced wind mass-loss (and, thus, a combination of tidal CHE and wind stripping) is preferred in explaining the present-day state of HD 5980 A&B.
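As a rough illustration of how such a grid-based comparison could be carried out in practice, the following minimal Python sketch forms a total likelihood per model grid and a Bayes-factor-like ratio; all array names, shapes, and numbers are invented placeholders, not the paper's data or code.

```python
import numpy as np

# Hypothetical per-snapshot likelihoods L[i, j, k] for a grid of models
# (initial mass index i, initial period index j) and snapshots k along each track.
rng = np.random.default_rng(0)
L_fiducial = rng.random((20, 15, 100))   # placeholder numbers
L_variant  = rng.random((20, 15, 100))

def total_likelihood(L_snapshots):
    """Sum snapshot likelihoods along each track, then sum over the grid
    (i.e. marginalise over initial mass, initial period and age with flat priors)."""
    per_track = L_snapshots.sum(axis=-1)      # proportional to P(M_track | D)
    return per_track.sum()

log_ratio = np.log10(total_likelihood(L_variant) / total_likelihood(L_fiducial))
print(f"log10 [ L_variant / L_fiducial ] = {log_ratio:+.2f}")
```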
In Figure 9, we show the marginalized posterior probability distributions for the effective temperature for all five variations of the input physics. We include the observed surface temperatures for both stars quoted in Shenar et al. (2016) as error bars at the top of the plot. The models' effective temperatures do not match the reported observed surface temperatures well: the distributions in Figure 9 all fall above the reported temperature of ∼ 45 kK. However, these temperatures are consistent with the temperature at the sonic point, T_s, from Georgiev et al. (2011) (Figure 16), shown as a blue, dashed error bar at the top of the plot. As discussed in Section 7, we believe T_s is much closer to the temperature reported by MESA; it is encouraging that T_s is consistent with our models even without fitting for it in our analysis.
Figure 2. The probability of each model run/track in our fiducial model grid given the data as a function of the initial period and component mass, P(M_track|D). The most probable model is marked with a white star. Gray crosses mark the example models used in Figure 3 alongside the most probable model. Grey squares are models which do not evolve chemically homogeneously. White squares show models which failed to converge.
Figure 3. Evolutionary tracks of the orbital period, luminosity, and surface hydrogen mass fraction as a function of the decreasing mass. Models start at the star symbol and evolve in the direction indicated. The color indicates P(M_snapshot|D), which is the probability of a given snapshot in time along an evolutionary track given the data. Green dashed-dotted lines in the top panel show the analytical solution as discussed in the text. Observed data for HD 5980 A (gray) and B (black) are plotted with their corresponding error bars.
Figure 4. Posterior probability distributions of our model parameters: initial component mass, initial orbital period, and age. The median value is displayed above each curve, and the region bounded by the 5th and 95th percentiles is indicated by shading under the curve. In the upper and middle panels, the width of the bins corresponds to the spacing in our model grid. In the bottom panel, bins are chosen by eye to be wide enough that discrete features from our grid do not show up.
Figure 5. Probability maps showing P(M_track|D) for each of our input physics variations. Panels showing the grids of models for each of the variations of the input physics are laid out as in Figure 1. In white, we overplot the fiducial boundary for chemically homogeneous evolution for comparison among the input physics variations. The best fitting model in each grid is marked with a white star. Gray crosses show the models which are used for comparison between the input physics variations in Appendix B.
Figure 6. Evolutionary tracks for all variations of the input physics, using models with initial mass 180 M⊙ and period 2.8 days. The fiducial model is shown as a solid orange line. The white star marks the beginning of the tracks; the stars evolve towards the right. Reduced and increased mixing are shown as pink and dark purple lines, respectively. Reduced and increased wind mass-loss are shown in teal and dark green. In the upper panel, a white circle shows the end point of the evolutionary track. Error bars show the observational values for HD 5980's primary (gray) and secondary (black).
Figure 7. The evolution of the rotation rate, expressed as a fraction of the critical velocity, as a function of age for each of the example models shown in Fig. 6.
Figure 8. Posterior probability distributions for all variations of the input physics on the observed parameters of stellar mass, luminosity, orbital period, and surface hydrogen mass fraction. Variations of the input physics are colored as in Figure 6, and error bars representing the observed values for HD 5980 A and B are shown at the top of each panel. Note that we use an error for the orbital period which is different from the observed standard deviation (see Section 5.1). Additionally, in the panel for orbital period, the peak of the reduced wind distribution falls far below the lower bound of the plot; it is represented by an arrow pointing leftward.
Table 2. The total likelihood L for each model grid given the data is quoted relative to the total likelihood for our fiducial model, L_fid. Positive values indicate input physics variations that are preferred over the fiducial assumptions; this shows that the increased mixing and increased winds variations are favored over our fiducial input physics assumptions. We also list the corresponding median and 5th/95th percentiles for the initial component mass, m0, the initial orbital period, p_orb,0, and the age of the best fitting model, t, for HD 5980.
This project was funded in part by the Netherlands Organization for Scientific Research (NWO) as part of the Vidi research program BinWaves with project number 639.042.728 and by the National Science Foundation under Grant No. 2009131. GK acknowledges funding support from UNAM DGAPA/PAPIIT grant IN105723. We further acknowledge the Black Hole Initiative, funded by the John Templeton Foundation and the Gordon and Betty Moore Foundation.
APPENDIX
A. FURTHER BINARY CANDIDATES FOR CHE
HD 5980 is not the only binary system that has been discussed in the context of CHE. Below, we briefly summarise a (non-exhaustive) list of further candidates.
VFTS 352 - is a massive double-lined O-type spectroscopic binary system in the LMC studied in detail by Almeida et al. (2015), who suggested it as a candidate for CHE. VFTS 352 is one of the strongest candidates for CHE to date, together with HD 5980 (see Section 2), because of its short orbital period (1.124 days, Almeida et al. 2017), massive components (28.63 ± 0.30 M⊙ and 28.85 ± 0.30 M⊙, Almeida et al. 2015), and rapid rotation (about 290 km s−1, Ramírez-Agudelo et al. 2015). Nonetheless, Abdul-Masih et al. (2019) more recently performed a detailed atmospheric analysis of VFTS 352 and concluded that even CHE models cannot provide a good fit for this system. Although they find a tidal CHE model that provides an explanation for its location in the Hertzsprung-Russell diagram, they simultaneously over-predict the N and He enrichment with respect to observations. | 16,724.4 | 2024-02-19T00:00:00.000 | [
"Physics"
] |
Investigation of Monte Carlo uncertainties on Higgs boson searches using jet substructure
We present an investigation of the dependence of searches for boosted Higgs bosons using jet substructure on the perturbative and non-perturbative parameters of the Herwig++ Monte Carlo event generator. Values are presented for a new tune of the parameters of the event generator, together with an estimate of the uncertainties based on varying the parameters around the best-fit values.
Introduction
Monte Carlo simulations are an essential tool in the analysis of modern collider experiments. These event generators contain a large number of both perturbative and non-perturbative parameters which are tuned to a wide range of experimental data. While significant effort has been devoted to the tuning of the parameters to produce a best fit, there has been much less effort devoted to understanding the uncertainties in these results. Historically, a best-fit result, or at best a small number of tunes, is produced and used to predict observables, making it difficult to assess the uncertainty on any prediction. The "Perugia" tunes [1,2] have addressed this by producing a range of tunes by varying specific parameters in the PYTHIA [3] event generator to produce an uncertainty.
Here we make use of the Professor Monte Carlo tuning system [4] to give an assessment of the uncertainty by varying all the parameters simultaneously about the best-fit values by diagonalizing the error matrix. This then allows us to systematically estimate the uncertainty on any Monte Carlo prediction from the tuning of the event generator. We will illustrate this by considering the uncertainty on jet substructure searches for the Higgs boson at the LHC.
As the LHC takes increasing amounts of data the discovery of the Higgs boson is likely in the near future. Once we have discovered the Higgs boson, most likely in the diphoton channel, it will be vital to explore other channels and determine if the properties of the observed Higgs boson are consistent with the Standard Model. For many years it was believed that it would be difficult, if not impossible, to observe the dominant h 0 → bb decay mode of a light Higgs boson. However, in recent years the use of jet substructure [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] offers the possibility of observing this mode. Jet substructure for h 0 → bb as a Higgs boson search channel was first studied in Ref. [5], building on previous work of a heavy Higgs boson decaying to W ± bosons [16], high-energy W W scattering [21] and SUSY decay chains [22], and subsequently reexamined in Refs. [8,15]. Recent studies at the LHC [23][24][25] have also shown this approach to be promising.
The study in Ref. [5] was carried out using the (FORTRAN) HERWIG 6.510 event generator [26,27] together with the simulation of the underlying event using JIMMY 4.31 [28]. In order to allow the inclusion of new theoretical developments and improvements in non-perturbative modelling a new simulation based on the same physics philosophy, Herwig++, currently version 2.6 [29,30], is now preferred for the simulation of hadron-hadron collisions.
Herwig++ includes both an improved theoretical description of perturbative QCD radiation, in particular for radiation from heavy quarks, such as bottom, together with improved non-perturbative modeling, especially of multiple parton-parton scattering and the underlying event. In FORTRAN HERWIG a crude implementation of the dead-cone effect [31] meant that there was no radiation from heavy quarks for evolution scales below the quark mass, rather than a smooth suppression of soft collinear radiation. In Herwig++ an improved choice of evolution variable [32] allows evolution down to zero transverse momentum for radiation from heavy particles and reproduces the correct soft limit. There have also been significant developments of the multiple-parton scattering model of the underlying event [33,34], including colour reconnections [35] and tuning to LHC data [36].
The background to jet substructure searches for the Higgs boson comes from QCD jets which mimic the decay of a boosted heavy particle. Although Herwig++ has performed well in some early studies of jet substructure [25,37,38], it is important that we understand the uncertainties in our modelling of the background jets which lie at the tail of the jet mass distribution.
In addition we improve the simulation of Higgs boson decay by implementing the next-to-leading-order (NLO) corrections to Higgs boson decay to heavy quarks in the POWHEG [39,40] formalism.
In the next section we present our approach for the tuning of the parameters which affect QCD radiation and hadronization in Herwig++, together with the results of our new tune. We then recap the key features of the Butterworth, Davison, Rubin and Salam (BDRS) jet substructure technique of Ref. [5]. This is followed by our results using both the leading- and next-to-leading-order matrix elements in Herwig++, with the implementation of the next-to-leading-order Higgs boson decay, and our estimate of the uncertainties.
Tuning Herwig++
Any jet substructure analysis is sensitive to changes in the simulation of initial- and final-state radiation, and hadronization. In particular the non-perturbative nature of the phenomenological hadronization model means there are a number of parameters which are tuned to experimental results. Herwig++ uses an improved angular-ordered parton shower algorithm [29,32] to describe perturbative QCD radiation together with a cluster hadronization model [29,41]. The Herwig++ cluster model is based on the concept of preconfinement [42]. At the end of the parton-shower evolution all gluons are non-perturbatively split into quark-antiquark pairs. All the partons can then be formed into colour-singlet clusters which are assumed to be hadron precursors and decay according to phase space into the observed hadrons. There is a small fraction of heavy clusters for which this is not a reasonable approximation and which are therefore first fissioned into lighter clusters. The main advantage of this model, when coupled with the angular-ordered parton shower, is that it has fewer parameters than the string model as implemented in the PYTHIA [3] event generator yet still gives a reasonable description of collider observables [43].
To tune Herwig++, and investigate the dependency of observables on the shower and hadronization parameters, the Professor Monte Carlo tuning system [4] was used. Professor uses the Rivet analysis framework [58] and a number of simulated event samples, with different Monte Carlo parameters, to parameterise the dependence of each observable used in the tuning on the parameters of the Monte Carlo event generator. A heuristic chi-squared function is constructed,
χ²(p) = Σ_O w_O Σ_{b∈O} (f_b(p) − R_b)² / Δ_b²,
where p is the set of parameters being tuned, O are the observables used, each with weight w_O, and b are the different bins in each observable distribution with associated experimental measurement R_b, error Δ_b and Monte Carlo prediction f_b(p). Weighting of those observables for which a good description of the experimental result is important is used in most cases. The parameterisation of the event generator response, f(p), is then used to minimize the χ² and find the optimum parameter values. There are ten main free parameters which affect the shower and hadronization in Herwig++. These are shown in Table 1 along with their default values and allowed ranges.
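To make the parameterise-then-minimise idea concrete, the following minimal Python sketch fits a polynomial surrogate per bin to a set of sampled generator runs and minimises the weighted χ². This is a toy illustration only, not the Professor code; every dataset, parameter name, and number is an invented placeholder.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_params, n_bins, n_samples = 3, 10, 200

def poly_features(p, degree=3):
    """Monomials of the parameter vector p up to the given degree."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(p)), d):
            feats.append(np.prod([p[i] for i in idx]))
    return np.asarray(feats)

# "Anchor runs": random parameter points and a fake generator response at each.
points = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
def truth(p):
    return 1.0 + 0.5 * p[0] - 0.3 * p[1] ** 2 + 0.1 * p[0] * p[2]
response = np.array([[truth(p) + 0.1 * b + rng.normal(0, 0.02)
                      for b in range(n_bins)] for p in points])

# Fit the surrogate f_b(p) bin by bin (least squares on the monomial basis).
X = np.array([poly_features(p) for p in points])
coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

R = np.array([1.0 + 0.1 * b for b in range(n_bins)])   # pseudo-data
err = np.full(n_bins, 0.05)                            # pseudo-errors
w = np.ones(n_bins)                                    # observable weights

def chi2(p):
    f = poly_features(p) @ coeffs
    return np.sum(w * (f - R) ** 2 / err ** 2)

best = minimize(chi2, x0=np.zeros(n_params), method="Nelder-Mead")
print("best-fit parameters:", np.round(best.x, 3), " chi2 =", round(best.fun, 2))
```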
The gluon mass, GluonMass, is required to allow the non-perturbative decay of gluons into qq pairs and controls the energy release in this process. PSplitLight, ClPowLight and ClMaxLight control the mass distributions of the clusters produced during the fission of heavy clusters. ClSmrLight controls the smearing of the direction of hadrons containing a (anti)quark from the perturbative evolution about the direction of the (anti)quark. Al-phaMZ is strong coupling at the Z 0 boson mass and controls the amount of QCD radiation in the parton shower, while Qmin controls the infrared behaviour of the strong coupling. pTmin is the minimum allowed transverse momentum in the parton shower and controls the amount of radiation and the scale at which the perturbative evolution terminates. PwtDIquark and PwtSquark are the probabilities of selecting a diquark-antidiquark or ss quark pair from the vacuum during cluster splitting, and affect the production of baryons and strange hadrons respectively.
Previous experience of tuning Herwig++ found the dependence on Qmin, GluonMass, ClSmrLight and ClPowLight to be flat, and so these parameters were fixed at their default values [29].
To determine the allowed variation of these parameters Professor was used to tune the variables in Table 1 to the observables and weights found in Appendix A in Tables 5, 6, 7 and 8. The dependence of χ 2 on the various parameters, about the minimum χ 2 value, is then diagonalized.
The variation of the parameters along the eigenvectors in parameter space obtained, corresponding to a certain change, Δχ², in χ², can then be used to predict the uncertainty in the Monte Carlo predictions for specific observables. In theory, if the χ² measure for the parameterised generator response is actually distributed as a true χ², then a change in the goodness of fit of one will correspond to a one-sigma deviation from the minimum, i.e. the best tune. In practice, even the best tune does not fit the data ideally, nor is the χ² measure actually distributed according to a true χ² distribution. This means that one cannot just use Professor to vary the parameters about the minimum to a given deviation in the χ² measure without using some subjective opinion on the quality of the results.
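A minimal sketch of the eigenvector-variation idea is given below: diagonalise the χ² behaviour around the minimum and step along each eigen-direction until χ² has increased by a chosen amount. The quadratic χ² surface and all numbers are invented stand-ins, not the actual Professor interpolation.

```python
import numpy as np

H_true = np.array([[4.0, 1.0, 0.0],
                   [1.0, 3.0, 0.5],
                   [0.0, 0.5, 2.0]])
p_min = np.array([0.2, -0.1, 0.4])

def chi2(p):
    d = np.asarray(p) - p_min
    return float(d @ H_true @ d) + 100.0

def hessian_fd(f, p0, h=1e-3):
    """Finite-difference Hessian of f at p0."""
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p0.copy(); pp[i] += h; pp[j] += h
            pm = p0.copy(); pm[i] += h; pm[j] -= h
            mp = p0.copy(); mp[i] -= h; mp[j] += h
            mm = p0.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4.0 * h * h)
    return H

H = hessian_fd(chi2, p_min.astype(float))
eigvals, eigvecs = np.linalg.eigh(H)

target = 1.0   # allowed increase in chi^2; the paper's eventual choice instead
               # corresponds to Delta chi^2 / N_df = 5, i.e. a tolerance T ~ 90
error_tunes = []
for lam, v in zip(eigvals, eigvecs.T):
    step = np.sqrt(2.0 * target / lam)   # chi^2 rises by ~ 0.5 * lam * step^2
    error_tunes.append(p_min + step * v)
    error_tunes.append(p_min - step * v)

for p in error_tunes:
    print(np.round(p, 3), "-> Delta chi^2 =", round(chi2(p) - chi2(p_min), 3))
```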
We simulated one thousand event samples with different randomly selected values of the parameters we were tuning. Six hundred of these were used to interpolate the generator response. All the event samples were used to select two hundred samples randomly, two hundred times, in order to assess the systematics of the interpolation and tuning procedure. A cubic interpolation of the generator response was used as this has been shown to give a good description of the Monte Carlo behaviour in the region of best generator response [4]. The parameters were varied between the values shown in Table 1. The quality of the interpolation was checked by comparing the χ²/N_df, where N_df is the number of observable bins used in the tune, in the allowed parameter range on a parameter-by-parameter basis for the observables, by comparing the interpolation response with the actual generator response at the simulated parameter values. Bad regions were removed and the interpolation repeated, leaving a volume in the 5-dimensional parameter space where the interpolation worked well. Figure 1 shows the χ²/N_df distributions for two hundred tunes based on two hundred randomly selected event sample points for the cubic interpolation. The spread of these values gives an idea of the systematics of the tuning process, showing that we have obtained a good fit for our parameterisation of the generator response.
The line indicates the tune which is based on a cubic interpolation from six hundred event samples. It is this interpolation which was used to vary χ 2 about the minimum to assess the uncertainty on the measured distributions. During the tune it was discovered that PSplitLight was relatively insensitive to the observables used in the tune. As such, PSplitLight was fixed at the default value of 1.20 during the tune and subsequent χ 2 variation.
Professor was used to vary χ² about the minimum value, as described above, determining the allowed range for the parameters. As five parameters were eventually varied, there are 10 new sample points: one "+" and one "−" variation along each eigenvector direction in parameter space.
We follow the example set by the parton distribution function (PDF) fitting groups in determining how much to allow χ² to vary. Our situation is different to that of the PDF fitters in that we are using leading-order calculations with leading-log accuracy in the parton shower, whereas they fit to next-to-leading-order calculations, which gives better overall agreement with the observables used. Generally, PDF groups fit to fully inclusive variables, whereas we have fitted to more exclusive processes, which are by nature more model dependent, in particular on hadronization.
In Refs. [45,46] these issues are explored in terms of PDFs and the allowed variation is related to a tolerance parameter T, with Δχ² = T². A tolerance parameter of T ≈ 10 to 15 is generally chosen by the PDF groups, where they are fitting to around 1300 data points. As our fit is likely to have a higher χ² than their fits, due to the aforementioned reasons, and as we fit to a greater number of parameters, we will have a higher tolerance parameter. In our fit, we have 1665 degrees of freedom and we examined various changes in χ², whilst considering the effects of the precision data from LEP. A variation of Δχ²/N_df = 5, equivalent to T ≈ 90, seems, subjectively, to keep the LEP data within reasonable limits, while a variation of Δχ²/N_df = 10, i.e. T ≈ 130, is too large. Anything less than T ≈ 40 produced very little variation and was therefore not considered. The resulting central tune and error tune parameters are given in Tables 2 and 3, respectively. The Professor tune was then compared with the internal Herwig++ tuning procedure [29], as not all analyses that are in the internal Herwig++ tuning system are available in Rivet and subsequently accessible to Professor. Looking at Fig. 4 it is found that PSplitLight at a value of 0.90 is favoured and gives a significant reduction in the χ²/N_df. It was therefore decided to use the values obtained from the minimisation procedure, but with the value of 0.90 for PSplitLight, to maintain a good overall description of the data. These error tune values can now be used to predict the uncertainty from the tuning of the shower parameters on any observable. In the next section we present an example of using these tunes to estimate the uncertainty on the predictions of a jet substructure search for the Higgs boson.
Jet substructure boosted Higgs
The analysis of Ref. [5] uses a number of different channels for the production of the Higgs boson decaying to bb in association with an electroweak gauge boson, i.e. the production of h0Z0 and h0W±. Reference [5] uses the fact that the Higgs boson predominantly decays to bb in a jet substructure analysis to extract the signal of a boosted Higgs boson above the various backgrounds. Their study found that the Cambridge-Aachen algorithm [47,48] with radius parameter R = 1.2 gave the best results when combined with their jet substructure technique. For our study, we used the Cambridge-Aachen algorithm as implemented in the FastJet package [49]. Three different event selection criteria are used: (a) a lepton pair with 80 GeV < m_ℓ+ℓ− < 100 GeV and p_T > p_T^min to select events with Z0 → ℓ+ℓ−; (b) missing transverse momentum /p_T > p_T^min to select events with Z0 → νν; (c) missing transverse momentum /p_T > 30 GeV and a lepton with p_T > 30 GeV, consistent with the presence of a W boson with p_T > p_T^min, to select events with W → ℓν; where p_T^min = 200 GeV. In addition the presence of a hard jet with p_Tj > p_T^min and with substructure is required. The substructure analysis of Ref. [5] proceeds with the hard jet j, which has radius R_j and mass m_j, in a mass-drop algorithm:
1. the two subjets which were merged to form the jet, ordered such that the mass of the first subjet m_j1 is greater than that of the second subjet m_j2, are obtained;
2. if m_j1 < μ m_j and y = min(p²_Tj1, p²_Tj2) ΔR²_{j1,j2} / m²_j > y_cut, where ΔR²_{j1,j2} = (y_j1 − y_j2)² + (φ_j1 − φ_j2)², and p_Tj1,2, y_j1,2, φ_j1,2 are the transverse momenta, rapidities and azimuthal angles of subjets 1 and 2, respectively, then j is in the heavy particle region. If the jet is not in the heavy particle region the procedure is repeated using the first subjet.
This algorithm requires that j_1,2 are b-tagged and takes μ = 0.67 and y_cut = 0.09. A uniform b-tagging efficiency of 60% was used with a uniform mistagging probability of 2%. The heavy jet selected by this procedure is considered to be the Higgs boson candidate jet. Finally, there is a filtering procedure applied to the Higgs boson candidate jet, j. The jet is resolved on a finer scale by setting a new radius R_filt = min(0.3, R_bb/2), where, from the previous mass-drop condition, R_bb = ΔR_{j1,j2}. The three hardest subjets of this filtering process are taken to be the Higgs boson decay products, and the two hardest of these are required to be b-tagged. (A schematic sketch of the mass-drop step is given after this paragraph.)
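The following self-contained Python sketch illustrates the mass-drop step of the procedure just described. It is not the FastJet implementation used in the paper: the jet representation and all numbers in the example are invented for illustration.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Jet:
    pt: float
    rap: float
    phi: float
    m: float
    parent1: Optional["Jet"] = None   # one of the two subjets merged into this jet
    parent2: Optional["Jet"] = None

def delta_r2(a: Jet, b: Jet) -> float:
    dphi = abs(a.phi - b.phi)
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return (a.rap - b.rap) ** 2 + dphi ** 2

def mass_drop(jet: Jet, mu: float = 0.67, ycut: float = 0.09) -> Optional[Jet]:
    """Return the (sub)jet at which the mass-drop and asymmetry criteria are met;
    otherwise recurse into the heavier subjet. None if no structure is found."""
    while jet.parent1 is not None and jet.parent2 is not None:
        j1, j2 = jet.parent1, jet.parent2
        if j2.m > j1.m:                 # order so that m_j1 >= m_j2
            j1, j2 = j2, j1
        y = min(j1.pt, j2.pt) ** 2 * delta_r2(j1, j2) / jet.m ** 2
        if j1.m < mu * jet.m and y > ycut:
            return jet                  # significant mass drop, not too asymmetric
        jet = j1                        # otherwise discard the softer subjet and repeat
    return None

# Made-up example: a fat jet built from two b-like subjets.
b1 = Jet(pt=120.0, rap=0.1, phi=0.0, m=5.0)
b2 = Jet(pt=100.0, rap=0.5, phi=0.6, m=5.0)
fat = Jet(pt=215.0, rap=0.3, phi=0.3, m=115.0, parent1=b1, parent2=b2)
print(mass_drop(fat) is fat)            # True for these made-up numbers
```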
All three analyses require that:
• after the reconstruction of the vector boson, there are no additional leptons with pseudorapidity |η| < 2.5 and p_T > 30 GeV;
• other than the Higgs boson candidate, there are no additional b-tagged jets with pseudorapidity |η| < 2.5 and p_T > 50 GeV.
In addition, due to top contamination, criterion (c) requires that other than the Higgs boson candidate, there are no additional jets with |η| < 3 and p T > 30 GeV. For all events, the candidate Higgs boson jet should have p T > p min T . The analyses were implemented using the Rivet system [58].
The plots shown in Fig. 5 use the leading-order matrix elements for the production and decay of the Higgs boson, but the W, Z and top [50] have matrix element corrections for their decays. The plots shown in Fig. 6 have leading-order tt production and leading-order vector boson plus jet production (with the same matrix element corrections as the LO matrix elements), but NLO vector boson pair production [51] and NLO vector and Higgs boson associated production [52]. In addition we have implemented the corrections to the decay h0 → bb in the POWHEG scheme, as described in Appendix B. The signal significances are outlined in Table 4. The uncertainties due to the Monte Carlo simulation are shown as bands in Figs. 7 and 8. As there are correlations between the different processes, the uncertainty is determined for the sum of all processes. Whilst it would be possible to show the envelope for the individual processes, this would not offer any information on the envelope for the sum of the processes, which is the result of interest. In addition, the uncertainty on the significance is shown in Table 4.
Fig. 5 Results for the reconstructed Higgs boson mass distribution using leading-order matrix elements. A SM Higgs boson was assumed with a mass of 115 GeV. In addition to the full result, the contributions from top quark pair production (tt), the production of W± (W + Jet) and Z0 (Z + Jet) bosons in association with a hard jet, vector boson pair production (VV) and the production of a vector boson in association with the Higgs boson (V + Higgs) are shown
Fig. 6 Results for the reconstructed Higgs boson mass distribution using leading-order matrix elements for top quark pair production (tt), and the production of W± (W + Jet) and Z0 (Z + Jet) bosons in association with a hard jet. The next-to-leading-order corrections are included for vector boson pair production (VV) and the production of a vector boson in association with the Higgs boson (V + Higgs), as well as in the decay of the Higgs boson, h0 → bb. A SM Higgs boson was assumed with a mass of 115 GeV
Fig. 7 Results for the reconstructed Higgs boson mass distribution using leading-order matrix elements. A SM Higgs boson was assumed with a mass of 115 GeV. The envelope shows the uncertainty from the Monte Carlo simulation
Fig. 8 Results for the reconstructed Higgs boson mass distribution using leading-order matrix elements for top quark pair production, and the production of W± and Z0 bosons in association with a hard jet. The next-to-leading-order corrections are included for vector boson pair production and the production of a vector boson in association with the Higgs boson, as well as in the decay of the Higgs boson, h0 → bb. A SM Higgs boson was assumed with a mass of 115 GeV. The envelope shows the uncertainty from the Monte Carlo simulation
Table 4 The significance of the different processes for the leading- and next-to-leading-order matrix elements. The significance is calculated using all masses in the range 112-120 GeV
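As a rough illustration of the kind of counting significance quoted in Table 4, the following short Python snippet sums binned signal and background yields over the 112-120 GeV window and forms S/√B. All yields, bin widths, and numbers below are invented placeholders, not the paper's results.

```python
import numpy as np

edges = np.arange(104, 136, 4)                          # 4 GeV bins (assumed)
signal     = np.array([0.1, 0.3, 2.0, 2.4, 0.5, 0.2, 0.1])
background = np.array([5.0, 5.2, 5.5, 5.4, 5.1, 5.0, 4.9])

window = (edges[:-1] >= 112) & (edges[1:] <= 120)       # bins fully inside 112-120 GeV
S = signal[window].sum()
B = background[window].sum()
print(f"S = {S:.1f}, B = {B:.1f}, S/sqrt(B) = {S / np.sqrt(B):.2f}")
```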
Conclusions
Monte Carlo simulations are an essential tool in the analysis of modern collider experiments. While significant effort has been devoted to the tuning of the parameters to produce a best fit, there has been much less effort devoted to understanding the uncertainties in these results. In this paper we have produced a set of tunes which can be used to assess this uncertainty using the Herwig++ Monte Carlo event generator. We then used these tunes to assess the uncertainties on the mass-drop analysis of Ref. [5] using Herwig++ with both leading- and next-to-leading-order matrix elements, including a POWHEG simulation of the decay h 0 → bb.
We find that while the jet substructure technique has significant potential as a Higgs boson discovery channel, we need to be confident of our tunes to investigate this with Monte Carlo simulations.
The error tunes and procedure here can now be used in other analyses where the uncertainty due to the Monte Carlo simulation is important.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Herwig++
The weights and observables used in the Professor tuning system are outlined in Tables 5, 6, 7 and 8.
Table 5 Observables used in the tuning and associated weights for observables taken from [53]
Table 6 Observables used in the tuning and associated weights for observables taken from [44]
The NLO differential decay rate in the POWHEG [39] approach is written in terms of the following quantities. Here B(Φ_m) is the leading-order Born differential decay rate, V(Φ_m) the regularized virtual contribution, D_i(Φ_m, Φ_1) the counter terms regularizing the real emission and R(Φ_m, Φ_1) the real emission contribution. The leading-order process has m outgoing partons, with associated phase space Φ_m. The virtual and Born contributions depend only on this m-body phase space. The real emission phase space, Φ_{m+1}, is factorised into the m-body phase space and the phase space, Φ_1, describing the radiation of an extra parton. The Sudakov form factor in the POWHEG method involves k_T(Φ_m, Φ_1), the transverse momentum of the emitted parton.
Fig. 9 The two real-emission processes contributing to the NLO decay rate
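The displayed equations in this passage did not survive extraction. For orientation only, the standard POWHEG expressions for a decay with Born phase space Φ_m, which the text presumably refers to, are reproduced below under conventional assumptions; they are our reconstruction, not the paper's numbered equations.

```latex
% Standard POWHEG hardest-emission formula (schematic; conventions may differ):
\mathrm{d}\Gamma = \bar{B}(\Phi_m)\,\mathrm{d}\Phi_m
  \left[\Delta\!\left(p_T^{\min}\right)
      + \Delta\!\big(k_T(\Phi_m,\Phi_1)\big)\,
        \frac{R(\Phi_m,\Phi_1)}{B(\Phi_m)}\,\mathrm{d}\Phi_1\right],
\qquad
\bar{B}(\Phi_m) = B(\Phi_m) + V(\Phi_m)
  + \int\!\mathrm{d}\Phi_1\Big[R(\Phi_m,\Phi_1)-\sum_i D_i(\Phi_m,\Phi_1)\Big],

% with the Sudakov form factor
\Delta(p_T) = \exp\left[-\int\!\mathrm{d}\Phi_1\,
   \frac{R(\Phi_m,\Phi_1)}{B(\Phi_m)}\,
   \theta\big(k_T(\Phi_m,\Phi_1)-p_T\big)\right].
```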
In order to implement the decay of the Higgs boson in the POWHEG scheme in Herwig++ we need to generate the Born configuration according to Eq. (5) and the subsequent hardest emission according to Eq. (6). The generation of the truncated and vetoed parton showers from these configurations then proceeds as described in Refs. [29,52,54,55].
The virtual contribution for h 0 → bb was calculated in Ref. [56]. The corresponding real emission contribution is shown in Fig. 9. Together with the definition x_3 = x_⊥ cosh y, we obtain the Jacobian (13) for the transformation of the radiation variables. We can then generate the additional radiation according to Eq. (10) using the veto algorithm [3]. To achieve this we use an overestimate of the integrand in the Sudakov form factor, f(p_T) = c/p_T, where c is a suitable constant. We first generate a trial emission using this overestimate. Once the trial p_T has been generated, y and φ are also generated uniformly between [y_min, y_max] and [0, 2π], respectively. The energy fractions of the partons are obtained using the definition x_3 = x_⊥ cosh y, and x_2 using energy conservation. As there are two solutions for x_1, both solutions must be kept and used to calculate the weight for a particular trial p_T. The signs of the z-components of the momenta are fixed by the sign of the rapidity and momentum conservation. Any momentum configurations outside of the physically allowed phase space are rejected and a new set of variables generated. The momentum configuration is accepted with a probability given by the ratio of the true integrand to the overestimated value. If the configuration is rejected, the procedure continues with p_T^max set to the rejected p_T until the trial value of p_T is accepted or falls below the minimum allowed value, p_T^min. This procedure generates the radiation variables correctly as shown in Ref. [3].
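A minimal Python sketch of the veto algorithm described above is given here, assuming an overestimate of the form f(p_T) = c/p_T so that the trial Sudakov can be inverted analytically. The acceptance weight is an invented placeholder standing in for the ratio of the true integrand to the overestimate; only the control flow is meant to be illustrative.

```python
import math
import random

def accept_weight(pt, y, phi):
    """Placeholder for (true integrand) / (overestimate) at the trial point.
    Must lie in [0, 1]; here it is a made-up smooth function for illustration."""
    return 0.3 * math.exp(-abs(y))

def hardest_emission(pt_max, pt_min, y_min=-4.0, y_max=4.0, c=1.0):
    pt = pt_max
    while True:
        # Invert the trial Sudakov: exp(-c * ln(pt_prev / pt)) = r  =>  pt = pt_prev * r^(1/c)
        pt = pt * random.random() ** (1.0 / c)
        if pt <= pt_min:
            return None                        # no emission above the infrared cutoff
        y = random.uniform(y_min, y_max)
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.random() < accept_weight(pt, y, phi):
            return pt, y, phi                  # accepted: this is the hardest emission
        # rejected: continue the downward evolution from the rejected pt

random.seed(42)
print(hardest_emission(pt_max=100.0, pt_min=1.0))
```

In the full procedure this routine would be run once for the bottom and once for the antibottom, with the harder of the two accepted emissions kept, as described in the following paragraph.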
This procedure is used to generate a trial emission from both the bottom and antibottom. The hardest potential emission is then selected which correctly generates events according to Eq. (10) using this competition algorithm. | 5,926.2 | 2012-10-01T00:00:00.000 | [
"Physics"
] |
Quantization of subgroups of the affine group
Consider a locally compact group $G=Q\ltimes V$ such that $V$ is abelian and the action of $Q$ on the dual abelian group $\hat V$ has a free orbit of full measure. We show that such a group $G$ can be quantized in three equivalent ways: (1) by reflecting across the Galois object defined by the canonical irreducible representation of $G$ on $L^2(V)$; (2) by twisting the coproduct on the group von Neumann algebra of $G$ by a dual $2$-cocycle obtained from the $G$-equivariant Kohn-Nirenberg quantization of $V\times\hat V$; (3) by considering the bicrossed product defined by a matched pair of subgroups of $Q\ltimes\hat V$ both isomorphic to $Q$. In the simplest case of the $ax+b$ group over the reals, the dual cocycle in (2) is an analytic analogue of the Jordanian twist. It was first found by Stachura using different ideas. The equivalence of approaches (2) and (3) in this case implies that the quantum $ax+b$ group of Baaj-Skandalis is isomorphic to the quantum group defined by Stachura. Along the way we prove a number of results for arbitrary locally compact groups $G$. Using recent results of De Commer we show that a class of $G$-Galois objects is parametrized by certain cohomology classes in $H^2(G;\mathbb T)$. This extends results of Wassermann and Davydov in the finite group case. A new phenomenon is that already the unit class in $H^2(G;\mathbb T)$ can correspond to a nontrivial Galois object. Specifically, we show that any nontrivial locally compact group $G$ with group von Neumann algebra a factor of type I admits a canonical cohomology class of dual $2$-cocycles such that the corresponding quantization of $G$ is neither commutative nor cocommutative.
Consider a locally compact group G = Q V such that V is abelian and the action of Q on the dual abelian group V has a free orbit of full measure. We show that such a group G can be quantized in three equivalent ways: (1) by reflecting across the Galois object defined by the canonical irreducible representation of G on L 2 (V ); (2) by twisting the coproduct on the group von Neumann algebra of G by a dual 2-cocycle obtained from the Gequivariant Kohn-Nirenberg quantization of V ×V ; (3) by considering the bicrossed product defined by a matched pair of subgroups of Q V both isomorphic to Q.
In the simplest case of the ax + b group over the reals, the dual cocycle in (2) is an analytic analogue of the Jordanian twist. It was first found by Stachura using different ideas. The equivalence of approaches (2) and (3) in this case implies that the quantum ax + b group of Baaj-Skandalis is isomorphic to the quantum group defined by Stachura. Along the way we prove a number of results for arbitrary locally compact groups G. Using recent results of De Commer we show that a class of G-Galois objects is parametrized by certain cohomology classes in H 2 (G; T ). This extends results of Wassermann and Davydov in the finite group case. A new phenomenon is that already the unit class in H 2 (G; T ) can correspond to a nontrivial Galois object. Specifically, we show that any nontrivial locally compact group G with group von Neumann algebra a factor of type I admits a canonical cohomology class of dual 2-cocycles such that the corresponding quantization of G is neither commutative nor cocommutative.
It can be explicitly quantized using the Jordanian twist found independently by Coll-Gerstenhaber-Giaquinto [8] and Ogievetsky [27]. 1 This twist and its generalizations have been extensively studied, see, e.g., [20], [21]. It is particularly popular in physics literature, as it can be used to construct the κ-Minkowski space [6], which by [22] can also be obtained from a bicrossed product construction.
If we want to make sense of Ω as a unitary operator on L 2 (G ×G), since the elements x and y are skew-adjoint, our best bet is to take h ∈ iR, but then we still have a problem with the logarithm, as the spectrum of y is the entire line iR. A correct analytic analogue of the Jordanian twist was found by Stachura [30], see formula (3.31) below, but it turns out that, similarly to the case of SU (1,1), it is important to work with the entire nonconnected ax + b group. What to do in the connected case remains an open problem.
In fact, in an earlier paper [4] the first two authors found a universal deformation formula for the actions of the connected component of the ax + b group (and, more generally, of Kählerian Lie groups) on C * -algebras. Unfortunately, despite the claims in [26] and [5], this formula turned out to define a coisometric but nonunitary dual 2-cocycle, which is the term we prefer to use in the analytical setting instead of the "twist". See Remark 2.12 below and the erratum to [5] for further discussion.
If we do consider the nonconnected ax + b group, then even earlier, Baaj and Skandalis constructed its quantization as a bicrossed product of two copies of R * [29]. One disadvantage of this construction is that from the outset it is not clear how justifiable it is to call their quantum group a quantization of the ax + b group. Some justification was given later by Vaes and Vainerman [33].
The present work grew out of the natural question of how the constructions in [29], [4] and [30] are related. As we already said, we found out that [4] does not lead to a unitary cocycle and therefore cannot actually be used to quantize the ax + b group. But the constructions in [29] and [30] turned out to be equivalent, as was conjectured by Stachura. Furthermore, we found an interpretation of the Jordanian twist/Stachura cocycle in terms of the Kohn-Nirenberg quantization, which allowed us to construct quantum analogues of a class of semidirect products Q ⋉ V. We also realized that these constructions have a very natural description within De Commer's analytic version of the Hopf-Galois theory [11,12].
In more detail, the main results and organization of the paper are as follows. After a short preliminary section, we begin by discussing G-Galois objects for general locally compact groups G in Section 2. These are von Neumann algebras equipped with actions of G that are in an appropriate sense free and transitive. For compact groups such actions are known in the operator algebra literature as full multiplicity ergodic actions.
Using recent results of De Commer [12] we show that the G-Galois objects with underlying algebras factors of type I are classified by certain second cohomology classes on G (Theorem 2.4). For finite groups such a result quickly leads to a complete classification of G-Galois objects obtained by Wassermann [34] (although the result is not very explicit there, see [25]) and Davydov [10]. For infinite groups the situation is of course more complicated, as in general there exist Galois objects built on non-type-I algebras.
We show next that under extra assumptions a G-Galois object of the form (B(H), Ad π), where π is a projective representation of G on H, defines a dual unitary 2-cocycle (Proposition 2.9). We do not know whether these assumptions are always automatically satisfied, but we show that they are if π is a genuine representation (Theorem 2.13). This implies that if the group von Neumann algebra W * (G) of a nontrivial group G is a type I factor, then there exists a canonical nontrivial cohomology class of dual cocycles on G. This probably gives the shortest explanation of why a quantization of, for example, the ax + b group exists at the operator algebraic level. At the Lie (bi)algebra level this is related to Drinfeld's result on quantization of Frobenius Lie algebras [13].
The construction of the dual cocycle in Section 2 is, however, rather inexplicit, and in Section 3 we find a formula for such a cocycle for the semidirect products G = Q ⋉ V such that V is abelian and the action of Q on the dual abelian group V̂ has a free orbit of full measure (Assumption 2.15). It is well known that producing a dual 2-cocycle/twist is essentially equivalent to finding a G-equivariant deformation of an appropriate algebra of functions on G. Our assumptions on G = Q ⋉ V imply that we can identify L²(G) with L²(V × V̂) in a G-equivariant way. The Kohn-Nirenberg quantization of V × V̂ then provides a deformation of L²(G) and gives rise to a dual unitary 2-cocycle Ω (Theorem 3.12). The cohomology class of this cocycle is exactly the one we defined in Section 2 (Theorem 3.18).
In fact, there are two versions of the Kohn-Nirenberg quantization, so we get two cohomologous dual cocycles. In the case of the ax + b group we show that one of these cocycles coincides with Stachura's cocycle (Proposition 3.28).
Finally, in Section 4 we consider the bicrossed product defined by a matched pair of two copies of Q in Q ⋉ V̂. We show that this quantum group is self-dual and isomorphic to (W * (G), ΩΔ̂(·)Ω*) (Theorem 4.1 and Corollary 4.2). This is achieved by showing that the multiplicative unitary of the twisted quantum group (W * (G), ΩΔ̂(·)Ω*) is given by a pentagonal transformation on Q × V (Theorem 3.26), and by applying the Baaj-Skandalis procedure of reconstructing a matched pair of groups from such a transformation [2].
Preliminaries
Let G be a locally compact group. We fix a left invariant Haar measure dg on G and denote by L p (G), p ∈ [1, ∞], the associated function spaces.
The modular function Δ = Δ_G is defined by the relation ∫_G f(gh) dg = Δ(h)^{-1} ∫_G f(g) dg. Then Δ(g)^{-1} dg is a right invariant Haar measure on G.
In a similar way, if q ∈ Aut(G), then the modulus |q| = |q|_G of q is defined by the identity d(q(g)) = |q| dg. We let λ and ρ be the left and right regular (unitary) representations of G on L²(G). For a function f on G we define a reflected function f̃, and we let J = J_G and Ĵ = Ĵ_G be the modular conjugations of L∞(G) and W*(G) := λ(G)′′: Jf := f̄ and Ĵf := Δ^{-1/2} f̃; we also use the shorthand notation 𝒥 := JĴ. The multiplicative unitary Ŵ = Ŵ_G of the dual quantum group is defined in the usual way. The coproduct Δ̂: W*(G) → W*(G) ⊗ W*(G) on the group von Neumann algebra W*(G) is defined by Δ̂(λ_g) = λ_g ⊗ λ_g. We then have Δ̂(x) = Ŵ*(1 ⊗ x)Ŵ for x ∈ W*(G). Let now V be a locally compact Abelian group and V̂ be its Pontryagin dual. Elements of V will be denoted by the Latin letters v, v_j, v′, ..., while elements of V̂ will be denoted by the Greek letters ξ, ξ_j, ξ′, ... We will use additive notation both on V and on V̂.
The duality pairing V × V̂ → T will be denoted by e^{i⟨ξ,v⟩}. This is just a notation; we do not claim that there is an exponential function here. We fix a Haar measure dv on V and normalize the Haar measure dξ on V̂ so that the Fourier transform F_V is unitary. For functions in several variables only one of which is in V, we use the same symbol F_V to denote the partial Fourier transform in that variable.
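Several of the displayed formulas in these preliminaries were lost in extraction. Purely for orientation, one common set of conventions consistent with the surviving relations (for example ρ_g = Ĵλ_gĴ and the unitarity of F_V) is sketched below; these are our assumptions and are not recovered from the paper itself.

```latex
% One common normalisation (assumed, not taken from the paper):
(\lambda_g f)(h) = f(g^{-1}h), \qquad
(\rho_g f)(h) = \Delta(g)^{1/2} f(hg), \qquad f \in L^2(G),

(\mathcal{F}_V f)(\xi) = \int_V f(v)\, e^{-i\langle \xi, v\rangle}\,\mathrm{d}v,
\qquad \text{with } \mathrm{d}\xi \text{ normalised so that }
\mathcal{F}_V \colon L^2(V) \to L^2(\hat V) \text{ is unitary.}
```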
Projective representations and Galois objects
Hopf-Galois objects are a well-studied topic in Hopf algebra theory. An adaptation of this notion to locally compact quantum groups has been developed by De Commer [11]. Let us recall the main definitions. We will do this for genuine groups, as this is mainly the case we are interested in, but it will be important for us that the theory is developed at least for locally compact groups and their duals.
Let G be a locally compact group and β be an action of G on a von Neumann algebra N. Such an action is called integrable if the operator-valued weight P(a) = ∫_G β_g(a) dg, a ∈ N_+, is semifinite. If β is in addition ergodic, then we get a normal semifinite faithful weight φ̃ on N such that P(a) = φ̃(a)1. Note that φ̃(β_g(a)) = Δ(g)^{-1} φ̃(a) for all a ∈ N_+. (2.1) We can then define an isometric map, the Galois map G, where Λ: N_φ̃ → L²(N, φ̃) denotes the GNS-map. The pair (N, β) consisting of a von Neumann algebra N and an ergodic integrable action β of G on N is called a G-Galois object if the Galois map G is unitary.
The following characterization of Galois objects is often easier to use.
Proposition 2.1. A pair (N, β) consisting of a von Neumann algebra N and an ergodic integrable action β of a locally compact group G on N is a G-Galois object if and only if N ⋊ G is a factor, which is then necessarily of type I.
Proof. This is a simple consequence of results in [11, Section 2]. Indeed, the integrability assumption implies that N ⋊ G has a canonical representation η on L²(N, φ̃), and then by [11, Theorem 2.1] the pair (N, β) is a G-Galois object if and only if η is faithful. If N ⋊ G is a factor, then η is faithful, so we get one implication in the proposition. Next, the ergodicity of the action β implies that the action on N by the automorphisms Ad η(λ_g) is ergodic as well, that is, N ∩ η(W*(G))′ = C1. It follows that η(N ⋊ G) = B(L²(N, φ̃)). Hence, if (N, β) is a Galois object, then N ⋊ G is a factor canonically isomorphic to B(L²(N, φ̃)).
We will be interested in G-Galois objects that are themselves factors of type I, in which case we say, following [12], that (N , β) is a I-factorial G-Galois object. Identifying N with B(H) for a Hilbert space H, we then get a projective unitary representation π : G → P U(H) such that β g = Ad π(g). Note that the equivalence class of π is uniquely determined by (N , β).
Remark 2.2. Ergodicity of β = Ad π is equivalent to irreducibility of π. Assuming ergodicity, integrability of the action Ad π is equivalent to square-integrability of the irreducible projective representation π, meaning that there are nonzero vectors ξ, ζ for which the function g → (π(g)ξ, ζ) is square-integrable. This observation goes back to [9, Example 2.8, Chapter III], but let us give some details.
Note for future use that property (2.1) translates into (Ad π(g))(K) = Δ(g)K. (2.2) Since the action Ad π is ergodic, this determines K uniquely up to a scalar factor. In particular, as was already observed in [14], if G is unimodular then K is scalar, and otherwise K is unbounded.
Remark 2.3. By [11], given a G-Galois object, we get a locally compact quantum group G′ obtained by reflecting G across the Galois object. If G is abelian, then G′ = G. But if G is a nonabelian genuine locally compact group and our Galois object has the form (B(H), Ad π) for a projective representation π of G, then G′ is a genuine quantum group. Indeed, assume G′ is a group. By the general theory we know that B(H) is a G′-Galois object with respect to an action β′ of G′ commuting with the action of G, see [11]. There exist scalars χ_g(g′) ∈ T such that β′_{g′}(π̃(g)) = χ_g(g′)π̃(g) for all g ∈ G, g′ ∈ G′, where π̃(g) is any lift of π(g) to U(H). Then χ_g is a character of G′. Furthermore, by ergodicity of the action β′, if χ_{g1} = χ_{g2} for some g1, g2 ∈ G, then π(g1) and π(g2) coincide up to a scalar factor, which by surjectivity of the Galois map for (B(H), Ad π) is possible only when g1 = g2. Therefore the map g ↦ χ_g is an injective homomorphism from G into the group of characters of G′. Hence G is abelian, which contradicts our assumption.
By a recent duality result of De Commer [12], for any locally compact quantum group G, there is a bijection between the isomorphism classes of I-factorial G-Galois objects and I-factorial Ĝ-Galois objects. This bijection is constructed as follows. Suppose we are given a I-factorial G-Galois object (N, β). Then N′ ∩ (N ⋊ G), equipped with the dual action, becomes a I-factorial Galois object for Ĝ. (More precisely, we rather get a Galois object for the opposite comultiplication on L∞(Ĝ) and then an additional application of modular conjugations is needed to really get a Galois object for Ĝ, but this is unnecessary in our setting of genuine groups and their duals.) There is a simple class of Ĝ-Galois objects constructed as follows. Assume from now on that G is second countable. Let ω be a T-valued Borel 2-cocycle on G. Consider the ω-twisted left regular representation λ^ω: G → B(L²(G)), satisfying λ^ω_g λ^ω_h = ω(g, h) λ^ω_{gh}, and let W*(G; ω) := λ^ω(G)′′ ⊂ B(L²(G)). Then W*(G; ω) equipped with the coaction λ^ω_g ↦ λ^ω_g ⊗ λ_g of G (or in other words, the action of Ĝ) is a Ĝ-Galois object, see [11, Section 5] for this statement in the setting of locally compact quantum groups.
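The displayed formula defining λ^ω did not survive extraction. One standard choice that satisfies the quoted relation λ^ω_g λ^ω_h = ω(g, h) λ^ω_{gh} is recorded below; this is an assumed convention on our part, and the paper's formula may differ by a complex conjugate or a change of variable.

```latex
(\lambda^{\omega}_g \xi)(h) = \omega(g,\, g^{-1}h)\,\xi(g^{-1}h),
\qquad \xi \in L^2(G),\quad g, h \in G .
```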
In fact, this covers all possible I-factorial Ĝ -Galois objects and by duality we get a description of the I-factorial G-Galois objects: Theorem 2.4. For any second countable locally compact group G, there is a bijection between the isomorphism classes of I-factorial G-Galois objects and the cohomology classes [ω] ∈ H 2 (G; T ) such that the twisted group von Neumann algebra W * (G; ω) is a type I factor.
Here H 2 (G; T ) denotes the Moore cohomology of G, which is based on Borel cochains [23].
Explicitly, the bijection in the theorem is defined as follows. To a I-factorial G-Galois object (B(H), Ad π) we associate the cohomology class [ω_π] ∈ H²(G; T) defined by the projective representation π of G. Recall that this means that we lift π to a Borel map π̃: G → U(H) and define a Borel T-valued 2-cocycle ω_π on G by the identity π̃(g)π̃(h) = ω_π(g, h)π̃(gh).
For finite groups G, this theorem is essentially due to Wassermann [34] (see also [25]), as well as to Davydov [10] in the purely algebraic setting.
We divide the proof of Theorem 2.4 into a couple of lemmas. Let (B(H), Ad π) be a I-factorial G-Galois object and ω be the cocycle defined by a lift π̃ of π. Lemma 2.5. The Ĝ-Galois object associated with the I-factorial G-Galois object (B(H), Ad π) is isomorphic to (W*(G; ω), α), where α denotes the corresponding coaction of G. Proof. We have an isomorphism as follows. Explicitly, the crossed product B(H) ⋊ G is the von Neumann subalgebra of B(L²(G; H)) generated by the operators λ_g, considered as operators on L²(G; H), and the operators T̃ for T ∈ B(H) defined by (T̃ξ)(g) = π̃(g)* T π̃(g)ξ(g).
Then the required isomorphism is given by Ad U for a suitable unitary U on L²(G; H). This gives the result.
In particular, it follows that W*(G; ω) is a type I factor. As J W*(G; ω) J = W*(G; ω̄), the von Neumann algebra W*(G; ω̄) is a type I factor as well. By De Commer's duality result [12] we conclude that the map associating [ω_π] to the isomorphism class of (B(H), Ad π) is injective and its image is contained in the set of cohomology classes [ω] such that W*(G; ω) is a type I factor.
Consider now an arbitrary cohomology class [ω] ∈ H 2 (G; T ) such that W * (G; ω) is a type I factor. To finish the proof it suffices to establish the following.
Dual cocycles
An important class of G-Galois objects arises from dual 2-cocycles. By a dual unitary 2-cocycle on G we mean a unitary element Ω ∈ W*(G) ⊗ W*(G) such that
(Ω ⊗ 1)(Δ̂ ⊗ ι)(Ω) = (1 ⊗ Ω)(ι ⊗ Δ̂)(Ω). (2.4)
Similarly to the Ĝ-Galois objects W*(G; ω) considered above, such cocycles lead to G-Galois objects W*(Ĝ; Ω) (which for notational consistency with W*(G; ω) we should have rather denoted by W*(Ĝ; Ω*)). Following [26, Section 4], they can be described as follows.
Identify as usual the Fourier algebra A(G) with the predual of W*(G). Given a dual unitary 2-cocycle Ω ∈ W*(G) ⊗ W*(G), the von Neumann algebra N := W*(Ĝ; Ω) ⊂ B(L²(G)) is generated by the operators π_Ω(f), f ∈ A(G). Define an action β of G on N in terms of the right regular representation, where we remind that ρ_g = Ĵλ_gĴ. The map π_Ω is the representation of the algebra A(G) equipped with the new product (2.5). The representation π_Ω has the equivariance property (2.6). By [33, Section 1.3], the canonical weight φ on N has the following description. Its GNS-space can be identified with L²(G), with the GNS-map Λ: N_φ → L²(G) uniquely determined by the corresponding formula, where we remind that f̃(g) = f(g^{-1}). In particular, for f as above we have the analogous expression. Two dual unitary 2-cocycles Ω, Ω′ are called cohomologous if there exists a unitary u ∈ W*(G) such that Ω′ = (u ⊗ u)ΩΔ̂(u)*. The cohomology classes form a set H²(Ĝ; T). In general this set does not have any extra structure.
Proposition 2.7. Two dual unitary 2-cocycles Ω, Ω′ on a locally compact group G are cohomologous if and only if they define isomorphic G-Galois objects.
Denote by G and G′ the corresponding Galois maps. Then by definition we have the associated relations. On the other hand, by [11, Proposition 5], using again that Δ̂(u) = Ŵ*(1 ⊗ u)Ŵ, we conclude that Ω′ = (u ⊗ u)ΩΔ̂(u)*.
Combined with Theorem 2.4 this proposition allows one to describe a part of H²(Ĝ; T) in terms of the cohomology of G. Namely, denote by H²_I(Ĝ; T) the subset of H²(Ĝ; T) formed by the classes [Ω] such that W*(Ĝ; Ω) is a type I factor. Given such an Ω, we can identify W*(Ĝ; Ω) with B(H) for a Hilbert space H. Then the action β of G on W*(Ĝ; Ω) is given by a projective representation π of G on H, and we denote by c_Ω the corresponding 2-cocycle ω_π on G. Similarly, denote by H²_I(G; T) the subset of H²(G; T) formed by the classes [ω] such that W*(G; ω) is a type I factor. The map [Ω] ↦ [c_Ω] then defines an injective map H²_I(Ĝ; T) → H²_I(G; T) (Corollary 2.8). A natural question is whether this embedding is onto. We do not know the answer, but as a step towards a solution of this problem let us explain how dual cocycles arise from Galois maps under extra assumptions.
It will be useful to go beyond Galois objects. Assume we are given a square-integrable irreducible projective representation π of G on H. Assume also that we are given a unitary map Op: L²(G) → HS(H), (2.9) which we will call a quantization map. Here HS(H) denotes the Hilbert space of Hilbert-Schmidt operators on H. As in Section 2.1, consider the Duflo-Moore operator K on H and the weight φ = Tr(K^{1/2} · K^{1/2}). As the GNS-space for φ we could take HS(H), with the GNS-map Λ uniquely determined by Λ(T K^{-1/2}) := T for T ∈ HS(H) such that T K^{-1/2} is a bounded operator. But using the unitary Op we can transport everything to L²(G). Thus we take L²(G) as the GNS-space, with the GNS-map Λ̃ determined accordingly. Consider the corresponding Galois map G, where we remind that 𝒥 = JĴ.
Proposition 2.9. With the above setup and notation, the operator Ω is coisometric. It lies in the algebra W * (G)⊗W * (G) and satisfies the cocycle identity (2.4).
Proof. Since the Galois maps are always isometric, it is clear that Ω is coisometric.
Turning to the cocycle identity, by [11, Proposition 3.5] the Galois map G (denoted by G̃ in op. cit.) satisfies the following hybrid pentagon relation. (More precisely, the result in [11] is formulated only for Galois objects, but an inspection of the proof shows that it remains valid for arbitrary integrable ergodic actions.) Using this, together with Δ̂(x) = Ŵ*(1 ⊗ x)Ŵ and the pentagon relation Ŵ_12 Ŵ_13 Ŵ_23 = Ŵ_23 Ŵ_12, we obtain the required cocycle identity. Finally, by the definition of Ω, the Galois map G is unitary if and only if Ω is unitary. Assuming that Ω and G are unitary, by [11, Proposition 3.6(1)] the elements (ω ⊗ ι)(G) for ω ∈ B(L²(G))_* span a σ-weakly dense subspace of π_φ(B(H)). Recalling the definition of W*(Ĝ; Ω), we conclude that it coincides with 𝒥 π_φ(B(H)) 𝒥. The action of G on B(H) is implemented on the GNS-space by the unitaries λ_g, while that on W*(Ĝ; Ω) by the unitaries ρ_g. Since 𝒥 ρ_g 𝒥 = λ_g, we see that the Galois objects (B(H), Ad π) and (W*(Ĝ; Ω), β) are indeed isomorphic.
Remark 2.10. Although [11, Proposition 3.6(1)] is formulated only for the Galois objects, its proof remains valid for any integrable ergodic action. Therefore we see that starting from a square-integrable irreducible projective representation π of G on H and a unitary quantization map Op : L 2 (G) → HS(H), we can define a dual coisometric cocycle Ω by (2.12) and then the set of elements (ω ⊗ ι)(Ŵ Ω * ) for ω ∈ B(L 2 (G)) * span a σ-weakly dense subspace of J πφ(B(H))J . Therefore Ω contains a complete information about (B(H), Ad π) independently of whether we deal with a Galois object or not.
Remark 2.11. The quantization map Op defines a product on L²(G). In general the product defined by Ω is apparently not the same as this product on A(G) ∩ L²(G). In order to see this, let us proceed a bit informally, without trying to fully justify every step. By definition we have the relation defining G. Take functions f_1, f_2 ∈ L²(G) and consider the function f ∈ A(G) defined by f(g) = (λ_g f_1, f_2). Applying (·f_1, f_2) to the first leg of G(𝒥 ⊗ 1) we then obtain an expression showing that Ω is related to a second quantization map Op′. The products defined by the quantization maps Op and Op′ coincide if and only if the map relating them is an automorphism with respect to the product. In general we see no reason why this should be the case. But this can happen. Observe that since Δ is the only positive measurable function F on G such that λ_g F = Δ(g)^{-1} F, any reasonable extension of Op to a class of functions including Δ^s should satisfy the corresponding scaling identities. For the examples studied in this paper we will indeed have such identities, with c = 1 and α = 1/2. On the other hand, for the example studied in [5] (which is not a Galois object) we had c = 1 and α = 1/4.
We thus see that the problem of describing H^2_I(Ĝ; T) reduces to the following question: is it true that for any I-factorial Galois object (B(H), Ad π) there is a unitary quantization map (2.9)? This can be reformulated as a representation-theoretic problem as follows. Assume we are given a 2-cocycle ω on G such that W^*(G; ω) is a type I factor. We identify W^*(G; ω) with B(H) and put π_ω(g) := λ^ω_g ∈ B(H). Then g ↦ π_ω(g) ⊗ π^c_ω(g) is a well-defined unitary representation of G on H ⊗ H̄, where π^c_ω(g)ξ̄ is the complex conjugate of π_ω(g)ξ. Is this representation equivalent to the regular representation?
The answer is known to be "yes" for finite groups [34, 24]. Indeed, the Galois map gives a unitary equivalence between (Ad π_ω) ⊗ ε_{HS(H)} and ρ ⊗ ε_{HS(H)}, where ε_L denotes the trivial representation of G on the Hilbert space L. This implies the required equivalence π_ω ⊗ π^c_ω ∼ λ for finite groups G, but falls short of what we need for general G.
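As a sanity check (not a substitute for the cited argument), in the finite case the two representations in question at least have the same dimension. This is a sketch assuming only that W^*(G; ω) ≅ B(H):

```latex
% If G is finite and W^*(G;\omega)\cong B(H), comparing dimensions as vector spaces gives
|G| \;=\; \dim_{\mathbb C} W^*(G;\omega) \;=\; (\dim H)^2 ,
% so the representation \pi_\omega \otimes \pi_\omega^{c} on H\otimes\bar H has dimension
\dim\bigl(H\otimes\bar H\bigr) \;=\; (\dim H)^2 \;=\; |G| \;=\; \dim L^2(G),
% matching the dimension of the regular representation \lambda.
```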
Remark 2.12. In order to stress that it can be dangerous to rely too much on analogies with the finite group case, note that for any square-integrable irreducible projective representation π of G on H, the Galois map always defines an embedding of the representation Ad π ⊗ ε HS(H) on HS(H) ⊗ HS(H) into ρ ⊗ ε HS(H) . It follows that for finite groups existence of a unitary quantization map (2.9) is equivalent to (B(H), Ad π) being a Galois object. This is certainly (but until recently unexpectedly!) not the case for general groups. For example, for the connected component G of the ax + b group over R there are two inequivalent infinite dimensional irreducible unitary representations. They are both square-integrable and both admit unitary quantization maps [4]. But G has no I-factorial Galois objects, since H 2 (G; T ) is trivial and W * (G) is the sum of two type I factors.
Dual cocycles defined by genuine representations
We now turn to Galois objects (B(H), Ad π) defined by genuine representations.
Recall that by Proposition 2.7 the cohomology class of Ω is determined by the isomorphism class of the corresponding Galois object. Therefore the above theorem shows that if W * (G) is a type I factor, then we get a canonical class [Ω] ∈ H 2 (Ĝ; T ). In terms of Corollary 2.8, this class corresponds to the unit of H 2 (G; T ).
Note also that the condition that W * (G) is a type I factor implies that G is neither compact nor discrete, so the situation described in the theorem is a purely analytical phenomenon.
Proof of Theorem 2.13. The first statement is an immediate consequence of Theorem 2.4 applied to genuine representations and, correspondingly, to the trivial 2-cocycle ω = 1 on G. Furthermore, that theorem implies that π is unique up to equivalence as a projective representation. Therefore to prove part (i) we only have to show the slightly stronger statement that π is also unique up to equivalence as a genuine representation. Thus, we identify W * (G) with B(H), take π(g) = λ g , and we have to show that for any character η : G → T the representations ηπ and π are equivalent. The representations ηλ and λ are unitarily equivalent, e.g., by Fell's absorption principle. It follows that there exists an automorphism θ of W * (G) such that θ(λ g ) = η(g)λ g for all g. As θ is an automorphism of B(H) = W * (G), it is unitarily implemented, which means exactly that ηπ and π are unitarily equivalent.
In order to prove part (ii), by our results in Section 2.2 it suffices to show that the representation π ⊗ π c is equivalent to the regular representation.
Since we can identify W * (G) with B(H) in such a way that π(g) = λ g , the representation λ is a multiple of π, so we can write λ ∼ π ⊗ ε L , where ε L is the trivial representation on a separable Hilbert space L. Since G is nontrivial, the Hilbert space H must be infinite dimensional. But then the multiplicity of the square-integrable representation π in λ must be infinite as well [14], so the Hilbert space L is infinite dimensional.
By passing to the conjugate representations we conclude that the irreducible representations π^c and π are equivalent. Next, by Fell's absorption principle π ⊗ λ is a multiple of λ. From this we see that the representation π ⊗ π is a multiple of π, and in order to conclude that π ⊗ π ∼ λ it suffices to show that the multiplicity of π in π ⊗ π is infinite. In other words, we have to check that the commutant of (π ⊗ π)(G) is infinite dimensional.
More generally, let us show that if G_1 is a closed nonopen subgroup of a second countable locally compact group G_2 such that W^*(G_1) is a type I factor, then the relative commutant W^*(G_1)′ ∩ W^*(G_2) cannot be finite dimensional. Denote by Δ_i the modular function of G_i, by μ_i the Haar measure on G_i and by φ_i the standard Haar weight on W^*(G_i). The modular group of φ_2 is given by σ_t(λ_g) = Δ_2(g)^{it} λ_g. It preserves W^*(G_1), and since W^*(G_1) is a type I factor, there exists a normal semifinite faithful weight φ̃_1 on W^*(G_1) with the same modular group. Consider the unique normal semifinite operator-valued weight P from W^*(G_2) to W^*(G_1) such that φ̃_1 ∘ P = φ_2. Assuming that W^*(G_1)′ ∩ W^*(G_2) is finite dimensional, it follows, e.g., from [32, Corollary 12.12] applied to M = W^*(G_1), that such an operator-valued weight must be bounded, hence it is a scalar multiple of a conditional expectation. Denote by E the conditional expectation obtained by rescaling P, so that c φ̃_1 E = φ_2. Using the identity φ_2(xy) = c φ̃_1(E(x)y) for appropriate elements x ∈ W^*(G_2) and y ∈ W^*(G_1), it is not difficult to compute E on a dense set of elements. Applying this to functions f ≥ 0 supported in arbitrarily small neighborhoods of the identity and normalized by ‖f‖_1 = 1, which form an approximate unit in L^1(G_2), and using the assumed continuity of E, we conclude that E(∫_{G_2} f(g)λ_g dμ_2(g)) is given by an integral over G_1 involving only the restriction of f to G_1. But such a formula certainly defines an unbounded map, so we get a contradiction. Indeed, take any nonzero function f̃ ∈ C_c(G_1). Since G_1 has zero measure in G_2, we can extend f̃ to a function f ∈ C_c(G_2) with the same supremum-norm but supported in a set of arbitrarily small Haar measure, so that the L^1-norm of f can be made arbitrarily small. Since the operator norm is dominated by the L^1-norm, we thus see that the preimage of ∫_{G_1} f̃(g)λ_g dμ_1(g) under E has elements of arbitrarily small norm.
As we have already observed, the class [Ω] ∈ H^2(Ĝ; T) corresponds to the unit of H^2(G; T), so one might wonder whether Ω is also a coboundary, that is, cohomologous to 1. This is surely not the case; in fact, the following stronger nontriviality property holds.
Proposition 2.14. If G is a nontrivial second countable locally compact group with group von Neumann algebra a factor of type I, and Ω is the dual unitary 2-cocycle given by Theorem 2.13, then the twisted locally compact quantum group (W * (G), ΩΔ(·)Ω * ) is neither commutative nor cocommutative.
Proof. The algebra W^*(G), being a nontrivial type I factor, is clearly noncommutative. This implies that the group G is noncommutative. By Remark 2.3 it follows that the quantum group G_Ω obtained by reflecting G across the Galois object (W^*(Ĝ; Ω), β) is a genuine quantum group, that is, the coproduct Δ_Ω on W^*(G_Ω) is not cocommutative [11].
Examples: subgroups of the affine group
We now introduce the main class of examples studied in this paper. Let V be a nontrivial second countable locally compact abelian group, let Q be a second countable locally compact group of continuous automorphisms of V, and call G the semidirect product Q ⋉ V ⊂ Aff(V) := Aut(V) ⋉ V. It has the group law (q, v)(q′, v′) = (qq′, v + qv′). The unit element is (id, 0) and the inverse is (q, v)^{-1} = (q^{-1}, -q^{-1}v). Whenever convenient we identify q ∈ Q with (q, 0) ∈ Q ⋉ V and v ∈ V with (id, v) ∈ Q ⋉ V. We will also usually write 1 instead of id for the unit of Q.
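The stated formula for the inverse is consistent with the group law written above; the following one-line verification is only a check of the convention used here.

```latex
(q,v)(q',v') = (qq',\, v + qv'), \qquad
(q,v)\,(q^{-1},-q^{-1}v) = \bigl(qq^{-1},\, v + q(-q^{-1}v)\bigr) = (\mathrm{id},0),
\qquad
(q^{-1},-q^{-1}v)\,(q,v) = \bigl(q^{-1}q,\, -q^{-1}v + q^{-1}v\bigr) = (\mathrm{id},0).
```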
We denote by (q, ξ) ↦ q·ξ the dual action of Q on V̂, defined by the identity (q·ξ)(v) = ξ(q^{-1}v). We will study the groups Q ⋉ V satisfying the following property.
Assumption 2.15. There exists a point ξ_0 ∈ V̂ such that the map φ : Q → V̂, φ(q) := q·ξ_0, is a measure class isomorphism. In this case, we say that (V, Q) satisfies the dual orbit condition.
We note in passing that the groups studied in [17] do not satisfy this condition.
Remark 2.16. If (V, Q) satisfies the dual orbit condition, then V cannot be compact, since the neutral element of V̂ cannot belong to the Q-orbit of ξ_0, and therefore V cannot be discrete. Note also that the stabilizer of ξ_0 in Q must be trivial, so that the map φ is injective.
In order to give some concrete examples, let K be a nondiscrete locally compact skew-field. If K is commutative, then K is a local field. This means that K is isomorphic either to R, to C, to a finite degree extension of the field of p-adic numbers Q_p, or to a field F_q((X)) of Laurent series with coefficients in a finite field F_q. If K is noncommutative, then K is isomorphic to a finite dimensional division algebra over a local field k. As an abelian group K is self-dual, with a pairing K × K → T given by (a, b) ↦ χ(Tr_{K/k}(ab)) for a nontrivial character χ of k. Denote by χ_K the character χ ∘ Tr_{K/k}. Here also, both (V, Q) and (V̂, Q) satisfy the dual orbit condition.
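As a concrete instance of the self-duality just described, one may take K = k = R, so that the trace is the identity. The choice of character below is an assumption (any nontrivial character of R works, and normalizations differ); it only illustrates the pairing.

```latex
K = \mathbb{R}, \qquad \chi_K(x) = e^{2\pi i x}, \qquad
\mathbb{R}\times\mathbb{R} \to \mathbb{T}, \quad (a,b) \mapsto \chi_K(ab) = e^{2\pi i ab},
```

which identifies R with its Pontryagin dual.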
Example 2.19. For n ≥ 1 and m ≥ 2, let Then (V, Q) satisfies the dual orbit condition, since for M 1 , . . . , M m , B 1 , . . . , B m−1 ∈ Mat n (K) and A ∈ GL n (K) we have On the other hand, the dual pair (V , Q) does not satisfy the dual orbit condition, as there is no Q-orbit of full measure in V . Let us now say a few words about the case when Q is a real or complex Lie group, V is a finite dimensional vector space and the action of Q on V is given by a representation ρ. In this case we can identify V with V * , and the action of Q on V * is given by the contragredient representation ρ c . If (Q, V ) satisfies the dual orbit condition, then Q has the same dimension as V and the map φ : Q → V * is open, so (V * , ρ c (Q)) is a prehomogeneous vector space.
Remark 2.21. In the complex case, assuming dim Q = dim V , the dual orbit condition is satisfied for ξ 0 ∈ V * as long as ξ 0 has trivial stabilizer. Indeed, consider the set Ω of vectors ξ ∈ V * such that the map q → V * , X → (dρ c )(X)ξ, is a linear isomorphism. This set is nonempty, as ξ 0 ∈ Ω, and Zariski open in V * . It follows that it is a dense, connected, open subset of V * in the usual topology. Since the Q-orbit of every element of Ω is open, it follows that Ω consists entirely of one orbit, so the dual orbit condition is satisfied. As a byproduct we see that Q must be connected.
Note that these arguments do not apply in the real case, as a Zariski open subset of R n can have finitely many connected components.
By a repeated use of the following simple lemma we can construct more and more complicated examples. Lemma 2.22. If, in the complex Lie group setting above, (Q, V) satisfies the dual orbit condition, then so does the pair given by the adjoint action of G := Q ⋉ V on its Lie algebra g. Proof. By Remark 2.21 it suffices to find a vector η_0 ∈ g^* with trivial stabilizer. By assumption there exists ξ_0 ∈ V^* with trivial stabilizer in Q. Identifying g with q × V we let η_0(X, w) := ξ_0(w). The adjoint action of G on g can be written out explicitly (see the sketch following this proof). Assume now that (q, v) ∈ G lies in the stabilizer of η_0. Taking X = 0, we see that q stabilizes ξ_0, hence q = e. Since the map q → V^*, X ↦ (dρ^c)(X)ξ_0, is an isomorphism, we then conclude that v = 0.
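The formulas omitted from this proof can be reconstructed as follows; this is a sketch using one common sign convention for the semidirect product Lie algebra g = q ⊕ V, which may differ from the authors' conventions by signs.

```latex
% Adjoint action of G = Q \ltimes V on \mathfrak g = \mathfrak q \oplus V (one convention):
\mathrm{Ad}_{(q,v)}(X, w) \;=\; \bigl(\mathrm{Ad}_q X,\; q\,w - (d\rho)(\mathrm{Ad}_q X)\,v\bigr).
% Stabilizer condition for \eta_0(X,w) = \xi_0(w): for all (X,w),
%   \xi_0\bigl(V\text{-component of } \mathrm{Ad}_{(q,v)^{-1}}(X,w)\bigr) = \xi_0(w).
% Taking X = 0 gives \xi_0(q^{-1}w) = \xi_0(w) for all w, i.e. q\cdot\xi_0 = \xi_0, hence q = e.
% With q = e one is left with \xi_0\bigl((d\rho)(X)v\bigr) = 0 for all X; since
% X \mapsto (d\rho^{c})(X)\xi_0 is onto V^*, every functional annihilates v, so v = 0.
```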
This lemma and its proof show, both in real and complex cases, that if (Q, V ) satisfies the dual orbit condition, then the Lie algebra of G := Q V is Frobenius, meaning that G has an open coadjoint orbit, or equivalently, there exists η 0 ∈ g * such that the bilinear form η 0 ([X, Y ]) on g is nondegenerate. This has already been observed in [28]. Moreover, the converse is almost true: by [28,Theorem 4.1], if dim Q = dim V and the Lie algebra of Q V is Frobenius, then there exists ξ 0 ∈ V * such that the map q → V * , X → (dρ c )(X)ξ 0 , is a linear isomorphism. This is a bit less than what we need since the stabilizer of ξ 0 can still be a nontrivial discrete subgroup of Q and, in the real case, the open set ρ c (Q)ξ 0 ⊂ V * can be nondense.
In addition to Lemma 2.22, another way of producing new examples is to start with a pair (Q, V ) satisfying the dual orbit condition and multiply the representation of Q on V by a quasi-character of Q. This can destroy the dual orbit condition, but not necessarily.
Example 2.23. Consider Q = C^*, V = C, with Q acting on V by multiplication. Obviously, the dual orbit condition is satisfied. By Lemma 2.22 the adjoint action of the ax + b group G := Q ⋉ V satisfies the dual orbit condition as well. This is basically Example 2.19 for n = 1 and m = 2. If we multiply the adjoint representation of G by the quasi-character (q, v) ↦ q^k, k ∈ Z, we get a representation ρ_k. The action of G on C^2 defined by ρ_k has a dense open dual orbit if and only if k ≠ −1. The dual orbit condition is satisfied if and only if k = 0 or k = −2, while for the other k ≠ −1 the stabilizers of the points on the dense dual orbit are finite and nontrivial. We can use the same formulas in the real case. Then the dual orbit condition is satisfied if and only if k is even.
We remark in passing that for k = 1 the group G ⋉_{ρ_1} C^2 is an extension of C^* by the Heisenberg group. It is used in the construction of an extended Jordanian twist in [20].
Returning to the general case, we have the following result: under the dual orbit Assumption 2.15 the algebra W^*(G) is a type I factor. By Theorem 2.13 it follows that we get a canonical class [Ω] ∈ H^2(Ĝ; T) defined by the Galois object (W^*(G), Ad λ). In order to get an explicit representative of this class we need to fix a (unique up to equivalence) representation π as in that theorem and choose a unitary quantization map (2.9). This is the task we are going to undertake in the next section.
Kohn-Nirenberg quantization of V × V̂
Quantization of the abelian self-dual group V × V̂ has been considered long ago [31, 35].
Here we consider the Kohn-Nirenberg quantization. It has some benefit compared with the Weyl type quantization, for which one has to assume the map V → V, v ↦ 2v, to be a homeomorphism.
The Kohn-Nirenberg quantization can initially be defined as the continuous injective linear map from tempered Bruhat distributions on V × V̂ to continuous linear operators from the Bruhat-Schwartz space on V to tempered Bruhat distributions on V (see [7, Section 9] for a precise definition). Remark 3.1. The quantization map we use should rather be called anti-Kohn-Nirenberg; the Kohn-Nirenberg one differs from it by a reordering in the defining formula. The two quantization maps are unitarily equivalent, and for our purposes it is easier to work with the anti-Kohn-Nirenberg one.
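Since the defining display did not survive, it may help to recall the familiar Euclidean model. This is the standard Kohn-Nirenberg formula on R^n with the 2π-convention, offered only for orientation; the normalization in the omitted formula for general V may differ.

```latex
% Kohn--Nirenberg quantization of a symbol F on \mathbb{R}^n \times \widehat{\mathbb{R}^n}:
\bigl(\mathrm{Op}_{KN}(F)\varphi\bigr)(x)
  = \int_{\mathbb{R}^n} F(x,\xi)\,\hat\varphi(\xi)\, e^{2\pi i x\cdot\xi}\, d\xi,
\qquad
\hat\varphi(\xi) = \int_{\mathbb{R}^n} \varphi(y)\, e^{-2\pi i y\cdot\xi}\, dy.
% The ``anti'' version evaluates the symbol at the incoming variable instead:
\bigl(\mathrm{Op}(F)\varphi\bigr)(x)
  = \int\!\!\int F(y,\xi)\,\varphi(y)\, e^{2\pi i (x-y)\cdot\xi}\, dy\, d\xi.
```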
The distributional kernel of the operator Op_{KN}(F) can be written down explicitly. Since an operator on L^2(V) with kernel K is Hilbert-Schmidt if and only if K ∈ L^2(V × V), we immediately deduce that Op_{KN} restricts to a unitary map of L^2(V × V̂) onto HS(L^2(V)). Hence the Hilbert space L^2(V × V̂) can be endowed with an associative product. Very important here are the symmetries of the Kohn-Nirenberg product. We have an action (3.3) of Aff(V) on V × V̂. With |·|_V the modulus function of Aut(V) and |·|_{V̂} the one of Aut(V̂), we observe that |q|_{V̂} = |q|_V^{-1}; we will usually use only the modulus |q|_V in various formulas and write it simply as |q|. The action (3.3) gives rise to a unitary representation π_{V×V̂} of Aff(V) on L^2(V × V̂), and we can also define a unitary representation π_V of Aff(V) on L^2(V). In particular, the operators of the representation π_{V×V̂} of Aff(V) act on L^2(V × V̂) by algebra automorphisms for the Kohn-Nirenberg product.
Proof. Take ϕ ∈ L 2 (V ). Then On the other hand, Using that and that the Haar measure dw dξ of V ×V is invariant under the transformations (w , ξ) → (qw , q ξ), we can write the last integral as and this is equal to (3.6) by translation invariance of the Haar measure dw on V .
Dual cocycles on subgroups of Aff(V )
Let Q ⋉ V ⊂ Aff(V) be a subgroup satisfying the dual orbit Assumption 2.15. Let d_Q(q) be a Haar measure on Q and Δ_Q the modular function. Routine computations show that a left invariant Haar measure and the modular function on G are given, respectively, by d(q, v) = |q|^{-1} d_Q(q) dv and Δ(q, v) = Δ_Q(q)/|q|. The map φ : Q → V̂ is a measure class isomorphism, so we may assume that the Haar measure on Q is normalized in such a way that the G-equivariant linear operator Ũ_φ built from φ, where φ(q) = q·ξ_0, is unitary. Equivalently, our normalization is such that the Haar measure dξ on V̂ is the push-forward under φ of the measure |q|^{-1} d_Q(q) on Q; this is condition (3.9). Note in passing that this uniquely determines the Haar measure on G: if we multiply dv by a scalar, then (3.9) and unitarity of F_V force us to divide d_Q(q) by the same scalar. Therefore we get a unitary quantization map Op : L^2(G) → HS(L^2(V)), and if we denote by π the canonical representation of G ⊂ Aff(V) on L^2(V), that is, π = π_V|_G, then by Lemma 3.3 we have Op(λ_g f) = π(g) Op(f) π(g)^*.
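The following is a sketch of the routine invariance check behind the Haar measure and modular function recorded above, using the group law (q, v)(q′, v′) = (qq′, v + qv′); the normalizing constants are suppressed.

```latex
% Candidate left Haar measure on G = Q \ltimes V and modular function:
d\mu_G(q,v) \;=\; |q|^{-1}\, d_Q(q)\, dv, \qquad \Delta_G(q,v) \;=\; \Delta_Q(q)/|q|.
% Left invariance: under (q,v) \mapsto (q_0 q,\, v_0 + q_0 v) one has
% d_Q(q_0 q) = d_Q(q),  d(v_0 + q_0 v) = |q_0|\,dv,  |q_0 q|^{-1} = |q_0|^{-1}|q|^{-1},
% so the factors |q_0| and |q_0|^{-1} cancel.
% Right translation by (q_0, v_0) instead produces the factor |q_0|/\Delta_Q(q_0),
% which equals \Delta_G(q_0,v_0)^{-1}, as it should for a left Haar measure.
```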
Lemma 3.4. The representation π of G on L^2(V) is irreducible and square-integrable, and the associated Duflo-Moore formal degree operator is given (under the Fourier transform identification of L^2(V) with L^2(V̂)) by M(Δ^{-1} ∘ φ^{-1}). Since Δ is a V-invariant function on G, it is naturally viewed as a function on Q; therefore by M(Δ^{-1} ∘ φ^{-1}) we mean the operator of multiplication by the function ξ ↦ Δ(φ^{-1}(ξ))^{-1}. Proof. It is more convenient to work on L^2(Q) with an equivalent representation π̃. The restriction of the representation π to V is simply the regular representation. It follows that any operator in B(L^2(V)) commuting with π(G) must belong to π(V)″. Passing to the equivalent representation π̃, this means that any operator in B(L^2(Q)) commuting with π̃(G) must be an operator of multiplication by a function in L^∞(Q). In addition this function must be invariant under the left translations on Q, hence it is constant. Thus π is irreducible.
Turning to square-integrability, for ϕ 1 , ϕ 2 ∈ C c (Q), we have with the second equality following from (3.9). If we set then, for every fixed q ∈ Q, the function f q has compact essential support and is bounded.
The Plancherel formula for V gives then This shows that π is square-integrable and the Duflo-Moore operator is the operator of multiplication by the function q → |q|/Δ Q (q). This gives the result.
The next natural question is whether (B(L^2(V)), Ad π) is a G-Galois object, or equivalently, by Theorem 2.13 and Proposition 2.24, whether π is quasi-equivalent to the regular representation. If it is, then we can construct a dual unitary 2-cocycle on G by the procedure described in Section 2.2. Instead of exactly following that procedure, however, we will construct this cocycle directly from the product ⋆ on L^2(G) defined by Op(f_1 ⋆ f_2) = Op(f_1) Op(f_2). According to Remark 2.11 this approach should not necessarily work, but if it does, it has some technical advantages.
Let us start by observing that by definition the algebra (L^2(G), ⋆) is unitarily isomorphic to the algebra HS(L^2(V)) of Hilbert-Schmidt operators. Hence ‖f_1 ⋆ f_2‖_2 ≤ ‖f_1‖_2 ‖f_2‖_2. (3.12) We will need explicit formulas for the product ⋆ on dense subspaces of (L^2(G), ⋆). First, we introduce an auxiliary space. Definition 3.5. Let E(G) be the Banach space completion of C_c(G) with respect to the norm ‖·‖_{E(G)}. Lemma 3.6. For any f_1, f_2 ∈ E(G) and a.a. (q, v) ∈ G, the product f_1 ⋆ f_2 is given by the absolutely convergent integral formula (3.13), and its partial Fourier transform F_V(f_1 ⋆ f_2) by a corresponding formula. Here, following our conventions, F_V : L^2(G) = L^2(Q ⋉ V) → L^2(Q × V̂) is the partial Fourier transform in the V-variables. Note that for this map to be unitary we have to equip Q × V̂ with the measure |q|^{-1} d_Q(q) dξ (which in general is not the Haar measure of the semidirect product Q ⋉ V̂ for the dual action).
Proof. For f ∈ L^2(G), we let K_f ∈ L^2(V × V) be the kernel of the Hilbert-Schmidt operator Op(f); it is computed from (3.1) and (3.10). If f_1, f_2 ∈ E(G), then, since Op(f_1 ⋆ f_2) = Op(f_1) Op(f_2), the product formula for operator kernels gives an expression for K_{f_1 ⋆ f_2}, where the interchange of integrals is justified by Fubini's theorem. Denote by f the function defined by the resulting formula (3.14), with absolutely converging integrals. It is easy to see that f ∈ L^2(G). In particular, we can compute K_f, the kernel of the operator Op(f), which by Fubini (and a simplification of the phases) coincides with the expression obtained above. Hence K_{f_1 ⋆ f_2} = K_f, and (3.13) follows by injectivity of the map f ↦ K_f. To get the second formula in the formulation of the lemma, we apply the partial Fourier transform to (3.14) and use Fubini's theorem one more time. This concludes the proof.
Proof. Since C_c(G) ⊂ E(G), the result follows from formula (3.14), which clearly entails that if f_1, f_2 ∈ C_c(G), then also f_1 ⋆ f_2 ∈ C_c(G).
Next we consider a space of functions with good behavior in the partial Fourier space. Definition 3.8. For a measure space (X, μ), we let L(X, μ) be the subspace of L^∞(X, μ) consisting of functions that are (essentially) zero outside a set of finite measure. We then let FL(G) be the subspace of L^2(G) consisting of functions of the form F_V^* h with h ∈ L(Q × V̂). Lemma 3.9. FL(G) is a subalgebra of (L^2(G), ⋆). Moreover, for any f_1, f_2 ∈ FL(G) and a.a. (q, ξ) ∈ Q × V̂, the function F_V(f_1 ⋆ f_2) is given by formula (3.15). Proof. Let K_j ⊂ Q and K̂_j ⊂ V̂ (j = 1, 2) be Borel sets of finite measure such that F_V f_j is (essentially) zero outside K_j × K̂_j. Then the function defined by the right hand side of (3.15) is zero for (q, ξ) outside K_1 × (K̂_1 + K̂_2). Therefore if (3.15) holds, then f_1 ⋆ f_2 ∈ FL(G).
Turning to the proof of (3.15), we already know from Lemma 3.6 that this identity holds for f_1, f_2 ∈ E(G) ∩ FL(G); this is formula (3.16), valid for almost all (q, ξ). By (3.12) we have h_{1,n} ⋆ h_{2,n} → f_1 ⋆ f_2 in L^2(G) as n → ∞, hence the left hand side of (3.16) (considered as a function of (q, ξ)) converges to F_V(f_1 ⋆ f_2) in the L^2-norm. On the other hand, the right hand side converges to the right hand side of (3.15) by the dominated convergence theorem. Therefore to finish the proof it suffices to show that for every f ∈ FL(G) there exists a sequence of functions h_n ∈ E(G) ∩ FL(G) approximating f in a suitable sense. First of all, if (K_n)_n is an increasing sequence of compact subsets of Q × V̂ with union Q × V̂, then F_V^*(1_{K_n} F_V f) → f in L^2(G), so we may assume that F_V f has compact support. Let K be any compact set whose interior contains the support of F_V f. By Lusin's theorem, we can find a uniformly bounded sequence of continuous functions g_n supported in K such that g_n → F_V f a.e. Then F_V^*(g_n) ∈ FL(G). In a similar fashion, by approximating functions in C_c(Q × V̂) by elements of the algebraic tensor product C_c(Q) ⊗ C_c(V̂), we may assume that f = F_V^*(g ⊗ h) = g ⊗ F_V^* h for some g ∈ C_c(Q) and h ∈ C_c(V̂). Finally, by approximating h by the convolution of two functions, we may assume that f = g ⊗ F_V^*(h_1 * h_2) for g ∈ C_c(Q) and h_i ∈ C_c(V̂). But such a function is already in E(G).
It follows from (3.13) that a candidate for the dual cocycle Ω on G defining the product ⋆ by formula (2.5) is given by the expression (3.17). For the moment this is just a formal expression, but it at least makes sense as a sesquilinear form on C_c(G × G). Our first goal is to prove that Ω makes sense as a unitary operator on L^2(G × G). This will be proven by showing that Ω factorizes as a product of three unitaries.
Next, by the definition of Ω it is clear that Ω commutes with the operators ρ g ⊗ 1 and 1 ⊗ ρ g . Hence Ω ∈ W * (G)⊗W * (G).
Proof. Recall that A(G) consists of functions of the form ϕ 1 * φ 2 , with ϕ i ∈ L 2 (G), which correspond to the linear functionals (· ϕ 2 , φ 1 ) on W * (G). Under the identification Using the initial definition of Ω as a bilinear form on C c (G × G), we get Hence the equality in the formulation of the lemma holds for all f 1 , f 2 of the form ϕ 1 * φ 2 , with ϕ i ∈ C c (G). Therefore in order to prove the lemma it suffices to show that every function f ∈ A(G) ∩ L 2 (G) can be approximated by functions of the form ϕ 1 * φ 2 , with ϕ i ∈ C c (G), simultaneously in the norms on A(G) and L 2 (G).
Consider first a function of the form f = f 1 * f 2 , with f 1 ∈ L 2 (G) and f 2 ∈ C c (G). If ϕ n → f 1 in L 2 (G), ϕ n ∈ C c (G), then ϕ n * f 2 → f 1 * f 2 both in A(G) and L 2 (G).
Consider now an arbitrary f ∈ A(G) ∩ L^2(G). By the previous case, in order to finish the proof it suffices to show that f can be approximated simultaneously in A(G) and L^2(G) by functions of the form f * ϕ, with ϕ ∈ C_c(G). Take a standard approximate unit (ϕ_n)_n in L^1(G) consisting of functions ϕ_n ∈ C_c(G) such that ϕ_n ≥ 0, ∫_G ϕ_n dg = 1, with the supports of ϕ_n eventually contained in arbitrarily small neighborhoods of the unit. Then f * ϕ_n → f in L^2(G). At the same time, if we write f as f_1 * f_2 for some f_i ∈ L^2(G) and use that f_2 * ϕ_n → f_2 in L^2(G), we get f * ϕ_n = f_1 * (f_2 * ϕ_n) → f in A(G) as well. We thus see that A(G) ∩ L^2(G) is a subalgebra of (L^2(G), ⋆). Since A(G) ∩ L^2(G) is dense in A(G), the associativity of the product on this subalgebra implies that Ω satisfies the cocycle identity (2.4).
To summarize, we have proved the following result: the operator Ω defined by (3.17) is a dual unitary 2-cocycle on G. Remark 3.13. If we started from the opposite Kohn-Nirenberg quantization (see Remark 3.1), we would have obtained another dual 2-cocycle Ω′, given by (3.18), which differs from Ω by the inversion of legs and by the sign of the phase; here R is the unitary antipode of W^*(G) given on the generators by R(λ_g) = λ_{g^{-1}}.
By [11, Proposition 6.3, iii)], the dual cocycles Ω and Ω′ are cohomologous, with the unitary operator implementing the cohomological relation given by the product of the two modular conjugations involved, which we will explicitly compute in Section 3.5.
Identification of the Galois objects
To complete the picture it remains to check that the Galois object defined by the dual cocycle Ω is exactly the pair (B(L 2 (V )), Ad π).
For f ∈ L^2(G), consider the operator L_⋆(f) on L^2(G) defined by L_⋆(f)ϕ = f ⋆ ϕ. By (3.12) we have ‖L_⋆(f)‖ ≤ ‖f‖_2. Furthermore, under the identification of (L^2(G), ⋆) with HS(L^2(V)) via Op, the map L_⋆ is simply the left regular representation of HS(L^2(V)) on itself. This representation is a multiple of the canonical representation of HS(L^2(V)) on L^2(V). It follows that there is a unique normal isomorphism L_⋆(L^2(G))″ ≅ B(L^2(V)) compatible with these representations. Therefore in order to find an isomorphism W^*(Ĝ; Ω) ≅ B(L^2(V)) it suffices to find an isomorphism W^*(Ĝ; Ω) ≅ L_⋆(L^2(G))″. From formula (2.7) for the GNS-map on W^*(Ĝ; Ω), for every f ∈ A(G) we have the equality π_Ω(f)ϕ = S L_⋆(f) S ϕ for all ϕ ∈ A(G) ∩ L^2(G) such that the right hand side is well-defined, where S is the unbounded operator defined by Sϕ = ϕ̃. In other words, a corresponding identity holds with S expressed through the unitary operator J defined in (1.1), again for all ϕ ∈ A(G) ∩ L^2(G) such that the right hand side is well-defined.
We thus see that we need to understand a connection between the operators L_⋆(f) and M(Δ^s). For this we will first get another useful formula for L_⋆. First, we denote by U_φ the variant of the unitary operator Ũ_φ, defined in (3.8), without the permutations of variables (formula (3.21)). Then we denote by γ the unitary representation of V̂ on L^2(G) given in terms of the operators τ_ξ, where τ_ξ : L^2(V̂) → L^2(V̂) is the left regular representation of V̂ given by (τ_ξ ϕ)(ξ′) = ϕ(ξ′ − ξ).
Next, for fixed ξ ∈V and f ∈ FL(G), we denote by (F V f )(•, ξ) the V -invariant function on G given by (q, v) → (F V f )(q, ξ) and by M (F V f )(•, ξ) the bounded operator on L 2 (G) of multiplication by the function (F V f )(•, ξ). Lemma 3.9 then leads to the following result.
Lemma 3.14. For any f ∈ FL(G), we have the absolutely convergent (in the operator norm) integral formula Finally, we introduce a family (T z ) z∈C of operators on functions on G by where we remind that Δ(q ) = Δ Q (q )/|q |. We need a dense subspace of FL(G) preserved by these operators: We denote by L 0 (Q ×V ) the union of the spaces L K,L (Q ×V ) and by FL 0 (G) (respectively, by FL K,L (G)) the subspace of FL(G) consisting of functions f ∈ FL(G) such that F V f belongs to L 0 (Q ×V ) (respectively, to L K,L (Q ×V )).
Proof. By definition (3.24), the operator T z conjugated by the partial Fourier transform is the operator of multiplication by the function This immediately shows that FL K,L (G) is stable under T z as the modular function Δ is bounded, as well as bounded away from zero, on any compact subset of Q. Hence FL 0 (G) is also stable under T z .
Since F V is unitary, to prove density of FL 0 (G) in L 2 (G) it suffices to show that For this it suffices to show that for every compact set K ⊂ Q the union of the sets (3.25) over the compact sets L ⊂ Q is a subset of K ×V of full measure. But this is clear, since this union is and by assumption φ(Q) is a subset of V of full measure.
Since the map f →f is bounded on , the functions f for f ∈ FL 0 (G) form a dense subspace of L 2 (G) as well. Hence the functions ϕ * f for ϕ ∈ C c (G) and f ∈ FL 0 (G) are dense in A(G). An easy calculation shows that for any g = (q, v) ∈ G and compacts K, L ⊂ Q, we have λ g (FL K,L (G)) ⊂ FL qK,L (G). Hence, if f ∈ FL K,L (G) and ϕ ∈ C c (G) has support contained in U ×V for a compact set U ⊂ Q, then ϕ * f ∈ FL UK,L (G). Therefore Taking a standard approximate unit in L 1 (G) for ϕ, we see also that functions of the form ϕ * f , for ϕ ∈ C c (G) and f ∈ FL 0 (G), are dense in FL 0 (G), hence in L 2 (G). Moreover, since the operators T z are bounded on the spaces FL K,L (G), we may also conclude that the functions T z (ϕ * f ) are dense in L 2 (G) for all z. In particular, Proposition 3.17. The operator M (Δ) is affiliated with the von Neumann algebra L (L 2 (G)) . Moreover, for all f ∈ FL 0 (G) and z ∈ C, we have Proof. Since Δ depends only on the coordinate Q, the operators M (Δ it ), t ∈ R, commute with the partial Fourier transform F V . From formula (3.15) we see then that As L (FL 0 (G)) is σ-weakly dense in L (L 2 (G)) , this implies that M (Δ) is affiliated with L (L 2 (G)) . The same formula also shows that since Next, formula (3.23) and definition (3.24) of T z give, for f ∈ FL 0 (G), the absolutely convergent integral On the other hand, using again (3.23), we have on Δ (z) L 2 (G): As F V commutes with operators of multiplication by V -invariant functions, from this we easily deduce: Hence we have which by closedness of M (Δ −z ) finally gives This proposition and identity (3.20) imply that for any f ∈ A(G) ∩ FL 0 (G) we have , hence on L 2 (G), as both sides of the identity are bounded operators.
Since this subspace is dense in L^2(G) by Lemma 3.16, it follows that W^*(Ĝ; Ω) = J L_⋆(L^2(G))″ J. Recalling also that the action of G on W^*(Ĝ; Ω) is given by the automorphisms Ad ρ_g, we see that this action transforms under the isomorphism Ad J of W^*(Ĝ; Ω) onto L_⋆(L^2(G))″ into the action given by the automorphisms Ad λ_g. Using the isomorphism (3.19), the latter action transforms, in turn, into the action Ad π on B(L^2(V)). We therefore get the required isomorphism of the Galois objects. Theorem 3.18. For any second countable locally compact group G = Q ⋉ V satisfying the dual orbit Assumption 2.15, the G-Galois object (W^*(Ĝ; Ω), β) defined by the dual cocycle (3.17) is isomorphic to (B(L^2(V)), Ad π); explicitly, the isomorphism maps π_Ω(f) to the corresponding operator on L^2(V). As a byproduct we see that (B(L^2(V)), Ad π) is indeed a Galois object. We remind that by Theorem 2.13 this is therefore the unique up to isomorphism I-factorial Galois object defined by a genuine representation of G.
We are therefore exactly in the situation discussed in Remark 2.11, with identities (2.13) satisfied for c = 1 and α = 1/2. This "explains" why we were able to construct the dual cocycle Ω using the quantization map Op instead of the modified quantization map Op .
Deformation of the trivial cocycle
We continue to consider a second countable locally compact group G = Q V satisfying the dual orbit Assumption 2.15 and a dual unitary 2-cocycle Ω on G defined by (3.17).
By replacing the distinguished point ξ 0 ∈V by ξ q 0 := q ξ 0 we in fact get a family of such dual cocycles Ω q indexed by the elements q ∈ Q. We already know that all these cocycles are cohomologous, since they correspond to the unique Galois object defined by a genuine representation. This is also easy to see as follows.
Proposition 3.20. We have Ω q = (λ q ⊗ λ q )ΩΔ(λ q ) * for all q ∈ Q. In particular, the map q → Ω q is continuous in the so-topology.
Proof. Considering as before the partial Fourier transform F V as a map From this and Lemma 3.10 we get On the other hand, consider the map φ q : Q →V defined by ξ q 0 = q ξ 0 , so that φ q (q ) := q ξ q 0 = φ(q q). Then which shows that Ξ q is exactly the map from the expression for Ω q in Lemma 3.10 (with ξ 0 replaced by ξ q 0 ). This proves the proposition.
Although the dual cocycles Ω q are all cohomologous, under quite general assumptions they can be used to construct a continuous deformation of the trivial cocycle. Namely, we have the following result. Proof. Using the notation from the proof of the previous proposition it suffices to show that Ξ z −1 n → id a.e., or equivalently, φ −1 (ξ 0 + z n ξ) → e for a.e. ξ ∈V . But this is clear, since by assumption the image of φ contains a neighborhood of ξ 0 and φ is a homeomorphism of Q onto its image.
Example 3.22. Consider the ax + b group G over the reals, so that Q = R^*, V = R, and Q acts on V by multiplication. In other words, G is the group of 2 × 2 matrices with rows (q, v) and (0, 1), where q ∈ R^* and v ∈ R. We identify R̂ with R via the pairing e^{ixy}. Then s·t = s^{-1}t for s ∈ Q and t ∈ V̂. Take ξ_0 = −1. We then get a continuous family of cohomologous dual unitary 2-cocycles Ω_θ, θ ∈ R^*, on G. In this case the pointwise convergence θ → 0 in End(V̂) means that θ^{-1} → 0 in R. So by the above proposition we have Ω_θ → 1 as θ → 0, which is obviously the case.
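A quick check of the conventions in this example (the matrix picture filled in above is the standard one for the ax + b group; the pairing normalization e^{ixy} is as stated in the text):

```latex
% Pairing of V = \mathbb{R} with \hat V = \mathbb{R}: \langle t, v\rangle = e^{itv}.
% Dual action: \langle s\cdot t, v\rangle = \langle t, s^{-1}v\rangle = e^{i t s^{-1} v},
% hence s\cdot t = s^{-1} t for s \in Q = \mathbb{R}^*, t \in \hat V, as used above.
% With \xi_0 = -1 the orbit map is \varphi(s) = s\cdot(-1) = -s^{-1}, whose image is
% \mathbb{R}\setminus\{0\}, a set of full measure in \hat V with trivial stabilizers,
% so the dual orbit condition holds.
```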
Multiplicative unitaries
Our next goal is to find an explicit formula for the multiplicative unitary of the twisted quantum group (W * (G), ΩΔ(·)Ω * ) for the dual cocycle Ω defined by (3.17). The main issue is to determine the modular conjugation J for the canonical weightφ on W * (Ĝ; Ω).
It is more convenient to work with the isomorphic Galois object (L (L 2 (G)) , Ad λ). The algebra (L 2 (G), ) becomes a * -algebra if we transport the * -structure on HS(L 2 (V )) to L 2 (G). Since the * -structure on the algebra of Hilbert-Schmidt operators is isometric, the corresponding * -structure on L 2 (G) must have the form f → UJf for a unitary operator U on L 2 (G). In other words, the operator U is defined by the identity Op(f ) * = Op(UJf).
Lemma 3.23. The operator U is given by where U φ is the operator defined by (3.21).
Proof. Consider the unitary Op :
Then by definition Op(f ) is the integral operator with kernel
. In other words, Using that we arrive at Finally, using that U φ =Ũ φ Σ and we get the desired formula. Proof. The proof is similar to that of [5,Proposition 2.8]. Let us start with the canonical weightφ L for the Galois object (L (L 2 (G)) , Ad λ). Note that since M (Δ) is affiliated with L (L 2 (G)) and the function Δ is, up to a scalar factor, the only positive measurable function F on G such that λ g F = Δ(g) −1 F , the isomorphism L (L 2 (G)) ∼ = B(L 2 (V )), where K is the Duflo-Moore operator of formal degree (explicitly given by Lemma 3.4). We have densely defined operators on HS(L 2 (V )) of multiplication on the right by c z K −z (z ∈ C). Correspondingly, we have densely defined operators on L 2 (G), which we suggestively denote by f → f Δ z . Thus, by definition, for f in a dense subspace of L 2 (G). Explicitly, by Proposition 3.17 we have Recalling the description of the GNS-representation for (B(L 2 (V )), Ad π) in Section 2.2, and formula (2.10) in particular, we see that as the GNS-space for φ L we can take L 2 (G), with the GNS-map Λ L : Nφ L → L 2 (G) uniquely determined bỹ Λ L (L (f )) = c 1/2 f Δ −1/2 for f ∈ L 2 (G) such that the right hand side is well-defined. The corresponding modular conjugation J L is simply given by the involution on L 2 (G) ∼ = HS(L 2 (V )), so J L = UJ. Now, using the isomorphism Ad J between W * (Ĝ; Ω) and L (L 2 (G)) , we can consider the space L 2 (G) as the GNS-space for φ using the map Nφ x → JΛ L (J xJ ). (3.28) In this picture the modular conjugation for φ is J UJJ . Therefore we only have to check that the above GNS-map is exactly the map Λ used to define J . Recall that Λ is given byΛ for all such f . By comparing the norms of both sides we can already conclude that c = 1. Then, since any two GNS-representations associated with φ are unitary conjugate and the vectors Λ (π Ω (f )) =f , with f ∈ A(G) ∩ FL 0 (G), form a dense subspace of L 2 (G), it follows that the maps Λ and (3.28) are equal.
As was shown in [11], the unitary operator J J must belong to W * (G). The following makes this explicit.
Lemma 3.25. For any ϕ ∈ C c (G) and g ∈ G we have: Proof. We have, with absolutely convergent integrals: As J ρ g J = λ g , applying this to J ϕ instead of ϕ we get the announced formula.
By [11, Proposition 5.4], we deduce that the multiplicative unitary Ŵ_Ω for the deformed quantum group (W^*(G), ΩΔ(·)Ω^*) is given by formula (3.29). Conjugating by the partial Fourier transform, we get a more explicit formula: for the multiplicative unitary Ŵ_Ω of the deformed quantum group (W^*(G), ΩΔ(·)Ω^*) and any f ∈ L^2(G × G) we have the expression stated there. Proof. We know by Lemma 3.10 how the relevant operator acts for f ∈ L^2(Q × V̂ × G). From this we obtain the corresponding action for f ∈ L^2(G × G). On the other hand, with the help of Lemma 3.25 we can perform calculations similar to those of Lemma 3.10 for f ∈ L^2(Q × V̂, |q|^{-1} d_Q(q) dξ). Combining these computations and simplifying, the expression above becomes exactly what we need.
Recall that in Section 3.4 we considered a continuous family of cohomologous dual unitary 2-cocycles Ω q , q ∈ Q, defined by replacing ξ 0 by ξ q 0 = q ξ 0 .
Corollary 3.27. We have: Q →V is open and z n → 0 in End(V ) pointwise for a sequence of elements Proof. Part (i) follows already from formula (3.29). Indeed, the map q → Ω q is continuous by Proposition 3.20. On the other hand, the unitary U q in formula (3.29) for the dual cocycle Ω q is given by Lemma 3.23, with ξ 0 replaced by ξ q 0 . To be more precise, that lemma is formulated under the assumption that the Haar measure on Q is normalized by (3.9). If we replace ξ 0 by ξ q 0 = q ξ 0 , and, correspondingly, the map φ by φ q (q ) = φ(q q), but want to keep the same measure on Q, then the map is unitary only up to a scalar factor. But this means that for Lemma 3.23 to remain true we just have to state it as the equality As the map q → U φ q is obviously continuous in the so-topology, we conclude that the map q → U q is continuous as well, hence so is the map q →Ŵ Ω q .
Since by (3.30) we have Y = (F V ⊗ F V )Ŵ (F * V ⊗ F * V ), this proves the result.
Stachura's dual cocycle
In this section we consider the simplest example of our setup, the ax + b group G over the reals. In this case Stachura [30] already defined a dual cocycle on G. We refer the reader to his paper for a motivation of the construction and just present an explicit form of the cocycle.
Consider certain operators affiliated with W^*(G); the dual unitary cocycle Ω_S on G found by Stachura, see [30, Lemma 5.6], is defined in terms of them. Proposition 3.28. The dual cocycle Ω_S coincides with the dual cocycle Ω′ defined by (3.18) for ξ_0 = −1. In particular, Ω_S is cohomologous to the dual cocycle Ω defined by (3.17).
Consider now, for fixed s ∈ R, the function ϕ(y) := s + ln|1 + y|. It has two simple zeros located at y_± = ±e^{-s} − 1, and it is continuously differentiable (away from −1) with ϕ′(y_±) = ±e^s. Setting now q = e^s and v = −t, then setting ξ_0 = −1 and remembering that we have here q·ξ = q^{-1}ξ and d(q, v) = (2π)^{-1} q^{-2} dq dv, we finally arrive at an expression which is exactly the dual cocycle Ω′ defined by (3.18) in Remark 3.13. As we already observed there, Ω′ is cohomologous to Ω.
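The elementary facts about ϕ used in this computation are easy to verify:

```latex
\varphi(y) = s + \ln|1+y| = 0 \;\Longleftrightarrow\; |1+y| = e^{-s}
  \;\Longleftrightarrow\; y = y_\pm := \pm e^{-s} - 1,
\qquad
\varphi'(y) = \frac{1}{1+y}, \quad \varphi'(y_\pm) = \frac{1}{\pm e^{-s}} = \pm e^{s}.
```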
As was suggested by Stachura [30], the quantum group (W * (G), Ω SΔ (·)Ω * S ) is isomorphic to the quantum ax + b group of Baaj and Skandalis [29] (see also [33,Section 5.3]), but his arguments fall a bit short of proving that this is indeed the case. The above proposition together with Theorem 4.1 below complete his work.
Bicrossed product construction
Recall that a pair (G 1 , G 2 ) of closed subgroups of a locally compact second countable group G is called a matched pair if G 1 ∩ G 2 = {e} and G 1 G 2 is a subset of G of full measure [3]. Given such a pair, we have almost everywhere defined measurable left actions α of G 1 and β of G 2 on the measure spaces G 2 and G 1 , resp., such that gs −1 = α g (s) −1 β s (g) for g ∈ G 1 , s ∈ G 2 .
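A minimal illustration of this definition, under the assumption that one takes G_1 = Q and G_2 = V inside G = Q ⋉ V with the group law used earlier: the matched pair structure is then the semidirect product itself, so one of the two actions is trivial. This is not necessarily the matched pair relevant to Theorem 4.1; it only illustrates the defining relation.

```latex
% For g = (q,0) \in G_1 = Q and s = (\mathrm{id}, v) \in G_2 = V:
g s^{-1} = (q,0)(\mathrm{id},-v) = (q,-qv) = (\mathrm{id},-qv)\,(q,0)
        = \alpha_g(s)^{-1}\,\beta_s(g),
% so \alpha_q(v) = qv and \beta_v(q) = q, i.e. the action \beta is trivial.
```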
We can then define a bicrossed product Ĝ_1 ⋉ G_2. This is a locally compact quantum group whose function algebra is given by a crossed product construction. The coproduct on L^∞(Ĝ_1 ⋉ G_2) is a bit more difficult to describe, but we will not need to know the exact definition and refer the reader for that to [3] or [33]. Then by [33, Proposition 2.9 and Theorem 2.13] the dual quantum group is G_1 ⋉ Ĝ_2 = Ĝ_2 ⋉ G_1, the bicrossed product defined by the matched pair (G_2, G_1) of subgroups of G.
In a similar way there exist a (unique up to isomorphism) second countable locally compact group G 2 , a right action of G 2 on X and an equivariant measurable map f 2 : X → G 2 such that for almost all pairs (x, y) ∈ X × X we have x * y = yf 2 (x).
In our case we get According to [2,3] we then get a locally compact group G and embeddings h i : G i → G of the groups G i as closed subgroups such that the map G 1 × G 2 → G , (s, g) → h 1 (s)h 2 (g) is injective and the complement of its image is a set of measure zero. By [2, Lemma 3.5(b)], for almost all (x, y) ∈ X × X we have h 2 (f 2 (x))h 1 (f 1 (y)) = h 1 (f 1 (b))h 2 (f 2 (a)), where (a, b) = v(x, y).
Corollary 4.2. The locally compact quantum group (W^*(G), ΩΔ(·)Ω^*) is self-dual. If G is nontrivial, then this quantum group is noncompact and nondiscrete, and if G is nonunimodular (that is, Δ_Q ≠ |·|_V), then the quantum group is also nonunimodular, with nontrivial scaling group and scaling constant 1.
"Mathematics"
] |
Subunits of the H⁺-ATPase of Escherichia coli: OVERPRODUCTION OF AN EIGHT-SUBUNIT F₁Fo-ATPase FOLLOWING INDUCTION OF A λ-TRANSDUCING PHAGE CARRYING THE unc OPERON*
The proton-translocating ATPase complex (F₁Fo) of Escherichia coli was purified after induction of a λ-transducing phage (λasn5) carrying the ATPase genes of the unc operon. ATPase activity of membranes prepared from the induced λ-unc lysogen was 6-fold greater than the activity of membranes prepared from strains lacking the unc-transducing phage, confirming the report of Kanazawa et al. ((1979) Proc. Natl. Acad. Sci. U. S. A. 76, 1126-1130). The F₁Fo-ATPase complex was purified in comparable yield from either enriched membranes or control membranes using a modification of the procedure reported by Foster and Fillingame ((1979) J. Biol. Chem. 254, 8230-8236). Each of the eight subunits that had been reported as components of the F₁Fo complex from wild type E. coli was overproduced in the λ-unc lysogen. All eight subunits co-purified in the same stoichiometric proportion as in the complex purified from wild type E. coli. We conclude that all eight subunits are likely coded by the small segment of chromosomal DNA carried by the λ-transducing phage. These experiments provide the first evidence that all eight polypeptides are authentic subunits of the ATPase complex rather than contaminants that fortuitously co-purify.
translocation of H⁺ across the membrane to the synthesis or hydrolysis of ATP (2, 4). The F₁-ATPase has been purified from several species of bacteria, mitochondria, and chloroplasts (1, 3). It is an extremely complex enzyme, composed of at least five nonidentical subunits in most species and perhaps six subunits in mammalian mitochondria (1, 3). Fragmentary information on the function of different subunits of F₁ has been obtained by biochemical reconstitution experiments with purified subunits from two species of bacteria (2, 5). The Fo sector of the complex has been less thoroughly analyzed than F₁ and the subunits only tentatively identified. The F₁Fo-ATPase complexes purified from the thermophilic bacterium PS3 and Escherichia coli contained three polypeptides in addition to the five which compose F₁ (6, 7). On the other hand, the F₁Fo complex purified from chloroplasts contained four polypeptides in addition to those of F₁ (8). Mitochondrial F₁Fo preparations are considerably more complicated (9-12). A common subunit found in the Fo of all species studied to date is a hydrophobic "proteolipid" protein which is the site of covalent reaction with dicyclohexylcarbodiimide (DCCD), an inhibitor of the proton-translocating activity of Fo (4). A second subunit from the Fo of the thermophilic bacterium PS3 has been implicated in the binding of the F₁-ATPase (13). It remains uncertain whether there are other subunits that are authentic components of Fo and what function these subunits may perform.
The F₁Fo-ATPase of E. coli has been studied extensively in recent years because questions of function are subject to genetic as well as biochemical analysis in this organism (14, 15). All mutations affecting the ATPase complex have mapped at a single locus termed unc, and these genes seem to be organized in an operon (15). Biochemical analysis of mutants altered in different subunits should provide definitive information on the function of each subunit. For example, it has been shown by this approach that both the α and β subunits of F₁ play some role in catalytic activity, since genetic alteration of either of these subunits abolishes ATPase activity (5, 15, 16-20). Similarly, it was through the use of DCCD-resistant mutants that the proteolipid protein of Fo was most clearly shown to be the site of specific DCCD reactivity, the reaction leading to inhibition of both proton translocation and ATPase activity (4, 21, 22). Genetic techniques also provide a means of amplifying the genes coding for the complex. Overproduction of the complex would facilitate its preparation in large quantity. This approach was used by Young et al. (23) to obtain a substantial increase in the level of membrane-associated NADH dehydrogenase.
Miki et al. (24) have described the isolation of a specialized transducing phage, λasn5, which carries chromosomal DNA including the unc operon. Induction of this phage was shown by Kanazawa et al. (25) to result in increased levels of
membrane-associated ATPase activity, and evidence was presented indicating that the five subunits of F₁ were overproduced. Since the ATPase activity was membrane-bound and DCCD-sensitive, it seems likely that at least some components of Fo were overproduced as well as F₁. We have extended this work and demonstrate here that the eight subunits found in the purified F₁Fo-ATPase of E. coli are overproduced during induction of λasn5.
EXPERIMENTAL PROCEDURES
Bacterial and Viral Strains-The following derivatives of strain KH716 [asn-31, thi, and rif] (24) were used. Strain "95 was derived by lysogenizing strain KH716 with λcI857S7. This phage is thermoinducible due to the cI857 mutation and is unable to lyse cells due to the S7 mutation (24). Strain KY7485 (24) was derived from strain KH716 by lysogeny with λcI857S7 and the transducing phage λasn5 (λcI857S7[bglR-C⁺, glmS⁺, uncA⁺, asn⁺]), which carries a segment of DNA including the unc operon. In the text, strain KY7485 will be referred to as the λ-unc lysogen and strain "95 as the λ-lysogen control. Strains AN180 and ML308-225 are nonlysogenic unc⁺ strains which were utilized as sources of F₁Fo and F₁, respectively, as described previously (7).
Growth of Cells and Induction of λ Phages-Cells were grown on minimal medium containing 0.1 M potassium phosphate (pH 7.5), 93 mM NH₄Cl, 0.8 mM Na₂SO₄, 16 mM MgCl₂, and 3.6 μM FeSO₄ supplemented with 78 mM glucose, 14.8 μM thiamin, and 3.8 mM L-asparagine. Asparagine was omitted for growth of strain KY7485. To avoid formation of a precipitate, the concentration of minerals in the medium was initially one-half that stated; the remainder was added 2 h before phage induction. Cells were grown in 10 liters of medium in a 14-liter New Brunswick fermenter, with aeration at 8.5 liters/min and stirring at 400 rpm. Under these conditions, saturation was reached at an A₅₅₀ of 9 units (2.4 × 10¹⁰ cells/ml). Cells were grown at 32°C to an optical density of 3 units (8 × 10⁹ cells/ml). λ phage production was then induced by raising the temperature of the medium to 42°C over a period of 17 min. After 30 min, the temperature was reduced to 37°C (over a period of 3 min) and aeration and stirring were continued for 3 h. Membrane-associated ATPase activity was maximal at this time. Cells were harvested, washed, and stored as described previously (7).
Preparation of the ATPase Complex-Membranes were prepared as described (7), except that 6 mM p-aminobenzamidine was included in all buffers. In order to purify the ATPase complex in high yield from membranes of the induced λ-unc lysogen, the procedure of Foster and Fillingame (7) had to be modified. The modified procedure also proved superior in purifying the ATPase complex from nonlysogenic strains grown on glucose. Membrane at 20 mg of protein/ml in Buffer A (50 mM Tris-HCl (pH 7.5), 5 mM MgSO₄, 1 mM dithiothreitol, 10% (v/v) glycerol, 6 mM p-aminobenzamidine, and 1 mM phenylmethylsulfonyl fluoride) was adjusted to 1 M KCl by the addition of solid KCl and to 0.5% in both sodium deoxycholate and sodium cholate by the addition of 10% (w/v) detergent solutions. The suspension was intermittently mixed for 10 min at 0°C and then centrifuged at 40,000 rpm (193,000 × g_max) for 80 min in a Beckman type 50.2 Ti rotor at 4°C. The clear supernatant solution was immediately diluted by the addition of 1 volume of Buffer A and dialyzed against 50 volumes of Buffer A for 16 h at 4°C with one change of external buffer at 6 h. The resultant turbid suspension was centrifuged as above. The pellet was homogenized in a volume of 100 mM Tris-HCl (pH 7.5), 5 mM MgSO₄, 2 mM dithiothreitol, 10% (v/v) methanol, and 12 mM p-aminobenzamidine equal to one-fifth that used in the extraction of the membrane. This suspension was diluted to 11 mg of protein/ml, adjusted to 0.6% in sodium deoxycholate, and fractionated with ammonium sulfate as described (7), but taking a 25 to 35% cut. The use of methanol rather than glycerol in this step eliminated sporadic problems with floating precipitates and improved the yield. The ammonium sulfate fractionated material was purified by sucrose density gradient centrifugation as described (7), except that 6 mM p-aminobenzamidine and 0.28% (w/v) deoxycholate were included in the gradients.
When the complex was prepared from the λ-lysogens, the ammonium sulfate fractionation described above was omitted. The particulate fraction of the dialyzed membrane extract was resuspended at 20 mg of protein/ml in 50 mM Tris-HCl (pH 7.5), 5 mM MgCl₂, 1 mM dithiothreitol, 10% (v/v) methanol, and 6 mM p-aminobenzamidine and resolubilized by the addition of 10% (w/v) sodium deoxycholate to 1.25% (w/v), 10% (w/v) sodium cholate to 0.5% (w/v), and solid KCl to 0.25 M. After intermittent stirring for 5 min at 0°C, the suspension was centrifuged for 20 min at 50,000 rpm (227,000 × g_max) in a Beckman type 50 Ti rotor at 4°C to remove residual particulate material. The clear supernatant solution was immediately layered onto 10 to 40% (w/v) sucrose gradients and centrifuged as described above for the purification of the complex from nonlysogenic cells. Following centrifugation, the gradients were fractionated into 26 fractions of 0.19 ml following puncture of the tube bottom. The leading portion of the peak of ATPase activity was pooled and reconstituted as described (7).
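The detergent and salt adjustments described above are simple dilution calculations. The following sketch only illustrates the arithmetic; the volumes are hypothetical placeholders (not values from this work), and the small volume change from added solid KCl is ignored.

```python
# Sketch: volume of 10% (w/v) detergent stock and grams of solid KCl needed to bring a
# membrane suspension to the stated final concentrations. Illustrative numbers only.

KCL_MW = 74.55  # g/mol for KCl

def stock_volume_ml(v0_ml: float, stock_pct: float, final_pct: float) -> float:
    """Volume of stock (stock_pct, w/v) to add to v0_ml so the mixture is final_pct (w/v)."""
    # final_pct * (v0 + v_add) = stock_pct * v_add  ->  solve for v_add
    return final_pct * v0_ml / (stock_pct - final_pct)

def solid_kcl_g(v0_ml: float, final_molar: float) -> float:
    """Grams of solid KCl for final_molar concentration in ~v0_ml (volume change neglected)."""
    return final_molar * (v0_ml / 1000.0) * KCL_MW

if __name__ == "__main__":
    v0 = 50.0  # ml of membrane suspension at 20 mg protein/ml (hypothetical volume)
    v_doc = stock_volume_ml(v0, stock_pct=10.0, final_pct=0.5)  # deoxycholate to 0.5% (w/v)
    print(f"Add {v_doc:.2f} ml of 10% deoxycholate per {v0:.0f} ml to reach 0.5% (w/v)")
    print(f"Add {solid_kcl_g(v0, 1.0):.2f} g solid KCl per {v0:.0f} ml to reach ~1 M")
```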
Analytical Procedures and Assays-The procedures described (7) were used without modification. The polyacrylamide slab gels used contained 13% (w/v) acrylamide, 0.41% (w/v) N,N'-methylenebisacrylamide, and 0.2% (w/v) sodium dodecyl sulfate and had dimensions of 1.5 mm × 9 cm × 14 cm. Electrophoresis was carried out with the buffer described previously (21) for 6 h at 20 mA/slab gel.
Chemicals-Sodium deoxycholate was obtained from Calbiochem-Behring (La Jolla, CA). Phenylmethylsulfonyl fluoride was obtained from Sigma (St. Louis, MO). Cholic acid (Sigma) was recrystallized twice from 70% ethanol and neutralized with NaOH. p-Aminobenzamidine dihydrochloride was obtained from Aldrich (Milwaukee, WI).
Footnotes to Table II: Approximately 20% of the total ATPase activity was inactivated during the extraction step, i.e. it could not be accounted for in the recovered fractions. The yield at this point was more typically 60%. Material applied to sucrose gradient. Includes resolubilization of particulate dialyzed extract, ammonium sulfate fractionation, and resolubilization for application to sucrose gradients. Material is not pure.
RESULTS
Gene Dosage-dependent Increase in Membrane ATPase-Kanazawa et al. (25) indicated that lysogeny with the λ-unc-transducing phage resulted in an increase in membrane ATPase activity consistent with that expected due to gene dosage. Under the conditions used here, lysogeny with λ-unc resulted in a 2-fold increase in ATPase activity relative to either a nonlysogenic or nontransducing λ-lysogen control (Table I).
Thermal induction of the λ-unc lysogen resulted in a further 3-fold increase in ATPase activity, whereas no change in membrane ATPase was observed on induction of the nontransducing λ-lysogen control (Table I). These results are consistent with those reported previously (25), but apply to cells grown on glucose minimal medium and on a large scale (Table II). The level of membrane ATPase activity seems to correlate well with the number of copies of the unc operon present per cell.
Purification of F₁Fo from Induced λ-unc Membranes-The F₁Fo-ATPase was purified from the induced λ-unc membranes by a modification of the procedure of Foster and Fillingame (7). In order to minimize any effects arising from phage induction alone, the complex was also purified by this procedure from strain "95, which contained a heat-inducible λ prophage identical with the helper phage in strain KY7485. The complex was purified several times from both types of cells, and the results of a typical purification are summarized in Table II. In order to efficiently solubilize the ATPase complex from induced λ-unc membranes, the detergent concentration used for extraction was increased relative to that described (7). Similarly, the detergent concentration had to be increased to resolubilize, in reasonable yield, the particulate ATPase formed after dialysis. Methanol rather than glycerol was used to stabilize the ATPase during resolubilization since it diminished formation of a precipitate that occasionally floated during centrifugation. The ammonium sulfate fractionation procedure was not used in the modified procedure, since it proved possible to obtain high purity F₁Fo from membranes of induced λ-unc without it, and the yield was poor due to problems in resolubilization. Inclusion of the ammonium sulfate fractionation step was necessary in order to obtain high purity F₁Fo from membranes of the induced, nontransducing λ control. This is indicated by the lower specific activity in Table II and by analysis on acrylamide gels as discussed below.
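The quantities reported in a purification table of this kind (Table II) are derived from two measurements per step, total activity and total protein; the arithmetic is summarized below. All numbers in the sketch are made-up placeholders, not the values of Table II.

```python
# Sketch: specific activity, yield, and fold purification for a purification table.
# The numbers are illustrative placeholders, not data from this work.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    total_activity_units: float  # e.g. umol Pi released / min
    total_protein_mg: float

    @property
    def specific_activity(self) -> float:
        return self.total_activity_units / self.total_protein_mg

def report(steps: list[Step]) -> None:
    start = steps[0]
    for s in steps:
        yield_pct = 100.0 * s.total_activity_units / start.total_activity_units
        fold = s.specific_activity / start.specific_activity
        print(f"{s.name:<24} {s.specific_activity:7.2f} U/mg  "
              f"yield {yield_pct:5.1f}%  purification {fold:4.1f}x")

if __name__ == "__main__":
    report([
        Step("membranes", 1000.0, 500.0),
        Step("detergent extract", 800.0, 250.0),
        Step("sucrose gradient pool", 400.0, 40.0),
    ])
```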
Subunits Overproduced in Induced λ-unc-The subunit compositions of the purified ATPase preparations described in Table II were compared by SDS-polyacrylamide gel electrophoresis (Fig. 1). The F₁Fo complex prepared from induced λ-unc by the abbreviated procedure contained eight subunits which migrated identically with those found in the complex purified from a nonlysogenic strain by the original procedure of Foster and Fillingame (7). Furthermore, the relative proportions of each subunit, as judged by the staining intensity, were very nearly equal in the λ-unc and wild type preparations, suggesting a constant stoichiometric relationship. When the complex was prepared from the induced, nontransducing λ control by the abbreviated procedure (Table II), these eight subunits were found as major components in the same relative proportion (Fig. 1). However, other contaminants were also observed, the most prominent being polypeptides with apparent molecular weights of 76,000, 26,000, and 15,000. The relative amount of these contaminants in the induced λ-unc preparation was negligible because of the 6-fold increase in F₁Fo subunits relative to general membrane proteins.
(Fig. 1 legend, fragment: "…19,000, and a true molecular weight of 8,400, respectively; a, d, and e indicate contaminants with apparent molecular weights of 76,000, 26,000, and 15,000.")
The results described above indicate that the eight subunits found in F₁Fo preparations of high purity are all overproduced in equal proportion in the induced λ-unc strain. This was directly demonstrated for most of the subunits by analysis of membranes and partially purified fractions (Fig. 2). Increases in the intensity of subunits in the region of the γ, δ, and ε subunits are also apparent, but their identification is more tenuous due to other intensely staining polypeptides in these regions. However, the overproduction of all eight polypeptides is apparent in the detergent-solubilized fraction from the membrane. The induced λ-unc membrane shows an enrichment for several polypeptides other than the eight cited above, the most obvious having apparent molecular weights of 31,000, 20,000, 15,000, and 11,000. These are probably membrane proteins coded for by the segment of transducing DNA carried on the phage, which corresponds to the region between bglB and asn on the E. coli chromosome. The M_r = 31,000 protein was extracted from the membrane by the procedure used to solubilize the DCCD-sensitive ATPase complex, but did not co-purify with the F₁Fo complex during sucrose gradient centrifugation (Fig. 2).
(Fig. 2 legend: lanes 1 and 10, F₁, 3 μg; lanes 2 and 9, F₁Fo purified by complete procedure, 5 μg; lanes 3 and 4, membranes of λ-unc or control, 40 μg; lanes 5 and 6, particulate dialyzed extract of detergent-solubilized ATPase from λ-unc or control, 25 μg; lanes 7 and 8, sucrose gradient pool of F₁Fo from λ-unc or control, 5 μg. Greek letters refer to F₁ subunits; 24K, 19K, and 8K refer to Fo subunits; arrows point to induced proteins of unknown identity in λ-unc membranes with apparent molecular weights of 31,000, 20,000, 15,000, and 11,000.)
(Footnote: The overproduction of the γ subunit in the membrane fraction was masked by an intensely staining band of outer membrane protein which migrated to this position. This outer membrane protein migrates anomalously in SDS gels, occasionally to a position of higher apparent molecular weight (26). On the several occasions in which this outer membrane protein migrated with decreased mobility, the overproduction of the γ subunit in the membrane fraction has also been obvious.)
In our earlier report (7), a polypeptide with an apparent molecular weight of 14,000 co-purified with the FIFO-ATPase complex prepared from cells grown on succinate/acetate/malate and could not be ruled out as a possible component of the FIFO complex in cells grown under these conditions. When The overproduction of the y subunit in the membrane fraction was masked by an intensely staining band of outer membrane protein which migrated to this position. This outer membrane protein migrates anomalously in SDS gels, occasionally to a position of higher apparent molecular weight (26). On the several occasions in which this outer membrane protein migrated with decreased mobility, the overproduction of the y subunit in the membrane fraction has also been obvious. trol, 25 pg; lanes 7 and 8, sucrose gradient pool of FIFO from X-unc or control, 5 pg. Greek letters refer to Fl subunits; 24K. 19K. and 8K refer to Fo subunits; arrocus point to induced proteins of unknown identity in X-unc membranes with apparent molecular weights of 31,000,20,000, 15,000, and 11,OOO.
FIFO was purified from the X-unc lysogen grown on these carbon sources, overproduction of a polypeptide of this molecular weight was not detected.
Energy-transducing Activities of the Purified F1Fo Preparations - The F1Fo-ATPase purified from the induced λ-unc strain exhibited all of the energy-transducing properties of the complex purified by our original procedure (7). The ATPase extracted from induced λ-unc membranes exhibited equivalent sensitivity to DCCD, i.e. >70% inhibition of ATPase activity at all stages of purification. At least 80% inhibition of ATPase activity by DCCD was observed with the purified, reconstituted complex. The F1Fo from the induced λ-unc lysogen exhibited ATP-dependent quenching of quinacrine fluorescence, which is a qualitative index of ATP-coupled proton translocation. The Fo sector prepared from the induced λ-unc lysogen by the method of Negrin et al. (22) demonstrated proton translocation activity (per microgram of protein) equivalent to that of Fo prepared from a nonlysogenic wild-type strain.
DISCUSSION
Purification of an F1Fo-ATPase from E. coli was previously reported to yield a complex composed of eight nonidentical polypeptides. The question remained as to whether all eight components were authentic subunits of the complex. Here we have shown that all eight components co-purify in constant stoichiometric proportion from membranes enriched 6-fold for the ATPase complex. This result by itself strongly suggests that each of the eight subunits is an unc gene product. However, one could envision such a result if one of these polypeptides were a contaminant coded for elsewhere in the chromosome, provided that this contaminant was normally produced in a 6-fold or greater excess over the ATPase complex. This possibility can be dismissed since, on comparing membranes from the induced λ-unc strain to control membranes, we observed an obvious increase in intensity of polypeptides corresponding in molecular weight to α, β, γ, 24,000, 19,000, and 8,400 (DCCD-binding protein). Subunits δ and ε also appeared to be overproduced in the membrane, although their identification was less certain. An unlikely possibility that cannot be ruled out is that the 18-megadalton segment of DNA (0.65% of the E. coli chromosome) carried on the λ phage codes not only for the ATPase subunits but also for a contaminant that fortuitously co-purifies with the complex.
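As a rough arithmetic check of the 0.65% figure, one can compare the mass of the transducing segment with that of the whole chromosome; the chromosome size and the mass per base pair used below are textbook assumptions, not values given in the paper.

```latex
% Back-of-envelope check (assumed values: ~4.6 x 10^6 bp chromosome,
% ~660 daltons per base pair; neither figure is stated in the paper):
\[
  \frac{18\times 10^{6}\ \text{Da}}
       {4.6\times 10^{6}\ \text{bp}\times 660\ \text{Da/bp}}
  \;\approx\; \frac{18\times 10^{6}}{3.0\times 10^{9}}
  \;\approx\; 0.6\%,
\]
% consistent with the quoted 0.65% of the E. coli chromosome.
```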
These experiments suggest but do not prove that all eight subunits are coded by genes at the unc locus. Conceivably, a regulator produced at the unc locus could promote overproduction by genes at another chromosomal location. However, in considering this possibility, it should be noted that mutations affecting the ATPase complex have never been mapped in chromosomal locations other than unc (15).
The eight-subunit ATPase complex has been demonstrated to be active in several properties indicative of its function in oxidative phosphorylation. These include 32Pi-ATP exchange (7), ATP-driven proton pumping as judged by quinacrine quenching (7), and Fo-mediated proton translocation (22). Despite these demonstrations, it is possible that the complex is composed of more than eight subunits in vivo, some of which are lost during purification. Several other membrane proteins were induced during λ-unc prophage induction. The Mr = 31,000 protein, which was solubilized with the ATPase complex, likely corresponds to a DNA-binding protein of the outer membrane that has been shown to be coded for by this segment of DNA (27). One of the others may be a β-glucoside transport protein (Enzyme II) coded for by bglC, which is also carried on the λ-unc DNA (24, 28).
Other workers have recently reported DCCD-sensitive ATPase preparations of E. coli. The preparation of Friedl et al. (29) is very similar to that discussed here, but does contain components which we think correspond to the Mr = 76,000 and 26,000 contaminants discussed in Fig. 1 and in the original purification paper (7). This preparation exhibited energy-transducing activities similar to those described above (29). Friedl et al. (29) have cited preliminary findings indicating that the major subunits in their preparation can be synthesized in vitro from the DNA of an independently constructed λ-unc-transducing phage, but the data supporting this claim have yet to be published. The composition of the preparation reported by Rosen and Hasan (30) differs significantly from that discussed here and from that reported by Friedl et al. (29). Their preparation seemed to lack not only the Mr = 24,000 and 19,000 subunits of Fo, but also the δ subunit of F1. The sensitivity of this ATPase preparation to inhibition by DCCD was greater than that of F1 but less than that of membranes. However, due to its wide reactivity, DCCD is not an entirely specific inhibitor (4), and it will under appropriate conditions inhibit the activity of the F1-ATPase (31). Energy-transducing activities were not demonstrated with this unusual preparation.
Prior to this work, there was little question that the α-ε subunits of F1 and the Mr = 8,400 subunit (DCCD-reactive proteolipid) were true components of F1Fo. The results presented here provide strong prima facie evidence that the Mr = 24,000 and 19,000 polypeptides are also true subunits. It is of interest that Kagawa and co-workers (6, 32) observed two subunits other than the proteolipid and those of F1 in their F1Fo and Fo preparations from the thermophilic bacterium PS3.
However, one of these subunits was not required in what appeared to be a fully reconstituted F1Fo complex (13). Unequivocal proof that both the Mr = 24,000 and 19,000 subunits are essential to the function of Fo will require more refined genetic and/or reconstitution experiments analogous to those reported for the α and β subunits (uncA and uncD genes) (5, 15-20). | 5,065.2 | 1980-12-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Virtue in Medical Practice: An Exploratory Study
Virtue ethics has long provided fruitful resources for the study of issues in medical ethics. In particular, study of the moral virtues of the good doctor, like kindness, fairness and good judgement, has provided insights into the nature of medical professionalism and the ethical demands on the medical practitioner as a moral person. Today, a substantial literature exists exploring the virtues in medical practice and many commentators advocate an emphasis on the inculcation of the virtues of good medical practice in medical education and throughout the medical career. However, until very recently, no empirical studies have attempted to investigate which virtues, in particular, medical doctors and medical students tend to have or not to have, nor how these virtues influence how they think about or practise medicine. The question of what virtuous medical practice is, is vast and, as we have written elsewhere, the question of how to study doctors' moral character is fraught with difficulty. In this paper, we report the results of a first-of-a-kind study that attempted to explore these issues at three medical schools (and associated practice regions) in the United Kingdom. We identify which character traits are important in the good doctor in the opinion of medical students and doctors and identify which virtues they say of themselves they possess and do not possess. Moreover, we identify how thinking about the virtues contributes to doctors' and medical students' thinking about common moral dilemmas in medicine. In ending, we remark on the implications for medical education.
Introduction
One of the central questions in medical ethics is how best to understand what characterises good medical practice or the right medical decisions. Consequentialism in medical ethics sees good practice as consisting in securing good outcomes for patients and society and deontologism sees it as practicing in accordance with ethical rules or principles. By contrast, virtue ethics sees good practice as practice that results from the virtuous moral character of the doctor. As a distinctive approach to medical ethics, virtue ethics investigates how the doctor's good moral character enables them to promote the good for the patient.
Today, a substantial literature explores medical ethics from a virtue perspective (see, for instance, Drane 1995; Pellegrino and Thomasma 1993; Oakley and Cocking 2001; Toon 2014) and, more recently, virtue has received increasing attention in fields like medical education (Eckles et al. 2005; Coulehan 2005; Bryan and Babelay 2009; Strachan 2015), medical management (Dawson 2009) and medical leadership (Conroy 2010). Within this literature, a signal debate concerns whether virtue ethical approaches are better equipped than (in particular) deontological or rule-based approaches to provide doctors with practical ethical guidance. Advocates of rule-based approaches emphasise the advantages of summarising the main principles of ethics in a compact system (like the famous four principles of medical ethics of Beauchamp and Childress (1979): beneficence, non-maleficence, justice and autonomy). By contrast, virtue ethicists hold that these principles are too abstract to be of use in the context of the practice of medicine.
Medical virtue ethicists commonly advance three reasons as to why virtue ethics provides a more realistic, practice-focussed way to understand good medical practice than rule-based approaches. A first reason is that rules or principles by themselves are too abstract and general to guide moral action (Pellegrino and Thomasma 1993, p. 19). Rules or principles need to be interpreted in context and, to do that, virtue ethicists stress that the good doctor must acquire virtues such as perceptiveness and good moral judgement. A second reason is that rules or principles typically set a minimum standard for what is to count as good practice and risk encouraging an attitude of mere compliance with such standards. By contrast, virtue-based accounts of medical ethics are 'excellence-oriented' (Barilan and Brusa 2012, p. 5). They are concerned with how the personal virtues demonstrated by the doctor in their work may promote the patient's good to the fullest extent possible (the Greek word for virtue, aretē, means excellence). Thirdly, many authors note the similarities between wise ethical judgement in medicine and the real practice of 'clinical judgement'. Authors such as Kaldjian (2010) and others hold that virtue theory is better equipped than principles-based thinking to make sense of the complex weighing up of goals, goods and options that characterise real clinical judgement, because of a focus on wisdom and good judgement (phronesis). For its advocates, virtue-based thinking ties medical ethics more closely to the ideal of medical practice compared to deontological or consequentialist thinking.
While it is no doubt influential in theory, empirical study of virtue in medicine is in its infancy, with only a small body of empirical work in existence (e.g., Schulz et al. 2013; Carey et al. 2015). In fact, a recent review found that empirical study of medical ethical decision-making is still completely dominated by rule-based approaches. In particular, no previous study has considered systematically: 1. what virtuous character amounts to in medicine; 2. how a doctor's character influences their practice; and 3. how character develops through medical education. This paper reports results from a first-of-a-kind empirical study on virtue in medical education and practice that explored answers to these questions.
Methods
During 2013-15, we conducted an exploratory, mixed-methods, cross-sectional study of the role of character in ethical medical practice. By conducting a cross-sectional study, we aimed to compare the views of respondents at three different career stages: students at the beginning of their medical degree ('undergraduates'), students about to graduate and begin hospital practice ('graduates') and experienced doctors (doctors with at least 5 years of experience).
We designed an electronic survey to answer three questions: • Which character traits are important in the practice of medicine according to medical students and doctors? • How do the virtues influence medical students' and doctors' thinking about common moral dilemmas in medicine? • How are medical students' and doctors' views about character influenced by institutional and social contexts? (Reported separately.) Because of the acknowledged difficulty in studying virtue psychologically (see Curren and Kotzee 2014; Kotzee and Ignatowicz 2016) we also conducted semi-structured interviews with a sample of survey participants.
The electronic survey comprised five sections:
1. Respondents' views of their own character: Respondents were presented with the list of the 24 character strengths of the Values in Action Inventory of Strengths (VIA-IS) (Peterson and Seligman 2004) and were instructed to select the six which 'best describe the sort of person you are'.
2. Responses to a set of moral dilemmas in medicine: Respondents were presented with six situational judgement tests derived from common dilemmas in the literature and designed by a panel of experts (n = 15) in medical education. Participants were presented with a dilemma and instructed to choose one of two alternative courses of action. Once a course of action was chosen, participants were presented with six possible justifications for their chosen action and instructed to rank the best three (with no clicking back allowed). The reasons for action always included four written from the perspective of virtue ethics, one written from a consequentialist perspective and one written from a deontological perspective. (The full set of dilemmas is presented below; a sketch of how such ranked responses might be encoded for analysis is given after this list.)
3. Views on the character of the 'ideal' professional in their profession: Respondents were presented with the list of the 24 VIA-IS character strengths again, and instructed to 'choose the six which you think best describe a good doctor'.
4. Respondents' views regarding their work or study environment: this section adapted questions from a Europe-wide workplace survey (the Eurofound Working Conditions Survey, 2012) with additional questions on ethical issues in the workplace. (Reported separately.)
5. A set of demographic questions.
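As a minimal sketch of how a single situational-judgement response (a chosen action plus a ranking of three of the six reasons) might be encoded against a reason-to-concern mapping of the kind the expert panel produced: the tags, weights and reason numbers below are our own illustrative assumptions, not the study's actual instrument or coding scheme.

```python
from collections import Counter

# Hypothetical mapping for one dilemma: each reason is tagged as expressing a
# virtue, a rule-based concern, or a consequence-based concern (tags invented).
REASON_TAGS = {
    1: "consequences",
    2: "virtue:judgement",
    3: "virtue:kindness",
    4: "virtue:prudence",
    5: "virtue:fairness",
    6: "rules",
}

def tally_response(ranked_reasons, weights=(3, 2, 1)):
    """Convert a respondent's top-three ranking of reasons into weighted
    counts of the concerns those reasons express (weighting is arbitrary)."""
    tally = Counter()
    for reason_id, weight in zip(ranked_reasons, weights):
        tally[REASON_TAGS[reason_id]] += weight
    return tally

# Example: a respondent ranks reason 6 first, then reason 3, then reason 1.
print(tally_response([6, 3, 1]))
# Counter({'rules': 3, 'virtue:kindness': 2, 'consequences': 1})
```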
Participants
Participants were recruited at four sites. These sites clustered around one medical school in each of the South of England, the English Midlands, the North of England and Scotland. First year students were surveyed on starting their medical degree and final year students were surveyed shortly before graduation. Practising doctors in the four regions were recruited by email contact through the local organisations of a number of Medical Royal Colleges. Table 1 summarises the number of survey respondents by career stage and Table 2 summarises the gender distribution of survey respondents.
Amongst experienced doctors, 65.7 % were general practitioners and 19.7 % were hospital doctors in the physicianly specialities.
Data Analysis
Survey data were collected using the e-survey and interview data were collected during face-to-face interviews and audio recorded. Survey data were transferred to SPSS version 21, checked, cleaned and readied for analyses. Analyses included descriptive analysis, cross-tabulation, correlation and factor analysis. New analyses were also developed to deal specifically with the results of sections 1 and 3 (respondents' views on character) and section 2 (moral dilemmas).
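The study used SPSS for these analyses; purely as an illustration of the descriptive and cross-tabulation steps described above (variable names and values below are invented), a sketch in Python/pandas might look as follows:

```python
import pandas as pd

# Hypothetical survey extract: one row per respondent.
df = pd.DataFrame({
    "cohort": ["undergraduate", "graduate", "experienced", "graduate"],
    "gender": ["F", "M", "F", "F"],
    "reports_kindness": [True, False, True, True],   # picked kindness as a top-6 strength
    "reports_humour":   [False, True, True, False],  # picked humour as a top-6 strength
})

# Descriptive analysis: how often each strength is self-reported, by cohort.
print(df.groupby("cohort")[["reports_kindness", "reports_humour"]].mean())

# Cross-tabulation: gender against self-reported kindness.
print(pd.crosstab(df["gender"], df["reports_kindness"], normalize="index"))
```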
Interview data were analysed in NVivo 10. Thematic analysis was conducted using the constant comparative method.
Limitations of the Study
Some limitations exist that need to be kept in mind when interpreting the results.
The study was cross-sectional, not longitudinal. Respondent membership was not constant across the three career-stages studied and, therefore, questions may be raised about whether the three cohorts are exactly comparable.
There is a high likelihood that response bias influenced at least some of the results. Because participation in the study was voluntary, one could say that only those participants who were disposed favourably enough to the topic of virtue in medicine responded. Consequently, the survey and interviews may represent only the views of a self-selected group of people and not a perfectly unbiased sample. Approximately 1400 medical students were invited by email to complete the e-survey and 274 completed the survey, an approximate response rate of 19.5 %.
A last limitation concerns the make-up of the cohort of experienced professionals in the sample. Many more General Practitioners participated than members of any other specialty. Amongst hospital-based doctors far more members of the physicianly than surgical specialties participated in the survey. Because the invitation to participate in the survey was distributed mostly through electronic newsletters it is hard to calculate a participation rate.
Ethical Considerations
The study received ethical approval from the University of Birmingham Ethics Committee. An information leaflet was given to potential participants including full information about the study. Participation was on an 'opt in' basis and confidentiality was protected by anonymising survey responses and interview transcripts. Participants had the right to withdraw from the study up to six months after data collection.
Personal and Professional Virtues
In section 1 of the survey, participants were presented with a list of the 24 character strengths included in the Values in Action Inventory of Strengths (VIA-IS) (Peterson and Seligman 2004). The VIA-IS is the most frequently used psychometric instrument to profile a person's character and participants in the e-survey were asked to identify and rank the six character strengths that they think best represent their personal character. There was strong agreement between doctors and medical students as to what character strengths they possess and five strengths out of the 24 featured in the most frequently picked strengths list for all three of the cohorts. These were the character strengths of: fairness, honesty, kindness, perseverance and teamwork. Differences emerged with experienced doctors and graduating students reporting that they possess the strength of humour more frequently than first-year undergraduates (p < 0.05). Women were more likely to report kindness as a personal strength, while men were more likely to report humour as a personal strength (p < 0.05). There was even greater agreement between the three groups regarding the character strengths they expect in the good doctor. Section 3 of the survey presented participants with the same list of 24 character strengths and asked them to rank the six that they think best characterise a good doctor. Across all three cohorts, participants in the e-survey mentioned the following strengths most frequently as the character strengths that are required in a good doctor: fairness, honesty, judgement, kindness, leadership and teamwork. Some gender differences emerged as to what respondents reported about the character of the good doctor with women more likely to report judgement, kindness and leadership as strengths needed by the good doctor than men (p < 0.05).
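The paper does not state which significance test underlies the reported p-values; as an illustration only of how a gender comparison of this kind could be tested, one might run a chi-square test on a 2x2 table of counts (the counts below are invented):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = gender (F, M), columns = whether 'kindness'
# was picked as a personal strength (yes, no). Numbers are illustrative only.
table = [[90, 60],   # women: picked / did not pick kindness
         [45, 75]]   # men:   picked / did not pick kindness

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# A p-value below 0.05 would support a gender difference of the kind reported.
```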
Placing these results side-by-side, we can conclude that, as a group, the medical students and doctors who responded to the survey held that the good doctor is a person who is: • fair, • honest, • kind, • a good leader, • a good team player, and • a person with good judgement.
As a group, the respondents were likely to hold of themselves that they possess four of these same strengths, presenting themselves as: • fair, • honest, • kind and • good team players.
Yet, respondents to the survey were more likely to report that • leadership and • judgement are strengths that the good doctor should possess than they were to report those as strengths they possess themselves (p < 0.05).
We found it interesting that participants seemed to recognise the importance of judgement and leadership to the good doctor, but were hesitant to describe themselves as having leadership and/or judgement and followed this matter up qualitatively. A number of interviewees commented specifically on how difficult it is to have good judgement. An experienced doctor described it thus: Well, judgement you need all the time. You've got your problem-solving, you've got to balance risk and benefits, you've got to balance how you communicate with patients, you've got to balance your time and effort, you've got to balance resources. Your whole day is about making judgements in all those different areas.
Of the complexity involved, one undergraduate student reported: 'I think judgement may be actually - may be making a concise decision in a certain amount of time - it's difficult isn't it. I think I'd need a bit of practice with that.' The main reason respondents may have been hesitant to report that they have good judgement is a realisation of how difficult it is for any person to have good judgement. Indeed, this confirms in miniature a central idea in the virtue ethics literature: that, next to the individual moral virtues, a further virtue, phronesis (mostly translated as practical wisdom), is needed to coordinate moral feelings and impulses into action. Moreover, phronesis is particularly hard to acquire (see Pellegrino and Thomasma 1993, pp. 85-89; Kaldjian 2010, pp. 558-562; Kristjansson 2015, pp. 313-315). What our respondents understood regarding 'judgement' applies quite well to what virtue ethicists mean by phronesis.
Turning to leadership, the follow-up interviews revealed something similar. One experienced doctor in the sample explained well what a weighty matter leadership is: 'I mean, yes, there are other colleagues you can ask but basically, you know, your patients are your patients, really, and you have to be prepared to -the buck stops with you.' Other experienced doctors stressed how leadership in medicine is a developing quality and one that needs ongoing attention: 'I think that is the thing, being a doctor you are never a finished product and I think obviously your life experiences influence you. So yeah, I view myself as an evolving beast really… Particularly leadership skills and things like that.' Another held: 'I suppose developing leadership, that's something that comes with experience and confidence probably, that's probably something that's still ongoing.' Amongst the medical students interviewed, many held that they had not really had the chance to develop or demonstrate leadership qualities (at least in the medical setting).
'There are certain things that I will need to maybe improve more on, project leadership skills, but mainly because I haven't actually been in a position where I've had to lead…' 'I'm not very good at leadership at the moment so I'm working on that one…' Together, the character trait of being a good leader was, for our respondents, both something that is difficult, but also needs constant development over time.
Virtues and Moral Dilemmas in Medicine
We have seen that the virtues identified by respondents as most important to good medical practice are: fairness, honesty, judgement, kindness, leadership and teamwork. How do these virtues influence doctors and medical students' moral decision-making in practice?
To examine this issue, we designed six 'situational judgement tests' to study the role of character in moral dilemma situations in medicine. The situational judgement tests, we hoped, would give us an insight into: (a) which character strengths are important in dilemma situations in medicine, and (b) how they interact with each other and with other factors, such as explicit rules for medical practice and the consequences of certain decisions. The dilemmas used and summary results are presented in Table 3 (below).
How Did Our Participants Solve the Dilemmas?
Some dilemmas were solved similarly by the great majority of respondents. The best example is dilemma 4. In this dilemma, 84 % of respondents indicated they would not pursue a personal relationship with a patient and amongst those who selected this course of action, the most frequently cited justifications for why they would not pursue such a relationship were that (i) it is important to preserve a professional doctor-patient relationship and (ii) that this is what is suggested by the General Medical Council's ethical guidelines Good Medical Practice (General Medical Council 2013). In designing this dilemma, the expert design panel were of the view that providing justifications like these would indicate a concern with the virtue of prudence and with a rule-based consideration in how respondents solved the dilemma. A far smaller proportion of respondents (16 %) indicated that they would pursue a relationship with a patient in these circumstances. The two most frequently cited reasons as to why respondents would pursue such a relationship were that, (i) in this particular setting, everyone one meets would be a patient and (ii) that the doctor had known the patient personally before the consultation. The expert panel were of the view that solving the dilemma in this way would indicate a concern with perspective and judgement.
Other dilemmas seemed to polarise opinions much more. In dilemma 3, for instance, 60 % of respondents reported that they would disclose the HIV status of the wife to the husband and 40 % reported that they would not disclose this information. Respondents who indicated that they would disclose the information cited as reasons (i) the need to protect the health of the husband, (ii) that this was the accepted protocol in situations like this and (iii) that it would be unfair not to inform the husband. The expert panel were of the view that solving the dilemma like this would indicate a concern with kindness, with what the rules demand, and fairness. Respondents who indicated that they would not disclose the information cited as reasons (i) the need to protect the wife's confidentiality, (ii) the need to respect her wishes, and (iii) the need to retain her trust. The expert panel were of the view that solving the dilemma like this would indicate a concern with what the rules demand and with kindness and perspective.
While dilemma 3 seemed to polarise opinion the most, it is important to note that there was no very great difference between how the three cohorts solved the dilemma in that roughly similar proportions of each cohort reported that they would either disclose or not disclose the wife's HIV status. In this respect, dilemmas 3 and 4 (already mentioned) and dilemma 6 are similar. In each of these dilemmas, there was broad agreement between the three career stages over how to solve the dilemmas, with similar proportions in each group preferring the same decision. This creates an impression of homogeneity across the three cohorts and leads one to think that there is a high degree of shared perspectives between the three cohorts.
Dilemmas 1, 2 and 5 are different, however. In these dilemmas it is easy to spot one cohort that disagreed quite markedly from the other two cohorts as to the best decision. In dilemma 1, many more of the starting undergraduate students (37 %) responded that they would admit the patient to hospital than graduating students (17 %) or experienced doctors (10 %); and in dilemma 2, many more of the starting undergraduate students (48 %) reported that they would not perform the transfusion when compared to graduating students (32 %) and experienced doctors (31 %). In dilemmas 1 and 2 it seems as if the graduating students' views have aligned quite decisively with those of the experienced doctors, while many of the starting undergraduates still have a different view. In short, the 4-5 years of medical education that the graduate group have experienced seems to have had an impact on their moral thinking. In dilemma 5, in contrast, the views of the undergraduate students aligned most closely with the experienced doctors, with 58 % of undergraduate students and 52 % of experienced doctors saying that they would report their colleague to the supervising consultant, but 85 % of graduating doctors saying they would deal with the matter privately. This result was followed up qualitatively and it was found that the largest group of graduating students in our sample (representing the class at one university) had only recently received teaching in ethics and professional conduct in which a case like this was discussed and in which advice was given to try to deal with a matter like this collegially first. While dilemmas 3, 4 and 6, then, showed no great differences of views between the cohorts, such differences were strongly in evidence in dilemmas 1, 2 and 5. From analysis of these dilemmas, one can conclude that the process of medical education does shift participants' views on how to handle some frequently encountered ethical dilemmas in medicine. How this process works is a matter that deserves further study.
Which Character Traits are Important?
In designing the moral dilemmas, the research team had hoped to be able to answer questions like: (a) which character strengths are important in dilemma situations in medicine; (b) why or how they are important; and (c) how considerations to do with character interact with explicit rules for medical practice and the consequences of certain decisions.
In all of the six dilemmas, participants were asked to say what they would do in the dilemma situation and then to rank the three best reasons that they saw for acting in that way. The expert panel helped the researchers analyse the results by providing a judgement that, if a respondent were to give a certain reason for acting in a certain way, this would indicate a certain concern on their part. For instance, in dilemma 1, the following reasons were among those given as to why one may choose not to admit the patient, Mr G., to hospital:
Reason 1: 'Trying to treat Mr G. against his own wishes isn't the best use of the hospital's resources.'
Reason 5: 'This is the kindest option for Mr G.'
Reason 6: 'Professional guidance states that if the patient is capable you should comply with their wishes.'
The expert panel identified reason 1 as indicating a concern with the consequences of one's actions, reason 5 as indicating a concern with the virtue of kindness, and reason 6 as indicating a concern with what the rules or explicit guidance have to say regarding some matter.
In completing this 'mapping' of reasons to virtues, consequences and rules, the expert panel were instructed to use Peterson and Seligman's 24 character strength terms to describe the virtues that they thought operated in the dilemmas and, importantly, the panel saw some of this list of 24 at work in the possible reasons provided for action far more than others. The expert panel saw the following virtues at work in the reasons that we had provided for action disproportionately often: 'judgement', 'kindness', 'fairness', 'prudence', 'leadership' and 'perspective'. On the one hand, this may indicate that, in their thinking about the moral dilemmas, the expert panel of 15 medical educators thought that a relatively small number of virtues capture the kinds of considerations that a doctor must weigh up in solving some common dilemmas in the profession. On the other hand, conclusions about which virtues were particularly important in thinking about the dilemmas were strongly mediated by the mapping process (the process of associating certain virtues with certain reasons for action on the survey) conducted by the expert panel.
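To make the aggregation step concrete, the sketch below (ours, not the study's analysis; cohort labels and values are invented) summarises, per cohort, how often the top-ranked reason was tagged as rule-based under such a mapping:

```python
import pandas as pd

# Hypothetical per-respondent records: cohort and the tag of the reason each
# respondent ranked first for one dilemma (tags follow the expert panel's
# virtue / rules / consequences mapping; all values here are invented).
records = pd.DataFrame({
    "cohort": ["undergraduate", "undergraduate", "graduate", "experienced", "experienced"],
    "top_reason_tag": ["rules", "virtue:kindness", "rules", "virtue:judgement", "rules"],
})

# Share of each cohort whose first-ranked reason expressed a rule-based concern.
rule_share = (records["top_reason_tag"] == "rules").groupby(records["cohort"]).mean()
print(rule_share)
```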
That said, eight virtues in particular feature time and again in how respondents dealt with the dilemmas (see the right-hand column of Table 3). This shows considerable overlap with the virtues that our respondents said the good doctor should have, as well as with the virtues that they reported they possess themselves. To the question 'Which virtues are important in our participants' thinking about the moral dilemmas?' the cautious answer is therefore: that it is the same virtues that respondents identified as important in the good doctor and in themselves, with the addition of the virtues of prudence, bravery, perspective and social intelligence.
How Do the Virtues Interact with Each Other (and with Rules and Consequences) in Determining Decisions?
It is interesting that in many of the dilemmas some very similar considerations were cited by respondents as reasons for acting in two opposite ways. Take the case of dilemma 3 again. In this case, what is the kind thing to do and what the rules demand were cited both by respondents who chose to disclose the wife's HIV status and by those who said that they would not disclose this information as the reason for why they acted as they did. In effect, some participants held that the kind thing to do would be to inform the husband while others held that the kind thing to do would be not to inform the husband. Moreover, the same was true of what participants thought the accepted rules or protocols of medical ethics demand: both sides of the argument thought that their view was the accepted one. This indicates that fundamental differences of opinion are possible over what is the virtuous thing to do in a certain situation, and further study in this area can fruitfully focus on the matter of which respondents, for instance, tended to hold that the kind thing to do was to disclose to the husband and which thought the kind thing to do was not to disclose this information to the husband. As it is, we found no significant differences of this kind between how, for instance, the different cohorts in the sample approached this issue, or between how men and women, say, approached the issue.
A second important point that emerges is that in all of the six dilemmas, what participants think the rules demand featured strongly in participants' thinking. This indicates that it is too simplistic to hold that doctors are more motivated by the demands of virtue than they are by what they perceive to be the explicit rules that pertain to some matter. Clearly, what the rules say does matter, because respondents cite them as important in why they would act one way rather than another.
Thirdly, when comparing the lists of virtues that respondents used to describe themselves and to describe the good doctor with the virtues that seemed to operate in the dilemmas, it is striking that the virtues of prudence, bravery, perspective and social intelligence play the important role that they do in the dilemmas. Indeed, looking closely at these four virtues, the virtue of 'perspective' seems to align quite well with a virtue that we have already seen is important-the virtue of 'judgement'. Indeed, having 'perspective' and having 'good judgement' both involve being able to see a situation in the round and to be able to select the most important things to focus on in that situation. To a lesser extent, the same can be said of what it means to be 'prudent'-in that 'judgement' (or 'wisdom') is a possible synonym for 'prudence'. What emerges from the dilemmas in this regard is how important it is for the doctor to be able to understand the whole situation, to focus on the most important matters and to be able to come to a sensible conclusion about it.
In coming to understand how our respondents thought about the moral dilemmas in the survey, we asked participants in the follow-up interviews about the moral dilemmas that they had encountered in medical school or in practice and how they went about solving them. Moreover, we asked participants their views of the quality of the guidance they thought was provided by ethical guidance for the profession (with a particular focus on the General Medical Council's guidelines as contained in Good Medical Practice).
From the interviews, it seemed that there is an interesting developmental journey at play in doctors' attitudes towards ethical guidelines for the profession across their career. Many of the first year students interviewed seemed to have a quite simplistic attitude towards these guidelines. Speaking about Good Medical Practice, one first year student held: 'I'm guessing they make it clear what is expected of a doctor, providing the parameters around what is acceptable about both professional conduct and also, perhaps, conduct outside of work.' Another said: 'They're obviously like really important because in a way they're sort of like the law of medicine -so if you kind of break those then it's not going to be very good for you.' When speaking to the graduating students, however, a marked scepticism had set into their attitudes about Good Medical Practice. One said: 'We had to do this silly thing at graduation where we have to read out the lists of the duties of a doctor from Tomorrow's Doctors, which is complete guff 'cos just having recited it doesn't mean you are going to do it.' Graduating students were clearly already well aware of the limitations of written guidance. One held: 'I think there are quite a lot of situations where there can't really be a set guidance and there's sort of two ways to go that are appropriate and so it's -I suppose being comfortable with saying, ''This is what I think's better''.' Many experienced doctors also spoke at quite some length about this in the interviews. Two participants summed up the prevailing attitude very well: 'I think the GMC's guidance is there as a kind of bottom line I think rather than -I mean, if you just went to medical school and spent 5 years just reading GMC guidelines it's not going to teach you how to be a good doctor. I think it's there to tell you what the boundaries are rather than as being a guide as to how to practice. And as I say, the guidance on how to practice doesn't exist, because it's age old, it's watching previous generations of doctors, working alongside them and learning it by experience really.' 'Does it influence my practice on a day-to-day level? I don't think I consciously think, on a day-to-day level, ''Oh, I'd better not do that because it's not in the GMC guidance.'' I think that I think… It's a step before that, isn't it? It's about what I think is a reasonable person or a reasonable doctor going to do in this situation.' The importance of the virtue of judgement is the most important finding from the dilemma part of the study. Carey et al. (2015) investigated US medical students' opinions on the character education they receive and find that medical students are generally receptive to character education 5 . However, what should such character education look like? Our study was the first to explore the influence of virtue in medical practice in the round and focussed on understanding: which virtues are important in medical practice, how these virtues influence medical students' and doctors' thinking about moral dilemmas and what contextual factors may influence doctors' ability to live out these virtues.
Discussion
What Do the Self-Report Sections Tell Us?
We found that medical students and doctors at three career stages were in substantial agreement on the virtues that good doctors should exhibit. These were: fairness, honesty, judgement, kindness, leadership and teamwork. Medical students and doctors also reported four of these character strengths as personal strengths (fairness, honesty, kindness and teamwork); however, they were less likely to rank their own qualities of judgement and leadership highly. We found only small differences between how different cohorts in the study described their own character and between how men and women described their own character. Whether this homogeneity in how respondents describe their own character is due to the fact that entrants to medicine are selected from a common (some would say elite) social background or whether medical education and training itself has a homogenising effect is an interesting matter to consider. Moreover, it deserves to be considered whether the commonality in how medical students and doctors describe their own character is an indication of a positive trend (towards consensus and standardisation in the medical workforce) or a negative trend (towards group-think and conformism).
What Do the Dilemmas Tell Us?
As we have seen, the moral dilemmas were designed to understand how considerations of virtue interact with thinking about rules and consequences in explaining how medical students and doctors think about morally problematic situations. From the findings above, it should be clear that it is possible to adopt the well-known moral dilemma approach to investigate not only moral reasoning, but to explore additional considerations concerning virtue. The most fruitful way to think of moral dilemmas from a virtue perspective is to conceive of such dilemmas as situations in which different virtues - and, sometimes, virtues and rules - come into conflict. As we saw, analysis of the moral dilemmas on the e-survey highlighted a number of conflicts regarding exactly what virtue demands in a particular situation (for instance between what is kind, what is fair and what the rules demand in dilemma 3, or what perspective and judgement demand versus what prudence and the rules demand in dilemma 4). Balancing these demands requires good judgement and both the findings from the dilemma section of the survey and the qualitative study reinforce the view that the virtue of judgement is important in medicine. This confirms the great deal of theoretical work that has already gone into studying virtues like 'phronesis', 'judgement' and 'prudence' in medicine (see, e.g., Pellegrino and Thomasma 1993; Marcum 2012; Kaldjian 2014; Toon 2014).
Implications for Medical Education
Today, the teaching of medical ethics at UK medical schools is informed by the Institute of Medical Ethics's model core curriculum (Stirrat et al. 2010). According to the IME, this curriculum has two purposes: • Creating 'virtuous doctors' • Providing them with a skill set for analyzing and resolving ethical problems (IME, undated).
However, despite the curriculum making the creation of virtuous doctors an avowed purpose, most of the content of the core curriculum deals with an understanding that students should demonstrate of matters such as: • Professionalism • Patients' rights • Consent. The development of particular virtues in students is not given any attention at all. Our results can inform the development of medical ethics curricula and assessments in two ways: • Results from our study regarding views of the 'ideal' doctor illustrate which virtues medical students and practising doctors find important. The virtues of fairness, honesty, judgement, kindness, teamwork and leadership can form the focus of character education interventions in medical education. By the same token, these virtues deserve to be studied more closely in a medical context. • With our situational judgement tests, we attempted for the first time to design moral dilemma tests from a virtue perspective. These designs can be further developed to chart character development in medical students (rather than simply development in moral reasoning ability).
Conclusion
The importance of virtue ethics to medical ethics has been well-studied for a period stretching back some decades. However, few have attempted to study the development of doctors' moral character empirically. In this first-of-a-kind exploratory study, we attempted to answer the question of which virtues are particularly important in the good doctor and how a sample of doctors and medical students see their own character. We have also attempted to show how considerations of virtue influence the way that medical students and doctors think about common moral dilemmas in medicine. It is hoped that the study will spur further work on moral character development in the medical curriculum. | 8,404.2 | 2016-08-24T00:00:00.000 | [
"Medicine",
"Philosophy"
] |
Anisotropic tubular neighborhoods of sets
Let $E \subset \mathbb{R}^N$ be a compact set and let $C \subset \mathbb{R}^N$ be a convex body with $0 \in \mathrm{int}\, C$. We prove that the topological boundary of the anisotropic enlargement $E + rC$ is contained in a finite union of Lipschitz surfaces. We also investigate the regularity of the volume function $V_E(r) := |E + rC|$, proving a formula for the right and the left derivatives at any $r > 0$, which implies that $V_E$ is of class $C^1$ up to a countable set which is completely characterized. Moreover, some properties of the second derivative of $V_E$ are proved.
Introduction
The study of the tubular neighborhood $E_r := \{x \in \mathbb{R}^N : \mathrm{dist}(x, E) \le r\}$ of a convex set E in $\mathbb{R}^N$ plays a crucial role in convex geometry. Of course, it is not without interest to investigate the tubular neighborhood also for non-convex sets, and it turns out that the boundary of $E_r$ becomes more regular than the boundary of E, which could be very irregular: more precisely, in 1985 Fu [11] proved that $\partial E_r$ is a Lipschitz manifold whenever E is compact in $\mathbb{R}^N$ and $r > r_0$ for some $r_0 > 0$. The approach of Fu is essentially based on the fact that the sublevels of regular values of a proper and semiconcave function are sets of positive reach: this argument can be applied since the distance function is semiconcave far from E. The semiconcavity of the distance is strongly related to the smoothness of the ball in $\mathbb{R}^N$: notice indeed that $E_r$ can also be written as $E_r = E + rB$, where B is the closed unit ball centered at the origin. In this paper, first of all, we investigate the extension of such results to the anisotropic case, that is, the case $E_r = E + rC$ where C is a prescribed convex body, i.e. a compact convex set in $\mathbb{R}^N$ with $0 \in \mathrm{int}\, C$. In this case the appropriate anisotropic distance to E, which we denote by $d_E$, need not be semiconcave outside E, since we are not assuming any kind of regularity of the boundary of C beyond the local Lipschitz regularity coming from convexity: notice that we are really interested in enlarging E by a convex body C, since this case also recovers the crystalline anisotropy, where C is convex but not necessarily strictly convex nor smooth. We will prove (see Thm. 3.1) that for any $r > 0$ the boundary of $E_r$ is contained in a finite union of Lipschitz surfaces when E is bounded and C is Lipschitz with $0 \in \mathrm{int}\, C$. Of course, since C is not sufficiently smooth we cannot use Fu's approach, but the key idea of our proof is very simple: we first prove that enlarging C by a very small set, like $\varepsilon K$ with $\varepsilon > 0$ small and $K \subset B$, we still obtain a Lipschitz domain, and then we use the same idea as Rataj and Winter [17], covering $\partial E_r$ by a finite union of sets with small diameter. The rectifiability of $\partial E_r$ is an interesting result in itself, but we actually need the regularity of $\partial E_r$ in order to study the regularity of the volume function $V_E(r) := |E_r|$ (see [21,22] for the isotropic case). We therefore characterize the set, at most countable, where $V_E$ is not differentiable (see Thm. 5.2) and we find explicit formulae for the left and right derivatives of $V_E$. Moreover, $V_E$ is of class $C^1$ whenever it is differentiable (see Thm. 5.3). We mention that such a result finds application also in different fields, for example in Stochastic Geometry, where $V_E$ is strictly related to the notions of covariogram and of contact distribution function associated to a random closed set (see, e.g., [23, Sec. 4] for the isotropic case, and the recent paper [15] where dilation by finite sets is considered). Finally, an easy characterization of $V_E$ is proved (see Thm. 5.4). Our result is a generalization of the isotropic case [14] and our proof is partially based on the so-called anisotropic outer Minkowski content (see [6] and [16] for details). We also need to base our argument on the existence of a so-called Cahn-Hoffmann vector field for C with divergence measure bounded from above (see Prop. 4.1), and this is, in our opinion, an interesting result independent of the rest, since we are not assuming the strict convexity of C, so that such an existence result holds true also in the crystalline case.
Notation
For any subset A of $\mathbb{R}^N$ we will denote by $|A|$ the Lebesgue measure of A, while $\mathcal{H}^k(A)$ stands for the k-dimensional Hausdorff measure of A, where $k \in \{0, \dots, N\}$; of course $\mathcal{H}^N$ is the Lebesgue measure. For any $x \in \mathbb{R}^N$ the Euclidean norm of x will be denoted by $|x|$, while $x \cdot y$ stands for the Euclidean scalar product in $\mathbb{R}^N$ between x and y. For any $r > 0$ and $x \in \mathbb{R}^N$ the closed ball centered at x with radius r will be denoted by $B_r(x)$; we let $B_r := B_r(0)$ and $S^{N-1} := \partial B_1$. We finally denote by $\omega_k$ the volume of the k-dimensional unit ball in $\mathbb{R}^k$.
Convex analysis
Here we recall some basic notions of convex analysis; for all details we refer to [18]. In this paragraph C will be a convex body in $\mathbb{R}^N$, that is a compact convex subset of $\mathbb{R}^N$ with $0 \in \mathrm{int}\, C$. We denote by $h_C : \mathbb{R}^N \to \mathbb{R}$ the support function of C, that is $h_C(v) := \max_{x \in C} x \cdot v$. We will also use the polar of $h_C$, denoted by $h^{\circ}_C$ and defined by $h^{\circ}_C(v) := \sup\{x \cdot v : h_C(x) \le 1\}$ for each $v \in \mathbb{R}^N$; it turns out that both $h_C$ and $h^{\circ}_C$ are convex and positively 1-homogeneous. We will also need to consider convex sets for which the support function and its polar are more regular. Let C be of class $C^2$. We say that C is elliptic if the curvature of $\partial C$ is bounded from below by some positive constant. It turns out that if C is $C^2$ and elliptic then both $h_C$ and $h^{\circ}_C$ are in $C^2(\mathbb{R}^N \setminus \{0\})$. A very useful notion related to convexity is semiconcavity. Let A be a subset of $\mathbb{R}^N$ and let $f : A \to \mathbb{R}$. We say that f is concave if the inequality $f(\lambda x + (1-\lambda)y) \ge \lambda f(x) + (1-\lambda) f(y)$ holds for any $x, y \in A$ whose joining segment lies in A and for any $\lambda \in [0,1]$; f is said to be semiconcave if there exists $\alpha > 0$ such that for any such $x, y$ and for any $\lambda \in [0,1]$
$$f(\lambda x + (1-\lambda)y) \ge \lambda f(x) + (1-\lambda) f(y) - \frac{\alpha}{2}\, \lambda(1-\lambda)\, |x-y|^2. \qquad (2.1)$$
Notice that if f is semiconcave and smooth enough, for instance of class $C^2$, then $D^2 f \le \alpha I$, where I is the identity matrix and the inequality holds in the sense of matrices. A useful class of semiconcave functions can be constructed; we have the following well-known proposition, see for instance [9].
satisfies (2.1) uniformly with respect to s.
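Returning to the support function and its polar introduced above, a concrete example (ours, not taken from the paper) is the crystalline anisotropy given by the unit cube, which is Lipschitz but neither smooth nor strictly convex:

```latex
% Example (ours): the unit cube C = [-1,1]^N.
\[
  h_C(v) = \max_{x\in C} x\cdot v = \sum_{i=1}^{N}|v_i| = \|v\|_1,
  \qquad
  h_C^{\circ}(v) = \sup\{x\cdot v : h_C(x)\le 1\} = \max_{1\le i\le N}|v_i| = \|v\|_\infty .
\]
% Both are convex and positively 1-homogeneous, and h_C^\circ is the gauge of C:
% \{h_C^\circ \le r\} = rC for every r > 0.
```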
Geometric measure theory
In this paragraph we recall some notions of Geometric Measure Theory we will need; for all details we refer the reader to [2], [10] and [20]. Let $N \ge 1$ be an integer and let $k \in \mathbb{N}$ with $k \le N$. Let $S \subset \mathbb{R}^N$. We say that S is k-rectifiable if there exist a bounded set $B \subset \mathbb{R}^k$ and a Lipschitz function $f : B \to \mathbb{R}^N$ such that $S = f(B)$; equivalently, by Kirszbraun's extension Theorem, we can say that S is k-rectifiable if S is contained in a finite union of Lipschitz surfaces in $\mathbb{R}^N$. We say that $S \subset \mathbb{R}^N$ is countably $\mathcal{H}^k$-rectifiable if there exist countably many Lipschitz functions $f_h : \mathbb{R}^k \to \mathbb{R}^N$ such that $\mathcal{H}^k\big(S \setminus \bigcup_h f_h(\mathbb{R}^k)\big) = 0$. It turns out that if S is countably $\mathcal{H}^k$-rectifiable then for $\mathcal{H}^k$-almost any point $x_0 \in S$ the approximate tangent space $\mathrm{Tan}^k(S, x_0)$ is well defined. In particular, if $k = N-1$ then $\mathrm{Tan}^{N-1}(S, x_0)^{\perp}$ is generated by some unit vector denoted by $\nu_S$. Let now $E \subset \mathbb{R}^N$ be a measurable set and let $\Omega \subset \mathbb{R}^N$ be an open domain; we denote by $\chi_E$ the characteristic function of E. We say that E has finite perimeter in $\Omega$ if $\chi_E \in BV(\Omega)$; the perimeter of E in $\Omega$ is defined by $P(E; \Omega) := |D\chi_E|(\Omega)$, where $|D\chi_E|$ denotes the total variation of $D\chi_E$; we also let $P(E) := P(E; \mathbb{R}^N)$. For sufficiently smooth boundaries the perimeter coincides with the $(N-1)$-dimensional Hausdorff measure of the topological boundary. The upper and lower N-dimensional densities of E at x are respectively defined by
$$\Theta^*(E,x) := \limsup_{r \to 0^+} \frac{|E \cap B_r(x)|}{\omega_N r^N}, \qquad \Theta_*(E,x) := \liminf_{r \to 0^+} \frac{|E \cap B_r(x)|}{\omega_N r^N},$$
and, for $t \in [0,1]$, $E^t$ denotes the set of points x where $\Theta^*(E,x) = \Theta_*(E,x) = t$. It turns out that if E has finite perimeter in $\Omega$, then $\mathcal{H}^{N-1}(\partial^* E \setminus E^{1/2}) = 0$ and $P(E; \Omega) = \mathcal{H}^{N-1}(\partial^* E \cap \Omega)$. Moreover, one can define a subset of $E^{1/2}$ as the set of points x where there exists a unit vector $\nu_E(x)$ such that the rescaled sets $(E - x)/r$ converge locally in measure, as $r \to 0^+$, to the half-space $\{y : y \cdot \nu_E(x) \le 0\}$; the vector $\nu_E(x)$ is referred to as the outer normal to E at x. The set where $\nu_E(x)$ exists is called the reduced boundary and is denoted by $\partial^* E$. Let us collect some elementary properties of sets with countably $\mathcal{H}^{N-1}$-rectifiable boundary and with finite perimeter in $\Omega$; for any $E \subseteq \mathbb{R}^N$ we let $E^c := \mathbb{R}^N \setminus E$. If E has finite perimeter in $\Omega$ and $\partial E$ is countably $\mathcal{H}^{N-1}$-rectifiable, then several elementary relations between $\partial E$, $\partial^* E$, $\nu_E$ and the density sets $E^t$ hold true. We finally recall an anisotropic version of the coarea formula which we will need; for details see [13, Thm. 3]. Let $u \in BV(\Omega)$ and let $\alpha : \mathbb{R}^N \to [0, +\infty)$ be a convex and positively one-homogeneous function with $c^{-1}|v| \le \alpha(v) \le c|v|$ for any $v \in \mathbb{R}^N$ and for some constant $c > 0$. Then the following formula holds true:
$$\int_{\Omega} \alpha\Big(\frac{dDu}{d|Du|}\Big)\, d|Du| = \int_{-\infty}^{+\infty} \Big( \int_{\Omega \cap \partial^*\{u > t\}} \alpha(\nu_{\{u>t\}})\, d\mathcal{H}^{N-1} \Big)\, dt.$$
Anisotropic outer Minkowski content
We now briefly recall the notion of outer Minkowski content; for details see [1] and [22] (isotropic case), [6] and [16] (anisotropic case). Let $C \subset \mathbb{R}^N$ be a convex body. For each closed set $E \subset \mathbb{R}^N$ we define the anisotropic outer Minkowski content of E as
$$\mathcal{SM}_C(E) := \lim_{r \to 0^+} \frac{|E + rC| - |E|}{r},$$
whenever such a limit exists.
The following existence and characterization result for SM C holds true.
Theorem 2.2 ([16, Thm. 4.4]) Let $E \subset \mathbb{R}^N$ be a closed set such that: (a) $\partial E$ is countably $\mathcal{H}^{N-1}$-rectifiable; (b) there exist $\gamma > 0$ and a probability measure $\eta$ in $\mathbb{R}^N$, absolutely continuous with respect to $\mathcal{H}^{N-1}$, such that $\eta(B_r(x)) \ge \gamma r^{N-1}$ for all $x \in \partial E$ and for all $r \in (0, 1)$. Then the anisotropic outer Minkowski content $\mathcal{SM}_C(E)$ exists.
Notice that any compact set in R N whose boundary is contained in a finite union of Lipschitz surfaces satisfies property (b): see for instance [1,Rem. 1]. We also observe that even if ν E is not well defined on ∂ E ∩ E 0 , the expression φ C (ν E ) turns out to be well defined.
Anisotropic tubular neighborhoods
In this paragraph we will introduce all the objects we want to investigate. Let $N \ge 1$ be an integer, let $E \subset \mathbb{R}^N$ be compact and let $C \subset \mathbb{R}^N$ be a compact Lipschitz set with $0 \in \mathrm{int}\, C$. For any $r > 0$ denote $E_r := E + rC$ and $\hat{E}_r := E + r\, \mathrm{int}\, C$. It is convenient to introduce the anisotropic distance from E, that is $d_E(x) := \inf\{t > 0 : x \in E + tC\}$. Notice that $E_r = \{d_E \le r\}$ and $\hat{E}_r = \{d_E < r\}$. It turns out (for details see [6]) that $d_E$ is Lipschitz continuous and that, if C is a convex body, $d_E(x) = \min_{y \in E} h^{\circ}_C(x - y)$. Finally, for any $r > 0$ we let $V_E(r) := |E_r|$; for $C = B_1$ this is also named the volume function of E (see also [21, 22]). It is easy to see that $V_E$ is continuous.
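A trivial example (ours, not from the paper) may help fix ideas about $V_E$:

```latex
% Example (ours): E = {0}. Then E_r = rC for every r > 0, so
\[
  V_E(r) = |rC| = r^{N}|C|,
  \qquad
  V_E'(r) = N r^{N-1}|C| = \int_{\partial(rC)} h_C(\nu)\, \mathrm{d}\mathcal{H}^{N-1},
\]
% where the last equality follows from h_C(\nu(x)) = x\cdot\nu(x) on \partial C,
% the divergence theorem and scaling; in particular V_E is C^1 on (0, +infty).
```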
Regularity of the boundaries
In this section we prove that ∂E_r and ∂E^r are sufficiently smooth, in the sense of geometric measure theory. Proof We divide the proof into two steps.
Step 1: Let K ⊂ R N be a bounded set. We claim that for ε positive and sufficiently small the set C + εK is a Lipschitz set.
Without loss of generality we can assume K ⊂ B 1 . For any ξ ∈ R N , ξ = 0, we let where π ξ denotes the orthogonal projection on ξ ⊥ . Since C is Lipschitz and compact we can write its boundary locally as a graph of a Lipschitz function in a uniform way: precisely, we can find r > 0 such that B r ⊂ C and such that for any z ∈ ∂C there exists a Lipschitz function f z : Let ε < r /2 and fix x 0 ∈ ∂(C + εK ). There exists k 0 ∈ K such that x 0 ∈ ∂C + εk 0 , thus For any x ∈ B r /2 ∩ z ⊥ 0 and any k ∈ K let: For ξ ∈ ∂(C + εK ) ∩ S z 0 r /2 , writing ξ = η + εk, η ∈ ∂C, k ∈ K , we observe that for y = π z 0 (η) one has η = y + f z 0 (y)ẑ 0 with |y| ≤ r . Thus, one finds that ξ decomposes as and it follows that: , and then η n = ξ − εk n ∈ ∂C). On the other hand if one lets now ξ n : We notice eventually that g is Lipschitz continuous with the same Lipschitz constant L of f z 0 , which achieves the proof that ∂(C + εK ) is locally a Lipschitz graph: indeed for any Step 2: Now it is relatively easy to conclude the proof for ∂ E r ; the rectifiablity of ∂ E r follows since ∂ E r ⊆ ∂ E r . The idea is to use the same argument as in the proof of [17,Prop. 2.3]. If r > 0 by step 1 we can say that for any x ∈ R N the set rC + (B r (x) ∩ E) has Lipschitz boundary for r < r sufficiently small (apply step 1 to rC instead of C). We cover now E, which has compact closure, with balls B r ( that is ∂ E r is contained in a finite union of Lipschitz surfaces, and this yields the conclusion.
Construction of a Cahn-Hoffmann vector field for C
First of all, we recall some basic results of the theory of viscosity solutions; for details we refer to [8]. Let Sym_N(R) be the set of all symmetric N × N matrices with real entries, let Ω be a subset of R^N and let F : Ω × R × R^N × Sym_N(R) → R be a continuous function such that the following monotonicity condition holds: F(x, r, p, X) ≤ F(x, s, p, Y) whenever r ≤ s and Y ≤ X in the sense of matrices. Let u : Ω → R be upper semicontinuous. We say that u is a viscosity subsolution of the equation F(x, u, Du, D²u) = 0 on Ω if for any φ ∈ C²(Ω) and for any local maximum point x̄ ∈ Ω of u − φ it holds F(x̄, u(x̄), Dφ(x̄), D²φ(x̄)) ≤ 0. Let now u : Ω → R be lower semicontinuous. We say that u is a viscosity supersolution of the equation F(x, u, Du, D²u) = 0 on Ω if for any φ ∈ C²(Ω) and for any local minimum point x̄ ∈ Ω of u − φ it holds F(x̄, u(x̄), Dφ(x̄), D²φ(x̄)) ≥ 0. If u is both a viscosity subsolution and a viscosity supersolution, then u is called a viscosity solution of F(x, u, Du, D²u) = 0 on Ω.
We are ready to start the construction of a Cahn-Hoffmann vector field for C in the smooth case.
Proposition 4.1 Assume that C is a convex body of class C² and elliptic. Let n := ∇h_C(∇d_E); then n is well defined almost everywhere and satisfies (4.1), and div n is a Radon measure on R^N \ E satisfying (4.2) in the distributional sense out of E_r.
Proof First of all we point out that the assumptions on C guarantee that both h_C and h°_C are in C²(R^N \{0}). From the standard fact that h_C²/2 and (h°_C)²/2 are Legendre-Fenchel convex conjugates, so that their gradients h_C ∇h_C and h°_C ∇h°_C are inverse mappings, we deduce the identities recalled below, valid for any z ∈ R^N \{0}. For the sake of simplicity we will denote d := d_E.
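The identities in question are presumably the standard ones relating a smooth elliptic convex body and its polar; in their usual form:

```latex
% Standard consequences of the conjugacy of h_C^2/2 and (h_C^\circ)^2/2;
% offered as the natural candidates for the identities referred to above.
\[
h_C\bigl(\nabla h_C^{\circ}(z)\bigr) = 1,
\qquad
h_C^{\circ}\bigl(\nabla h_C(z)\bigr) = 1,
\qquad z \in \mathbb{R}^N\setminus\{0\},
\]
\[
\nabla h_C\bigl(\nabla h_C^{\circ}(z)\bigr) = \frac{z}{h_C^{\circ}(z)},
\qquad
\nabla h_C^{\circ}\bigl(\nabla h_C(z)\bigr) = \frac{z}{h_C(z)} .
\]
```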
Step 1. The proof of (4.1) is easy: indeed, if we fix from which we immediately get (4. Step 2. We prove (4.2). First of all, it turns out that d is a viscosity supersolution of in R N \E r . This is a variant of a classical result, see [3]. The proof is quite straightforward. Indeed, if φ is a smooth function which touches the graph of d from below at a point ) then by definition of d, φ also touches the graph of x → h • C (x −ȳ) from below atx, whereȳ ∈ E is a point of minimal distance tox. Being both functions smooth atx, it follows that ∇φ( . Combining (4.3) with the Euler's identity, for any z ∈ R N \{0} we obtain, also by direct computation, .
We find that not only is d a viscosity supersolution of (4.4) out of E_r, but a more precise inequality holds. Since h°_C ∈ C²(R^N \{0}), by Proposition 2.1 we can say that d is (locally) semiconcave out of E_r, and in particular D²d ≤ c in both the viscosity and the distributional sense. It is not obvious, however, to deduce from these facts that (4.2) holds out of E_r in the sense of distributions, as the left-hand side is the product of an L∞, yet discontinuous, function and a Radon measure.
We now pick R > r and λ > 0, and we introduce u^λ, a solution of the minimization problem (4.5). Notice that the direct method of the Calculus of Variations easily applies to the functional in (4.5): we have lower semicontinuity with respect to the strong L¹ convergence essentially by Reshetnyak's lower semicontinuity theorem, and we have strong L¹-compactness of sequences bounded in energy since h_C(v) ≥ c|v| for some c > 0. Moreover, observe that by truncation arguments we clearly have r ≤ u^λ ≤ R in E_R \ E_r. Standard density estimates for the level sets of u^λ also show that u^λ is a.e. equal to a lower and to an upper semicontinuous function.
We assume that u^λ is upper semicontinuous, and is a.e. equal to its lower semicontinuous envelope. We then check that u^λ is a strict viscosity subsolution of (4.4) in {u^λ > d}, in the following sense: if φ ≥ u^λ is smooth with φ(x̄) = u^λ(x̄), then, provided ∇φ(x̄) ≠ 0, one has div ∇h_C(∇φ)(x̄) ≥ H_λ, where we denote H_λ := (N − 1)/r + λ. The proof is easy and quite standard. Possibly replacing φ with φ + η|· − x̄|², η small, we may assume that x̄ is the only contact point. Then, one checks that {φ − δ < u^λ} has nonempty interior and shrinks to {x̄} in the Hausdorff distance as δ → 0.
For δ > 0 small we have
Moreover, since the functional under consideration, localized on any open set, satisfies the generalized coarea formula (2.3) and is convex, we get submodularity (see [5, Prop. 3.2]): the associated set functional J satisfies J(E ∪ F) + J(E ∩ F) ≤ J(E) + J(F).
Therefore, we obtain that (letting A a small open set containing {φ − δ < u λ }, for δ small) If ∇φ(x) = 0 then one may assume that ∇φ = 0 in A, so that it follows We deduce that div ∇h C (∇φ)(x) ≥ H λ , as claimed, otherwise one reaches a contradiction for small δ. Now, we can deduce that u λ ≤ d (so that in particular u λ = d), using a standard comparison result for viscosity sub and supersolution (with one possibly discontinuous). We sketch the argument, see [4] and [8] for details. Let m := max{u λ − d} and assume by contradiction that m > 0. For δ > 0 small, we consider is a sup-convolution. In particular, if x ∈ {u λ δ > d + m/2}, a pointȳ which reaches the maximum in (4.6) is such that u λ (ȳ) > d(ȳ) as soon as δ < m/L 2 (L denoting the Lipschitz constant of d), and in this case u λ δ is still a strict subsolution of (4.4) in {u λ δ > d + m/2}: as a test function in the definition of strict subsolution of (4.4) applied to u λ . Now, since u λ δ is (near x δ ) semiconvex while d is semiconcave, we can invoke Jensen's Lemma (see [8] for details), and find that there are points x n → x δ which are local maximum points of with p n → 0, α n → 0, u λ δ (x n ) > d(x n ) + m/2; notice that we have to add the term α n |x−x δ | 2 2 since, in order to apply Jensen's Lemma, we need x δ be a strict local maximum of the function we perturb with the linear term p n · x. By Aleksandrov's Theorem (see again [8] for details) we can also assume that u λ δ and d are both twice differentiable at x n . In particular, for n large where o(1) → 0 as n → ∞. Since λ > 0 this yields a contradiction. Hence u λ = d for any λ > 0, and it follows that d is the only minimizer of (4.5) for any λ > 0, and in the limit is also a minimizer for λ = 0. Finally, we have shown that the functional in (4.5) is minimized by d, including for λ = 0. But then, the Euler-Lagrange equation for the problem is easily derived: using perturbations d +δφ with δ > 0 small, φ smooth, nonnegative, with compact support in E R \E r , we readily find that is precisely (4.2) in the distributional sense.
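The sup-convolution invoked in the comparison argument above is, in its standard form:

```latex
% Classical sup-convolution; u^{\lambda} and \delta are those appearing in the
% comparison argument above, the formula itself is the standard definition.
\[
u^{\lambda}_{\delta}(x) \;:=\; \sup_{y}\Bigl\{\, u^{\lambda}(y) \;-\; \frac{|x-y|^{2}}{2\delta} \,\Bigr\},
\]
% so that u^{\lambda}_{\delta} is semiconvex, u^{\lambda}_{\delta} \ge u^{\lambda},
% and u^{\lambda}_{\delta} decreases to u^{\lambda} as \delta \to 0^{+}.
```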
We are ready to prove essentially the same result stated in Proposition 4.1 for a general convex body C.
Theorem 4.2 Let C be a convex body. There exists a vector field n satisfying (4.7) such that div n is a Radon measure on R^N \ E with div n ≤ (N − 1)/r (4.8) in the distributional sense out of E_r.
Proof We use again the notation d = d E . We prove (4.7) and (4.8) approximating C by smooth, elliptic, uniformly bounded and convex sets C σ , with C σ ⊇ C, and using Proposition 4.1. Let E σ r := E + rC σ and denote by d σ the anisotropic distance from C σ . Then n σ := ∇h C σ (∇d σ ) ∈ C σ is well defined a.e.,and (4.2) reads div n σ ≤ N − 1 r (4.9) out of E σ r . As σ → 0 + we can assume, up to a subsequence, since ||n σ || ∞ remains bounded by (4.1), that n σ * n in L ∞ (R N ; R N ) and we have for any nonnegative C 1 function φ with compact support in R N \E r , for σ small enough (using the Hausdorff convergence of E σ r to E r ), as σ → 0 + , showing that in R N \E r , div n is a measure bounded from above by (N − 1)/r , so that we get (4.8). On the other hand, if η : R + → R is any smooth nonincreasing function with η(t) = 1 for t ≤ r , η(t) = 0 for t large, one has (since n σ = ∇h C σ (∇d σ ) ∈ ∂h C σ (−η (d σ )∇d σ ), using that ∇h C σ is zero-homogeneous and always contained in ∂h C σ (0)): Since h C σ ≥ h C , we easily see that, from η • d σ → η • d in any L p and using standard lower semicontinuity results for integral functionals, On the other hand (using (4.9)), which together with (4.10) yields Since n ∈ C a.e. we obtain (4.7) and this ends the proof. [12] provided a construction of an n satisfying (4.7) with minimal |div n| 2 . Also, an alternative way to build a Cahn-Hoffmann field satisfying (4.7) can be deduced from the construction in Chambolle, Morini and Ponsiglione [7], in addition this should also provide a field with minimal curvature.
Regularity of the volume function
In this section we investigate the regularity of the volume function V E . Our result extends [14,Eq. (2.20)], where an expression for V E has been given whenever C is strictly convex.
In what follows n is given as in Theorem 4.2. Let J := {r > 0 : H^{N−1}(∂E_r ∩ E^1_r) > 0}. Remark 5.1 We will prove (see (5.7)) an identity valid for any r > 0. In what follows we denote by V_E(r^+) and V_E(r^−) respectively the right and the left derivative of V_E.
Theorem 5.2 For any r > 0 we have
and
In particular, V_E is differentiable at r if and only if r ∉ J.
Proof Notice that from the fact that ∂C is locally Lipschitz and compact we easily deduce formula (5.1). It remains to compute the left derivative of V_E. We divide the rest of the proof into several steps.
Step 1. Let C * := −C, that is the symmetrical of C with respect to the origin; notice that We also introduce the corresponding anisotropic distance to E r c : where we have denoted E r c := (E r ) c . Let s ∈ (0, r ). Notice that By definition there exist ε > 0 and z ε ∈ E r c such that Then, for any y ∈ E we obtain, by the subadditivity of h Taking into account Lemma 3.1 we can say that |{d * = s}| = 0 and |E c r | = |E r c |, hence Passing to the limit as s → 0 + we deduce that Using Theorem 2.2 we get Notice now that if r / ∈ J then H N −1 (∂ E r ∩ E 1 r ) = 0. We obtain that for any r ∈ (0, +∞)\J Step 2. We prove now that for any r > 0 lim sup For any s ∈ (0, r ) we have, using the coarea formula and (5.15), Therefore, by (5.9) we obtain lim sup which is (5.6).
Proposition 5.3 For any r > 0 formulas (5.8) and (5.9) hold. In particular, V_E is C¹ in (0, +∞)\J.
Proof Let us prove (5.8). The easy part is the estimate from below: since Dχ_{E_s} converges weakly-* to Dχ_{E_r} as measures as s → r^+, applying Reshetnyak's lower semicontinuity theorem we obtain the bound from below. Now we divide the rest of the proof into several steps.
Step 1. We claim that for each continuous function ψ : (5.10) For simplicity of notation we set First of all, combining (2.5) with the coarea formula, for any positive integer k we obtain For a general continuous function ψ it is sufficient to apply the previous argument to ψ + and ψ − .
Step 2: Consider η : R → R a smooth nondecreasing function with η ≡ 1 on R − and η(t) = 0 for t ≥ 1. Then, letting, for k ≥ 0, ψ k (x) := η(k(d E (x) − r )) and ψ ε k (x) := η(k(d E (x) − r − ε)), one has, using (4.8), as k → +∞. On the other hand, using the definition of n and the coarea formula, Using (5.13) and definition of div n we easily get while passing to the limit in (5.12) as k → +∞ we deduce Passing to the limit in (5.15) as ε → 0 + we get lim sup so that the proof of (5.8) is complete. Finally, (5.9) follows from (5.14) and from (5.7) since div n is a measure, E r \E s E r \E r as s → r − .
We next investigate further regularity properties of V E .
In particular, (5.16) holds in the sense of distributions.
Proof By the coarea formula we obtain the claimed identity, from which the conclusion follows.
Corollary 5.5
For any t, r ∈ (0, +∞)\J with t < r we have In addition we have that for any r > ε > 0 As soon as r 0 is such that E r 0 ⊃ conv(E) ⊃ r ∈J (∂ E r ∩ E 1 r ) we have Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 7,285.6 | 2021-03-08T00:00:00.000 | [
"Mathematics"
] |
Effect of free oxygen radical anions and free electrons in a Ca12Al14O33 cement structure on its optical, electronic and antibacterial properties
The aim of this work was to investigate the effect of free oxygen radicals and free electrons in a Ca12Al14O33 (C12A7) cement structure on the optical, electronic and antibacterial activity of this material. Ca12Al14O33 was successfully fabricated via rapid heating to high temperatures by high frequency electromagnetic induction. Ca12Al14O33 cement samples were characterized using XRD and UV-Vis-DRS spectroscopy. The morphology and chemical composition of the samples were also investigated using SEM and EDS techniques. The presence of free oxygen radicals (O2−ions) in the insulating structure of Ca12Al14O33 was confirmed using Raman spectroscopy showing a spectrum peak at 1067 cm−1. The excitation of free electrons in the Ca12Al14O33 cement was indicated by UV-Vis absorption spectra at 2.8 eV and an optical energy gap of 3.5 eV, which is consistent with the first-principles calculations for the band energy level. The effects of free oxygen radicals and free electrons in the Ca12Al14O33 structure as antibacterial agents against Escherichia Coli (E. coli) and Staphylococcus Aureus (S. aureus) were investigated using an agar disk-diffusion method. The presence of O2− anions as a reactive oxygen species (ROS) at the surface of Ca12Al14O33 caused inhibition of E. coli and S. aureus cells. The free electrons in the conducting C12A7 reacted with O2 gas to produce ROS, specifically super oxides (O2−), superoxide radicals (O2•-), hydroxyl radicals (OH•) and hydrogen peroxide (H2O2), which exhibited antibacterial properties. Both mechanisms were active against bacteria without effects from nano-particle sized materials and photocatalytic activity. The experimental results showed that the production of ROS from free electrons was greater than that of the free O2− anions in the structure of Ca12Al14O33. The antibacterial actions for insulating and conducting Ca12Al14O33 were different for E. coli and S. aureus. Thus, Ca12Al14O33 cement has antibacterial properties that do not require the presence of nano-particle sizes materials or photocatalysis.
Introduction
Currently, the use of antimicrobials to promote public health is a research topic receiving much attention [1,2,3,4]. The goal of such research is to identify materials with antibacterial properties capable of inhibiting or killing various bacteria through mechanisms not limited to photocatalytic and nano-particle effects [4,5]. Antibacterial materials have attracted significant attention due to their very interesting application in preventing bacterial growth on smart building bathroom and kitchen walls [4]. There are four types of antibacterial activity: (1) cation elution, (2) pH effects, (3) electrostatic interactions between the surfaces of bacteria and nano-sized particles, and (4) the effects of active oxygen species [4,6,7]. In addition to these, photocatalytic activity, electrostatic interaction, cellular internalization of nanoparticles and production of reactive oxygen species (ROS) also provide antimicrobial activity [4,7,8,9]. ROS effects are predominantly used for antibacterial activity since this is an easy mechanism to employ [7,8]. ROS comprise hydroxyl radicals (OH•), hydrogen peroxide (H2O2) and superoxide ions (O₂⁻) [4,8]. These species cause cell death by contacting bacterial membranes and directly damaging their surfaces [4,7]. ROS can be generated from the reaction of free carriers (free electrons and holes) in the course of photocatalytic activity [9,10,11]. The photocatalytic effect generates free electrons and holes in materials using photon energy corresponding to the energy gap of the materials. There have been many reports that titanium dioxide (TiO2) [12] and zinc oxide (ZnO) [10] can generate ROS through photocatalysis. Practically, TiO2 and ZnO use photon energies of 3.1 eV [13] and 3.2 eV [7], respectively, to photocatalytically produce ROS. The ROS are produced from O2 and H2O adsorption promoting a reaction of free electrons and holes at the surfaces of the materials to produce OH•, H2O2 and O₂⁻. Alternatively, ZnO [7,8,9,14] nanoparticles cause death of bacterial cells through electrostatic interactions between bacterial surfaces and nanoparticles in the absence of light, thereby damaging cell membranes. TiO2 and ZnO thus act through photocatalysis and nanoparticle effects to produce antibacterial activity.
Recently, it has been reported that Ca 12 Al 14 O 33 cement exhibits free oxygen radical O À2 ions in a vacant cage structure [15,16,17,18]. The Ca 12 Al 14 O 33 cement structure is linked by calcium, aluminum and oxygen atoms forming empty nanometer-sized cages within the structure [19,20], as shown in Fig. 1. A unit cell of insulating Ca 12 Al 14 O 33 is comprised of two molecules occupying 12 crystallographic nano-cages while presenting a 4 þ charge at the cage wall, represented as [Ca 24 Al 28 O 66 ] 4þ [21]. The two cages in this unit cell support electrical neutrality by entrapping two free oxygen ions (O À2 ) in cages referred to as an extra-framework [21]. The Ca 12 [21,22]. Hayashi et al. [23] showed that the process of preparing Ca 12 Al 14 O 33 cement in a dry oxygen atmosphere at temperature 1350 C could produce both O À and O 2 -(as ROS species). Lu et al. [24] reported that the present of O À and O 2 resulted in antibacterial activity. Nevertheless, antimicrobial materials should be effective under various conditions and not limited to photocatalytic activity or nano-particle effects. Furthermore, Hayashi et al. [23] reported that free O À and O 2 could be replaced by free electrons in a conducting Ca 12 Al 14 O 33 cement, represented as Ca 12 Al 14 O 33 :e À . It has been reported that this material has distinct optical and electrical properties. However, to best of our knowledge, there are no reports of the effect and mechanisms of free electrons in the nano-cage structure of Ca 12 Al 14 O 33 cement on antibacterial activity. This work aims to investigate the effect of free electrons in the nanocage structure of Ca 12 Al 14 O 33 cement on the optical, electronic and antibacterial activities of these materials. The developed material presenting free electrons in a nano-cage structure was characterized. Conducting Ca 12 Al 14 O 33 cement with free electrons was rapidly prepared by heating insulator cement inside a carbon crucible at high temperatures using high frequency electromagnetic induction. The mechanism and effect of free electrons and free oxygen radicals in the nano-cage structure of these materials on its optical, electronic and antibacterial properties were also investigated. Their antibacterial activities against gramnegative E. coli and gram-positive S. aureus are reported. Moreover, the mechanism of the antibacterial action of free electrons and free oxygen radicals in the nano-cage structure of Ca 12 Al 14 O 33 cement is described.
Chemicals
Calcium carbonate (CaCO3, 99%, Sigma-Aldrich), alumina powder (Al2O3, 99.9%, Sigma-Aldrich) and ethanol (95%) were used as the starting raw materials. All chemicals were used as received with no further purification. The as-prepared CAO@1200C powder was used as a starting material for fabrication of conducting Ca12Al14O33 cement. A mass of 100 g of CAO@1200C powder was placed in a carbon crucible with a carbon cap. Then, the carbon crucible was transferred into the middle of a Cu induction coil. High frequency electromagnetic induction heating was done using an induction coil (Model: TH-60AB; 90 A, 3 phase, 380 V, 50-60 kHz). The temperature was determined using an IR detector (Model: SENTEST NS50PH1FF, accuracy class: 2.0) focused on the surface of the carbon crucible. The CAO@1200C powder was rapidly heated from room temperature to sintering temperatures of 1350 °C, 1450 °C and 1550 °C with a 40 s holding time (referenced as the CAO@1350C, CAO@1450C and CAO@1550C samples, respectively). Finally, the samples were cooled by natural convection to room temperature.
Preparation of the cement pellets
For pellet fabrication, the obtained CAO@1350C, CAO@1450C and CAO@1550C powders were subjected to uniaxial compression and pressed into disc-shaped pellets that were 10 mm in diameter and 2-3 mm thick. Then, the antibacterial activity of these pellets was tested.
Characterization
The lattice parameters were determined using an X-ray diffractometer (XRD), (Rigaku, Miniflex Cu K-alpha radiation), with a 2θ scanning range from 10 to 80 o and step interval of 0.02 o . Absorption spectroscopy was also done using a UV-Vis Spectrometer (Perkin Elmer, Lamda 950). A scanning electron microscope (SEM), JSM5800LV, JEOL, Japan with energy dispersive X-ray spectroscopy (EDX) (Oxford ISIS 300) was used to measure and confirm the morphologies of all the cement particles and bacteria, along with the elemental composition of the cement samples.
First-principles calculations
A first-principles approach was employed with the density of states of Ca 12 Al 14 O 33 :2O 2cement and Ca 12 Al 14 O 33 :4e À cement using the Vienna Ab initio Simulation Package (VASP) [25]. The pseudopotential used in this work was based on the Projector Augmented Wave (PAW) approach [26]. The PAW valence states were 3s and 3p, 4s, 3s and 3p, and 2s and 2p for Ca, Al and O, respectively. In this work, the Ceperley-Alder form of the exchange-correlation functional [27], which is the local density approximation (LDA), was used to determine the electronic density of states of both the Ca 12 Al 14 O 33 :2O 2and Ca 12 Al 14 O 33 :4e À cements. A 600 eV plane-wave cutoff energy and 5 Â 5 Â 5 K-point sampling of the Brillouin zone were used for all calculations. The HSE06 hybrid functional was chosen to determine the density of the Ca 12 Al 14 O 33 :4e À states.
Property measurements
The vibration mode of atomic bonding was evaluated using Fouriertransform infrared spectroscopy (FTIR), (Bruker, Senterra). The optical properties of the samples were investigated using a diffused reflectance UV-Visible spectrometer, (DRS) (Perkin Elmer, Lambda 950). Optical measurements were used to determine the absorption coefficient spectra of the specimens at room temperature.
Antibacterial property testing
The antibacterial properties of the CAO@1350C, CAO@1450C and CAO@1550C samples were tested using an agar disk-diffusion method against a gram-negative bacterium, Escherichia coli (E. coli) (ATCC 25922), and a gram-positive bacterium, Staphylococcus aureus (S. aureus) (ATCC 25923). All samples were pellet shaped with a diameter of 10 mm. E. coli and S. aureus were cultivated on Mueller-Hinton agar at 37 °C for 24 h. Then, cells of E. coli and S. aureus were suspended in a 0.85% NaCl solution and the cell suspension was adjusted to 0.5 McFarland (1 × 10^8 CFU/mL). Subsequently, the E. coli and S. aureus cell suspensions were swabbed onto Mueller-Hinton agar. After drying, CAO@1350C, CAO@1450C and CAO@1550C sample pellets were placed on the agar surfaces. Then, the agar plates were incubated (Contherm Scientific Ltd., New Zealand) at 37 °C for 24 h in a dark incubation chamber. Finally, the inhibition zones on the agar plates were photographed and the widths of the inhibition zones reported. Inhibition of E. coli and S. aureus was confirmed using scanning electron microscopy (SEM; JSM5800LV, JEOL, Japan).
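Purely as an illustration of how replicate inhibition-zone widths can be summarized and compared between two pellet types (hypothetical values, not the measured data):

```python
# Hypothetical inhibition-zone analysis sketch; the zone widths (mm) are
# invented placeholders, not data from this study.
import numpy as np
from scipy import stats

zones_insulating = np.array([2.1, 2.4, 2.0])   # e.g. CAO@1350C replicates (mm)
zones_conducting = np.array([4.8, 5.1, 4.6])   # e.g. CAO@1550C replicates (mm)

for name, z in [("insulating", zones_insulating), ("conducting", zones_conducting)]:
    print(f"{name}: mean = {z.mean():.2f} mm, SD = {z.std(ddof=1):.2f} mm")

# Two-sample Student's t test between the two pellet types
t_stat, p_value = stats.ttest_ind(zones_insulating, zones_conducting)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```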
Characterization of Ca 12 Al 14 O 33 cement as a starting material
The CAO@1200C sample was prepared via a solid-state reaction at 1200 °C for use as a starting material for fabrication of conducting Ca12Al14O33 cement (Fig. 2). Fig. 3(a) presents a schematic of the electromagnetic induction heating system used for synthesizing the samples fabricated in the course of this work. The starting CAO@1200C powder was loaded into a carbon crucible. The carbon crucible was wrapped with a Cu induction coil. Cooling water was circulated in the Cu coiled tube to protect against overheating during operation of the magnetic induction heating. An IR detector was used to measure the temperature of the carbon crucible, and this information was used as feedback to control the power of the induction heater. The carbon crucible was rapidly heated from room temperature to 1350 °C, 1450 °C and 1550 °C, with a holding time of 40 seconds for sintering. Then, the samples were rapidly cooled to room temperature. Fig. 3(b) shows an electromagnetic induction heating time of approximately 1 minute for sintering. Fig. 4 shows the XRD patterns of the synthesized CAO@1350C, CAO@1450C and CAO@1550C samples prepared by heating in a carbon crucible by high frequency electromagnetic induction. The XRD results showed that the patterns of the sintered CAO@1350C and CAO@1550C samples matched the JCPDS#09-0413 file for the Ca12Al14O33 cement phase [28,29,30,31,32]. These results confirmed the synthesis of a Ca12Al14O33 cement phase in the CAO@1350C and CAO@1550C samples. In contrast, the synthesized CAO@1450C sample exhibited an XRD pattern different from the one matching the JCPDS#09-0413 file. This result was due to the formation of a glassy Ca12Al14O33 cement phase, corresponding to that observed in a previously published XRD pattern [33]. This implied that the preparation process produced a glass phase of Ca12Al14O33 cement at a temperature of 1450 °C. Thus, this confirmed that all samples formed phases of Ca12Al14O33 cement during high frequency induction heating. However, the CAO@1350C, CAO@1450C and CAO@1550C samples did not exhibit 2θ XRD peaks at 26.5°, 10.8° and 25.2°, which reference graphite, graphene oxide, and reduced graphene oxide, respectively. These results confirmed that graphite from the crucible did not dissolve into the samples [34]. Fig. 5(a-k) presents images of the CAO@1350C, CAO@1450C and CAO@1550C samples fabricated at 1350 °C, 1450 °C and 1550 °C, respectively, and the measurement of their electric resistances using a multimeter. Fig. 5(a) shows a white colored powder sample. This result reveals that the sample did not form a single crystal phase at this temperature. The resistance of the sample was very high, as shown in Fig. 5(b) and (c). Fig. 5(d) presents a yellow colored sample that formed from a single crystal, shown as crushed powder in Fig. 5(e). A single crystal phase was produced that had a very high resistance, as shown in Fig. 5(f) and (g). Fig. 5(h) shows a greenish-black colored sample with a cement of single-crystal type forming a green colored crushed powder (Fig. 5(i)). This sample, sintered at 1550 °C, was electrically conductive as shown in Fig. 5(j) and (k), leading to the observation that sintering at temperatures higher than 1450 °C produced a single crystal form of Ca12Al14O33 cement. Only the CAO@1550C sample displayed electrical conductivity after its formation at 1550 °C.
The green color of this sample was not the same as the CAO@1350C and CAO@1450C samples implying a different mechanism for fabrication of the CAO@1550C sample than the CAO@1350C and CAO@1450C samples.
Absorption coefficients
The differences between the CAO@1550C, CAO@1350C and CAO@1450C cement phases were investigated from their optical absorption coefficients measured using UV Visible Spectrometry. Fig. 6 shows absorption coefficients in the UV spectrum of the prepared CAO@1200C starting powder, and the synthesized CAO@1350C, CAO@1450C and CAO@1550C samples at room temperature over the range of 1.6 eV-6 eV. The absorption spectra of the prepared CAO@1200C sample had the highest absorption peak at 4.1 eV. This was characteristic of the electrically insulating Ca 12 Additionally, the absorption spectra of the CAO@1550C sample displayed its two highest absorption peak positions at 2.8 eV and 1.5 eV. The first peak at 2.8 eV presented electrons transitioning from the occupied cage level (an F þ -like center level due to a relaxation time) to the framework conduction band (FCB), as previously reported [28,29]. The energy at 2.8 eV was due to the inter-cage transition energy for the free electrons in cavity-cage structure. The second peak at 1.5 eV displayed an energy level from the F þ -like center level to the cage-conduction band (CCB). A second peak energy level was reported in range of 0.4-1.5 eV [28,29,30,31]. This energy level was too large for the empty cage and an electron with less energy to occupy the cage. The two peak positions at 2.8 and 1.5 eV were characteristic of conducting Ca 12 respectively, which is good agreement with earlier reports [32]. The CAO@1200C sample, i.e., starting powder, as well as the CAO@1350C and CAO@1450C samples, displayed a light color without free electrons in the structure. The CAO@1550C powder had a green color, with approximately 2Â10 20 cm À3 of free electrons. Additionally, the CAO@1200C, CAO@1350C and CAO@1450C samples were electric insulators while the CAO@1550C sample was electrically conductive. The CAO@1200C starting powder, as well as the CAO@1350C and CAO@1450C samples, were white colored, indicating that they were transparent in the visible light region (1.6-3.2 eV), as well as acting as electrical insulators. These results illustrated that the CAO@1200C, CAO@1350C and CAO@1450C samples were electric insulators as was the insulating Ca 12 Al 14 O 33 :O cement owing to free oxygen radicals in their cavity-cages. The greenish-black color of the CAO@1550C sample indicated light absorption in the region of 1.5-2.8 eV. This confirmed that sintering at 1550 C by rapid induction heating could convert electrically insulating Ca 12 Al 14 O 33 :O cement to conducting Ca 12 Al 14 O 33 :e À cement. This process can replace free oxygen radicals with free electrons in an electron-doped process inside a nano-cage Ca 12 Al 14 O 33 cement structure.
SEM and EDS analysis
The imagery in Fig. 7
Raman spectroscopy analysis
Raman spectroscopy [34,35,36,37] was used to characterize the molecular structure and bonding of the fabricated materials. Fig. 9 shows Raman spectra of the prepared CAO@1200C sample as well as the synthesized CAO@1350C, CAO@1450C and CAO@1550C samples. The absorption bands for the Ca 12 Al 14 O 33 cement structure have normal lattice properties due to vibrations of Al 3þ , Ca 2þ and oxygen ions in the region from 50 to 3000 cm À1 [37]. In previous work [37], Raman peaks between 200 and 1000 cm À1 were ascribed to the lattice framework of the Ca 12 Al 14 O 33 cement structure resulting from Al 3þ ions in a tetrahedral structure. The Raman spectra peaks at 330, 510, and 773 cm À1 of the CAO@1350C, CAO@1450C and CAO@1550C samples corresponded to peaks of the CAO@1200C sample. The Raman peak at 330 cm À1 was caused by the oxygen (O 2-) framework due to vibrations of Ca[AlO 4 ] and Ca-O bonding. The two bands at 510 cm À1 and 773 cm À1 indicated bending vibrations the Al-O-Al linkages and Al-O stretching vibrations, respectively, for a lattice structure with the oxygen and aluminum atoms in a symmetric framework with an Al-O [AlO 4 ] 5sub-structure [38]. The band at 178 cm À1 appeared on the CAO@1200C and CAO@1350C sample and was identified as characteristic of the lattice framework Additionally, the formation of superoxide (O 2 À ) ions occurred via the following chemical reaction in Eq. (3) [33]: Formation of O 2 2À ions can be described by Eq. (4) [39]: and O À ions were not observed in the nano-cage cavity of the CAO@1550C sample. This implied that the CAO@1550C sample was completely converted from insulating Ca 12 Al 14 O 33 cement to its conducting form. This mechanism successfully replaced free oxygen ions in a cavity-cage with free electrons via the reaction in Eq. (5): This mechanism required free radical carbon ions (C 2
²⁻) at a reaction temperature of 1550 °C. In this process, the C₂²⁻ ions were generated from the carbon crucible and reacted with O₂⁻ ions when it was heated to 1550 °C. Then, the reaction removed free oxygen O₂⁻ ions and injected free electrons (e⁻) into the nano-cage structure. No peaks at 1870 cm⁻¹ were observed for the CAO@1550C sample. Kim et al. [33] reported Raman band spectra at 1870 cm⁻¹, ascribing them to C₂²⁻ ions. C₂²⁻ ions were dissolved into the CAO@1550C sample from the C₂²⁻ atmosphere of the carbon crucible and were released out of the nano-cages during cooling. In summary, the extra-framework O₂²⁻ and O₂⁻ ions present in the CAO@1200C, CAO@1350C and CAO@1450C samples were responsible for producing an insulating Ca12Al14O33 cement. The CAO@1550C sample, i.e., Ca12Al14O33:e⁻ cement, had free electrons in a nano-cage structure that allowed for electrical conduction.
Optical properties
The absorption coefficients shown in Fig. 6 present the optical properties of the CAO@1200C, CAO@1350C, CAO@1450C and CAO@1550C samples. They were used to calculate the optical energy gap (E g) following the Tauc relationship in Eq. (6) [20]: (αhυ)^{1/n} = A(hυ − E g), where hυ denotes the photon energy, E g represents the direct optical gap, and n = 1/2 is the value for the allowed direct transition type. Thus, the allowed direct optical gap can be calculated from Eq. (7): (αhυ)² = A(hυ − E g), where A is a constant. The optical energy gap (E g) was obtained by fitting the linear portion of the (αhυ)² versus hυ curve with a straight line and extrapolating it to the intercept with the photon energy (hυ) axis. Fig. 10 shows the resulting plots and the extracted energy gaps.
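A minimal numerical sketch of this linear-fit extrapolation, using synthetic data with an arbitrary placeholder gap rather than the measured spectra:

```python
# Tauc-plot sketch: fit the linear part of (alpha*h*nu)^2 vs h*nu and
# extrapolate to zero to estimate a direct optical gap.  The "spectrum"
# below is synthetic, generated with a placeholder gap of 3.5 eV.
import numpy as np

E = np.linspace(2.0, 6.0, 400)                   # photon energy (eV)
Eg_true = 3.5
alpha_hnu_sq = np.where(E > Eg_true, 5.0 * (E - Eg_true), 0.0)   # (alpha*h*nu)^2
alpha_hnu_sq += np.random.default_rng(0).normal(0, 0.05, E.size)  # noise

# fit only the rising linear region above the absorption onset
mask = alpha_hnu_sq > 0.5 * alpha_hnu_sq.max()
slope, intercept = np.polyfit(E[mask], alpha_hnu_sq[mask], 1)

Eg_est = -intercept / slope                      # x-axis intercept
print(f"estimated direct gap: {Eg_est:.2f} eV")
```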
First-principles calculations
First-principles calculations for evaluating the optical properties of Ca 12 Al 14 O 33 were performed as previously outlined [40,41]. The electronic properties of insulating Ca 12 Fig. 11 (a) clearly shows a peak located at 0.3 eV below the Fermi level corresponding to the 2p states of extraframework 2O 2ions. Additionally, the energy gap in this case is the difference between the energies of the highest states of the framework valence band (FVB) and the lowest states of the CCB. In this case, we found that the energy gap is approximately 4.1 eV. This result was close to the energy gap of the experimental results for CAO@1200C (3.9 eV), CAO@1350C (4.1 eV) and CAO@1450C (4.0 eV) as shown Fig. 10 The electronic density of states of Ca 12 Al 14 O 33 :e À was determined using the local density approximation (LDA) scheme shown in Fig. 11 (b). It is well established that electrical conductivity is directly related to the electron states at the Fermi level. A large number of electron states at the Fermi level energy, E F (the highest occupied state), which is at 0 eV, results in high electrical conductivity.
According to the density of states of Ca12Al14O33:e⁻, it can easily be seen that there are CCB states at the Fermi level. This result implies that electron conduction comes from the CCB in this case. Hence, Ca12Al14O33:e⁻ can be electrically conductive. However, the energy gap obtained from the LDA functional analysis was always smaller than the experimentally derived energy gap. To improve the accuracy of our calculations, a hybrid functional was used. For Ca12Al14O33:4e⁻, we determined the electronic density of states using the Heyd-Scuseria-Ernzerhof 06 (HSE06) hybrid functional, as shown in Fig. 11 (c). The electronic density of states calculated from the HSE06 hybrid functional looks similar to that determined from the LDA functional. Moreover, we found that the FVB states shifted to slightly lower energy levels relative to the FVB of LDA. Hence, the energy gap increased in this case. The energy difference between the highest FVB state and the lowest CCB state was approximately 4.7 eV. Additionally, the energy required for the electronic transition from FVB to FCB was rather high, 7.4 eV. Thus, it is impossible for electrons to be excited from the FVB to the FCB.
The energy difference between the highest FVB state and the lowest CCB state was approximately 3.7 eV-4.7 eV. This result was close to the experimentally derived energy gap for CAO@1550C (3.5 eV). The energy gap between FCB and CCB was about 0.42 eV, representing the metallic bands of Ca 12 Al 14 O 33 :e À that are characteristic of the cage-like structures with no extra-framework oxygen species.. According to Fig. 11 (b) and (c), only electron states of the CCB exist at the Fermi level. Consequently, electronic conduction was observed in this material because of the free electrons in the cage. The above results are consistent with previous work [42,43]. From Figs. 6, 10, and 11, the energy diagram for insulating Ca 12 Al 14 O 33 :2O 2and conducting Ca 12 Al 14 O 33 :e À cements obtained from both experiments and calculations [28,30,43] is summarized in Fig. 11(d).
Moreover, the experimental and calculation results showed that the insulating Ca 12 Al 14 O 33 :2O 2cement had a direct optical gap of approximately 4.0 eV for the electronic transition from the FVB to the CCB. For conducting Ca 12 Al 14 O 33 :e À , the experimental and calculational results showed the direct transition gap from FVB to CCB was around 3.5-3.7 eV. This information indicated that it is impossible for electron excitations to occur between the FVB to FCB due to a rather large energy gap (3.5 eV). The conducting Ca 12 Al 14 O 33 :e À cement displayed metallic behavior when it absorbs external energy at around 2.8 eV, corresponding to the energy level from CCB to FCB for free electron movement between cages. This energy (2.8 eV) corresponds to the absorption peak position at 2.8 eV that results in the greenish-black color of the sintered CAO@1550C sample shown in Fig. 5. Additionally, electrons become free electrons when their energy becomes greater than the work function energy (E vac ), which is 2.4 eV [31,44]. The broad absorption band at 2.8 eV for the CAO@1550C sample is attributed to the transition to intra-cage s-to-p forms of electrons trapped in the cages. The absorption band at 1.5 eV was due to the transition of inter-cage s-to-s charge transfer from an electron trapped in a cage to a vacant neighboring cage [38]. The transition of free electron movement of the CAO@1550C sample (Ca 12 Al 14 O 33 :e À cement) was supported by oxygen gas adsorbed at the surfaces of the materials to produce free radicals [45] according to the reaction Eq. (8):
Schematic diagram for fabrication of conducting Ca 12 Al 14 O 33
A conducting Ca12Al14O33:e⁻ cement was successfully fabricated via a sintering process starting with insulating Ca12Al14O33:O²⁻ cement inside a carbon crucible using high frequency induction heating to 1550 °C. A schematic presenting the mechanism for converting insulating Ca12Al14O33:O²⁻ cement to conducting Ca12Al14O33:e⁻ cement via this process is shown in Fig. 12. Fig. 12(a) shows the starting Ca12Al14O33:O²⁻ cement powder loaded inside a carbon crucible. The Ca12Al14O33:O²⁻ structure is a linkage of calcium, aluminum and oxygen atoms consisting of empty nano-sized cages containing free oxygen ions (O²⁻), loosely bound to the lattice framework, inside the cages as an extra-framework. Fig. 12(b) presents the process for producing conducting Ca12Al14O33:e⁻ cement by rapidly heating the carbon crucible via electromagnetic induction to 1550 °C. The thermal energy (Q) in the carbon crucible was obtained from eddy currents and induced electromagnetic power [46] via the relationship in Eq. (9), where σ(T) is the electrical conductivity as a function of temperature (S/m), ω is the angular frequency (rad/s), and B is the magnetic vector potential.
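For reference, a standard time-harmonic eddy-current heating expression involving exactly these quantities (offered only as a plausible reading of Eq. (9), not as the authors' equation) is:

```latex
% Standard time-harmonic induction-heating power density; sigma(T), omega and
% the magnetic vector potential B are the quantities listed in the text above.
\[
Q \;=\; \tfrac{1}{2}\,\sigma(T)\,\omega^{2}\,\lvert \mathbf{B} \rvert^{2},
\]
% which follows from Q = (1/2)\,\sigma\,|\mathbf{E}|^2 with
% \mathbf{E} = -\,j\,\omega\,\mathbf{B} for the vector potential \mathbf{B}.
```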
At 1550 °C, an active carbon radical (C₂²⁻) species was generated from the heated carbon, producing a C₂²⁻ atmosphere inside the crucible [31,33,34]. Then, the C₂²⁻ ions reacted with the free oxygen ions in the cavity-cages of the Ca12Al14O33:O²⁻ cement powder to produce CO or CO2 gas, leaving an electron in the cavity-cage [34].
This process replaced free oxygen ions with free electrons by reacting active carbon C₂²⁻ ions with the free oxygen ions. First, carbon C₂²⁻ ions were produced inside the crucible. Then, the C₂²⁻ gas reacted with free oxygen species (O²⁻) in the cavity-cages at the surface layer of the insulating Ca12Al14O33:O²⁻ cement powder to produce CO or CO2 gas and ejected free electrons into the cavity-cages via the reaction in Eq. (10). This resulted in the formation of two free electrons in the cage for each O²⁻ ion removed. Finally, the reaction ran to completion in the carbon (C₂²⁻) atmosphere inside the carbon crucible at high temperature [34].
Anti-bacterial activities of Ca 12 Al 14 O 33 cements
Anti-bacterial activities of the as-fabricated CAO@1350C, CAO@1450C and CAO@1550C samples were investigated using two different bacterial species: (1) gram-negative E. coli and (2) gram-positive S. Aureus as shown in Figs. 13 and 14, respectively. The antibacterial properties of the samples were tested using an agar disk-diffusion method. Agar plates were first seeded with the test bacteria, followed by placed on the agar surfaces in a dark incubation chamber at 37 C for 24 h. Both the insulating Ca 12 structural differences in the bacterial cell walls. E. coli and other gramnegative bacteria have an outer cell membrane [47,48]. This provides E. coli with more resistance to reactive oxygen species (ROS). The SEM results show rod and spherical-shaped cells in the growth zones of E. coli and S. aureus in Figs. 13 (a) and (b), and Figs. 14 (a) and (b), respectively. Neither E. coli nor S. aureus grew in the inhibition zones shown in the SEM images (Figs. 13 (f), (g), (h), and Figs. 14 (f), (g) and (h), respectively). These results confirm that both bacterial species were inhibited to varying degrees by the CAO@1350C, CAO@1450C and CAO@1550C cements.
The mechanism inactivating these bacteria was likely the effect of reactive oxygen species (ROS), such as superoxide (O₂⁻) and O⁻ ions, present at concentrations up to 2 × 10²⁰ cm⁻³ [24]. Clearly, the presence of O₂⁻ species at the surface of the samples can cause inactivation of E. coli and S. aureus cells, as reported earlier [24], and this mechanism requires no photocatalytic effect.
In the case of conducting Ca12Al14O33:e⁻ cement, for which Fig. 9 indicates caged free electrons rather than caged oxygen species, the ROS are instead generated at the surface by reaction of these electrons with the atmosphere, as schematized in Fig. 16. These active oxygen species are extremely reactive, oxidizing and decomposing the organic substances of the bacteria. This implies that the conducting Ca12Al14O33:e⁻ cement displayed antibacterial activity with no photocatalytic effect. The reactions generating the ROS species occurred via redox reactions [8], as given in Eqs. (11), (12), (13), (14) and (15), for example CAO(e⁻) + H2O2 → •OH + OH⁻. Superoxide anions (O₂⁻) were first generated from O2 gas in the atmosphere reacting with caged free electrons (Ca12Al14O33:e⁻(cage)) at the surface of conducting Ca12Al14O33:e⁻ cement, following Eq. (11). Then, H2O2 was generated as depicted in Eq. (12). H2O2 reacted with caged free electrons in the conducting Ca12Al14O33:e⁻ cement, along with O₂⁻, to generate OH radicals according to the reactions in Eqs. (13), (14), and (15). In this manner, conducting Ca12Al14O33:e⁻ cement inhibited both E. coli and S. aureus through the action of the nano-caged free electrons (Ca12Al14O33:e⁻(cage)). Fig. 16 schematically shows the proposed mechanism for the conducting Ca12Al14O33:e⁻ cement's antibacterial activity. In summary, in the first mechanism of antibacterial activity, bacteria near the surface of this material were inactivated by OH radicals generated at the surfaces of the cement materials. Testing confirmed that the Ca12Al14O33:e⁻ cement displayed antibacterial action against E. coli and S. aureus. Neither effect involved photocatalytic activity. These cement materials can be used in smart antibacterial walls for operating rooms and hospital wards, restaurants, nurseries, and homes. There are also applications in HVAC and food processing.
Conclusions
Antibacterial Ca 12 Al 14 O 33 material was successfully prepared by a rapid heating of insulating Ca 12 Al 14 O 33 powder in a carbon crucible to a high temperature by high frequency electromagnetic induction heating. The CAO@1550C (Ca 12 Al 14 O 33 :e À ) cement sample formed a conducting phase in the Ca 12 | 7,557.2 | 2019-05-01T00:00:00.000 | [
"Materials Science"
] |
pIL6-TRAIL-engineered umbilical cord mesenchymal/stromal stem cells are highly cytotoxic for myeloma cells both in vitro and in vivo
Background Mesenchymal/stromal stem cells (MSCs) are favorably regarded in anti-cancer cytotherapies for their spontaneous chemotaxis toward inflammatory and tumor environments associated with an intrinsic cytotoxicity against tumor cells. Placenta-derived or TRAIL-engineered adipose MSCs have been shown to exert anti-tumor activity in both in-vitro and in-vivo models of multiple myeloma (MM) while TRAIL-transduced umbilical cord (UC)-MSCs appear efficient inducers of apoptosis in a few solid tumors. However, apoptosis is not selective for cancer cells since specific TRAIL receptors are also expressed by a number of normal cells. To overcome this drawback, we propose to transduce UC-MSCs with a bicistronic vector including the TRAIL sequence under the control of IL-6 promoter (pIL6) whose transcriptional activation is promoted by the MM milieu. Methods UC-MSCs were transduced with a bicistronic retroviral vector (pMIGR1) encoding for green fluorescent protein (GFP) and modified to include the pIL6 sequence upstream of the full-length human TRAIL cDNA. TRAIL expression after stimulation with U-266 cell conditioned medium, or IL-1α/IL-1β, was evaluated by flow cytometry, confocal microscopy, real-time PCR, western blot analysis, and ELISA. Apoptosis in MM cells was assayed by Annexin V staining and by caspase-8 activation. The cytotoxic effect of pIL6-TRAIL + -GFP +-UC-MSCs on MM growth was evaluated in SCID mice by bioluminescence and ex vivo by caspase-3 activation and X-ray imaging. Statistical analyses were performed by Student’s t test, ANOVA, and logrank test for survival curves. Results pIL6-TRAIL + -GFP +-UC-MSCs significantly expressed TRAIL after stimulation by either conditioned medium or by IL-1α/IL-1β, and induced apoptosis in U-266 cells. Moreover, when systemically injected in SCID mice intratibially xenografted with U-266, those cells underwent within MM tibia lesions and significantly reduced the tumor burden by specific induction of apoptosis in MM cells as revealed by caspase-3 activation. Conclusions Our tumor microenvironment-sensitive model of anti-MM cytotherapy is regulated by the axis pIL6/IL-1α/IL-1β and appears suitable for further preclinical investigation not only in myeloma bone disease in which UC-MSCs would even participate to bone healing as described, but also in other osteotropic tumors whose milieu is enriched of cytokines triggering the pIL6.
Background
Mesenchymal stem cells (MSCs) are presently under intensive investigation aimed not only at elucidating their nature and propensity to generate skeletal-related tissues [1,2], but also to develop possible cell-based therapies against a number of diseases including cancer [3][4][5]. In this regard, within a few gene-therapy approaches, MSCs from bone marrow (BM) as well as from adipose tissue (AT) have been modified to enhance their secretory functions in a more targeted fashion either by releasing specific cytokines [6,7] or as pro-apoptogen molecule producers to reverse the cell growth in both in vitro and in vivo models of solid tumors [8][9][10]. Consistent work by different groups of investigators showed indeed that both BM-derived and AT-derived MSCs are capable of inducing the tumor shrinkage in xenografted human glioma [11], gastric [12] or pancreatic [13] cancers, as well as in melanoma [14], and that this native anti-tumor cell growth activity may be definitely enhanced with MSCs transduced to express the tumor necrosis factor related apoptosis inducing ligand (TRAIL), namely a proapoptogen molecule linking the death receptors (DR) 4 and 5 on cancer cells [15]. In this context, fetal MSCs from umbilical cord (UC) engineered to produce a membrane-TRAIL protein have been reported to efficiently restrain the intracranial glioblastoma growth in mice as an effect of their innate chemotactic tendency to migrate toward the tumor microenvironment while exposing the death ligand to the tumor cells [16].
Based on their fetal tissue derivation, UC-MSCs have thus been focused with high interest to design novel cell-based therapeutic strategies. Besides their native restraining activity on Burkitt's lymphoma cell proliferation [17], these cells have been demonstrated by ourselves to express a peculiar molecular profile of inhibitory properties on malignant plasma cells in vitro in relation to their secretome which significantly differs from both BM-derived and AT-derived MSCs [18]. On the other hand, placenta-derived MSCs, as a further fetal tissue source, are also capable of exerting spontaneous killing of multiple myeloma (MM) cells both in vitro and in a severe combined immunodeficient (SCID)-rab mice MM model developing osteolytic lesions [19]. In these studies, in addition to the anti-myeloma activity, MSCs were also found capable of repairing the bone loss within the bone lytic lesions in relation to their bone-regenerating constitutive capability [20].
However, since MSCs, particularly BM-MSCs, physiologically support hematopoiesis, it has also been debated whether a MSC-based model of cytotherapy for MM would sustain, rather than suppress, the proliferation of malignant plasma cells [21]. Contrarily to BM-derived and AT-derived MSCs, those from fetal tissues appear resistant to the genomic conditioning by MM cells that drives the molecular potential to support tumor cell proliferation [22], whereas UC-MSCs apparently show a genomic profile of a definite anti-MM killing secretome [18].
Previous work with TRAIL-engineered BM-MSCs or UC-MSCs, however, included transduction of cells with viral vectors allowing the constant expression of the apoptogen molecule by the cell membranes with the potential risk of cell death induction even in normal cells exposing DR4/DR5 receptors. Unselective binding of target cells thus represents a major drawback of this cytotherapy model since the occurrence of liver as well as of other parenchymal damage has already been reported in preliminary studies by Kim et al. [16] when treating glioma in mice with TRAIL-transduced UC-MSCs.
To overcome such a major drawback and generate UC-MSCs transduced to kill MM cells, we thus used a bicistronic vector to regulate TRAIL expression only after their molecular cross-talk with soluble factors of the MM tumor microenvironment. During the tumor progression of myeloma within the bone marrow, indeed, both interleukin (IL)-1α and IL-1β secreted by MM cells stimulate the stroma to produce IL-6 [23] through the linkage of the early growth response (EGR)-1 protein to the promoter of IL-6 (pIL6) [24]. Therefore, we transduced the UC-MSCs with a vector containing the full-length cDNA sequence of TRAIL under the control of the pIL6 and we evaluated the potential of pIL6-TRAIL + -GFP + -UC-MSCs to eradicate MM cells both in vitro and in SCID mice bearing intratibial human myeloma.
Results from our study support the effectiveness of an anti-MM cytotherapy approach in terms of selective killing of malignant plasma cells.
Cell cultures
UCs were obtained from parturients at the Obstetrics and Gynecology Department after informed consent approved by the local Ethical Committee of the University of Bari. UC-MSCs were isolated and maintained in alpha-Modified Eagle's Medium (α-MEM) (Gibco, Life Technol., Lofer, Austria) and, at the second passage, were used for retroviral transduction. The U-266 MM cell line (DSMZ, Braunschweig, Germany) was grown in complete Roswell Park Memorial Institute (RPMI)-1640 medium (Gibco), whereas HEK293T, a human embryonic kidney cell line (Sigma Aldrich, St. Louis, MO, USA), was cultured in Dulbecco's modified Eagle's medium (DMEM) (Gibco).
TRAIL transduction of UC-MSCs
To generate TRAIL-transduced UC-MSCs, we adopted a bicistronic retroviral vector (pMIGR1) from a murine stem cell virus encoding for the green fluorescent protein (GFP) gene. This vector was modified to include the pIL6 sequence upstream of the full-length human TRAIL cDNA (Fig. 1a). Briefly, a 315-nucleotide fragment of human pIL6 (nucleotides -303 to +12, Ensembl ENSG00000136244), obtained from genomic DNA by cutting with restriction enzymes for BglII and XhoI sites, was amplified in polymerase chain reaction (PCR) by dedicated primers (forward, 5′-GAATTAGATCTTCAAGACATGC CAAAGTGC-3′; and reverse, 5′-GCCATCCTCGAGGG CAGAATGAGCCTCA-3′). Full-length human TRAIL gene (NM_003810.2) was amplified from cDNA using Expand High Fidelity Taq (Roche, Indianapolis, IN, USA) by primers containing XhoI (forward 5′-GCCCTCGAG GATGGCTATGATGGAGGTCCA-3′) and EcoRI (reverse 5′-GCGGAATTCCTTAGCCAACTAAAAAGGCCCC-3′) sites. Both PCR products were digested by XhoI and ligated with each other to generate a single insert. Thus, the pIL6-TRAIL was cloned into pMIGR1 at BglII and EcoRI sites, and defined as pIL6-TRAIL + -GFP + -pMIGR1 vector, whereas the empty GFP + -pMIGR1 vector was used as control.
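As a small illustrative check (not part of the described cloning protocol), the standard BglII, XhoI and EcoRI recognition sequences can be located directly in the primer sequences listed above:

```python
# Locate the standard BglII/XhoI/EcoRI recognition sites in the primer
# sequences quoted above (spaces removed); purely an illustrative check.
SITES = {"BglII": "AGATCT", "XhoI": "CTCGAG", "EcoRI": "GAATTC"}

PRIMERS = {
    "pIL6 forward":  "GAATTAGATCTTCAAGACATGCCAAAGTGC",
    "pIL6 reverse":  "GCCATCCTCGAGGGCAGAATGAGCCTCA",
    "TRAIL forward": "GCCCTCGAGGATGGCTATGATGGAGGTCCA",
    "TRAIL reverse": "GCGGAATTCCTTAGCCAACTAAAAAGGCCCC",
}

for name, seq in PRIMERS.items():
    hits = [enzyme for enzyme, site in SITES.items() if site in seq]
    print(f"{name}: contains {', '.join(hits) if hits else 'no listed site'}")
```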
Retroviruses were produced by cotransfection of HEK293T cells with both pMIGR1 construct and the packaging plasmids, namely pΔ8.9 and pVSV-G, using XTreme Gene 9 DNA transfection Reagent (Roche).
TRAIL expression and modulation in transfected UC-MSCs
After transfection, UC-MSCs were investigated by flow cytometry (FACScanto; Becton-Dickinson, San Diego, CA, USA) for GFP fluorescence to define the efficiency of the vector insertion. The transfected UC-MSCs were then sorted for GFP expression by FACS Aria III (Becton Dickinson, Milan, Italy) to obtain homogeneous populations to be used in subsequent experiments.
Basal levels of TRAIL and further expression on pIL6-TRAIL+-GFP+-UC-MSCs by U-266 supernatant, and by 10 ng/ml of IL-1α and IL-1β, were investigated by flow cytometry, confocal microscopy, quantitative (q)PCR, and western blot (WB) analysis. (Fig. 1 Structure of pMIGR1 vector and steps for UC-MSC transduction. a Structural construction of the bicistronic retroviral vector including both TRAIL and GFP sequences controlled by the IL-6 promoter; the PIRES sequence was inserted to codify two different proteins from a single mRNA. b Sequential phases of multiple cell transfection, viral particle enrichment, and final transduction of UC-MSCs. GFP green fluorescent protein, MSC mesenchymal/stromal stem cell, pIL6 interleukin-6 promoter, PIRES poliovirus internal ribosome entry site, TRAIL tumor necrosis factor related apoptosis inducing ligand, UC umbilical cord.)
TRAIL flow cytometry analysis was assessed using specific phycoerythrin (PE)-conjugated monoclonal (Mo) antibody (Ab) (Abcam, Cambridge, UK) in triplicate with isotype control. Data were reported as both the percentage of TRAIL-positive cells and the mean fluorescence intensity (MFI) ratio as described previously [25]. Furthermore, TRAIL expression by pIL6-TRAIL + -GFP + -UC-MSCs was also analyzed by confocal microscope and NIS element software (C2plus; Nikon Instr., Lewisville, TX, USA). Transfected UC-MSCs were incubated with unconjugated anti-human TRAIL rabbit polyclonal Ab (Cell Signaling, Danvers, MA, USA) and then with anti-rabbit fluorescein isothiocyanate (FITC) (Sigma). The samples were counterstained by tetramethylrhodamine (TRITC)-conjugated phalloidin (Life Technologies) to visualize F-actin and by Hoechst 33342 (Sigma Aldrich) for nuclear staining.
In addition, to assess the TRAIL expression by qPCR, total RNA was extracted by RNeasy kit (Qiagen, Hilden, Germany), and 500 ng was reverse-transcribed by IScript cDNA synthesis kit (Bio-Rad, Hercules, CA, USA). cDNA was then amplified by Fast SYBR Green Master Mix and the StepOne Plus Real Time PCR (Life Technologies Inc., Carlsbad, CA, USA) using the specific primers for TRAIL: forward, 5′-GCTCTGGGCCGCAAAAT-3′; and reverse, 5′-TGCAAGTTGCTCAGGAATGAA-3′. Data were normalized on glyceraldehyde-3-phosphate dehydrogenase (GAPDH) levels and TRAIL amounts were detected as fold change with respect to basal condition.
TRAIL protein was also evaluated by WB analysis using a polyclonal anti-human TRAIL Ab (Abcam) and ECL reagent (Bio-Rad), and then visualized by the UVIchemi imaging system (UVItec, Cambridge, UK) with UVI-1D quantification software. Expression levels were calculated as mean ± standard deviation (SD) of the optical density (OD) ratio between TRAIL and the housekeeping GAPDH across three different experiments. Finally, soluble TRAIL was also measured in supernatants of pIL6-TRAIL + -GFP + -UC-MSCs after treatment with U-266 conditioned medium, or IL-1α and IL-1β, by a dedicated ELISA kit (Abcam).
In vitro apoptosis of U-266 cells
To investigate the pro-apoptogen potential of pIL6-TRAIL + -GFP + -UC-MSCs toward U-266 cells, we carried out cocultures at 1:2 ratio and evaluated the cytotoxicity at 24 h by Annexin V-FITC/propidium iodide (PI) staining (eBioscience, Bender MedSystems GmbH, Vienna, Austria) using the FACScanto. The U-266 population was gated based on forward scatter (FSC) and side scatter (SSC) parameters.
Specificity of apoptosis of U-266 cells following the TRAIL cross-talk was analyzed by caspase-8 activation using the CaspGLOW™ Fluorescein Active Caspase-8 Staining Kit (eBioscience, Bender MedSystems GmbH) and the active caspase-8 was evaluated by flow cytometry.
In vivo functional studies
To investigate the activity of transfected UC-MSCs against MM cells in vivo, we generated stably transduced bioluminescent U-266 cells (Red-Luc + U-266) using RediFect™ lentiviral particles containing red-shifted firefly luciferase (Luciola italica) transgene (Perkin Elmer, Waltham, MA, USA). Briefly, U-266 cells were seeded into a 24-well plate and were incubated with 10^6 viral particles for 24 h. Transduced cells were selected and expanded in medium containing 1 μg/ml puromycin for 4 weeks. Luciferase expression by transduced U-266 was assayed using IVIS Lumina SIII (Perkin Elmer) by adding D-luciferin potassium salt (Perkin Elmer) to the cultures.
Anti-MM activity of pIL6-TRAIL + -GFP + -UC-MSCs was investigated in 6-8-week-old NOD.CB17-Prkdcscid/J mice (Charles River, Milan, Italy) in line with the rules and institutional guidelines of the Italian Ministry of Health. For this, 42 mice were intratibially (IT) injected with 2 × 10^5 Red-Luc + U-266 cells and the tumor engraftment was evaluated by luminescence imaging 3 days after inoculation. Briefly, anesthetized mice were intraperitoneally administered with luciferin (150 mg/kg) and the tumor luminescence was captured 10 min after injection. Regions of interest encompassing the area of signal were defined using Living Image software and the total photons per second (photons/s) were recorded. The MM-bearing mice were then randomly divided into three groups and injected intracardially (IC) as follows: phosphate buffered saline (PBS) for the control group (N = 14); 2.5 × 10^5 GFP + -UC-MSCs (N = 14); and 2.5 × 10^5 pIL6-TRAIL + -GFP + -UC-MSCs (N = 14). Three mice from each group were sacrificed after 12 and 48 h to evaluate the distribution of UC-MSCs and MM cell apoptosis within tibiae. In addition, the tumor burden was evaluated by luminescence imaging in six mice of each condition at different times up to 30 days after the treatment with pIL6-TRAIL + -GFP + -UC-MSCs and GFP + -UC-MSCs as control. The tumor growth rate in each group of MM-bearing mice was expressed as the relative fold increase of tumor volume calculated as the ratio of total photon flux at various time points with respect to basal condition. All animals were ultimately sacrificed at 30 days for ethical reasons.
Ex vivo measurement of pIL6-TRAIL + -GFP + -UC-MSC activity against MM
Several tissue samples, including heart, lung, spleen, liver, and kidney from MM-bearing mice sacrificed at 12 and 48 h, were fixed in formaldehyde and embedded in paraffin, whereas explanted tibiae were decalcified in formic acid and then embedded in paraffin.
To evaluate intratibial MM cell apoptosis, sections 3 μm thick were stained with hematoxylin-eosin and, in parallel, for active caspase-3 with a specific mouse anti-human MoAb (MyBiosource, San Diego, CA, USA). Detection was completed with the EnvisionFlex kit (DakoCytomation, Santa Clara, CA, USA) according to the manufacturer's instructions. All samples were then examined under light microscopy (Olympus Bx61; Shinjuku, Tokyo, Japan). To visualize the macroscopic effect of our model, we also performed radiographic evaluations of the tibiae. Briefly, animals were euthanized by carbon dioxide and X-ray scans were taken at 20 kV and 25 mAs for 5 s using a mammographic device (Model Flat E; Metaltronica, Rome). Films from the three groups were inspected comparatively for visible bone lesions, whose bone devastation size (mm^2) was carefully measured (ImageJ software, version 1.45; NIH, Bethesda, MD, USA).
Statistical analysis
Results are shown as mean ± SD of experimental triplicates. Statistical analyses were performed with Microsoft® Excel (Microsoft, Inc., Redmond, WA, USA) and GraphPad software (GraphPad Software, San Diego, CA, USA). Kaplan-Meier survival curves were generated using MedCalc 12.7.0.0 software and compared using the log-rank test. Student's t test was used to compare two groups, while comparisons among multiple groups (n > 2) were performed by ANOVA; differences were considered significant at p < 0.05.
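As a purely illustrative sketch (not part of the study's own analysis pipeline), the two-group and multi-group comparisons described above could be run in Python on exported measurements; all variable names and numeric values below are hypothetical placeholders, not data from this work.

```python
# Illustrative only: t test and one-way ANOVA as described in the methods.
# The numeric values are hypothetical placeholders, not results from this study.
from scipy import stats

control = [1.00, 1.05, 0.97]      # e.g. OD ratios, untreated cells
cond_medium = [2.41, 2.88, 2.72]  # U-266 conditioned medium
il1_mix = [3.55, 3.90, 3.71]      # IL-1 alpha / IL-1 beta

# Student's t test for a two-group comparison.
t_stat, p_two = stats.ttest_ind(control, cond_medium)

# One-way ANOVA when more than two groups are compared.
f_stat, p_multi = stats.f_oneway(control, cond_medium, il1_mix)

print(f"t test p = {p_two:.4f}, ANOVA p = {p_multi:.4f}")  # significant if p < 0.05
```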
Results
pIL6-TRAIL + -GFP + -pMIGR1 vector construction and UC-MSC transduction
The pMIGR1 bicistronic retroviral vector containing poliovirus internal ribosome entry site (PIRES) and GFP sequences was modified to express full-length TRAIL under the control of pIL6 (Fig. 1a). The pIL6-TRAIL construct was obtained by ligation of the relative PCR products at the XhoI restriction site and subsequently cloned between the BglII and EcoRI sites on pMIGR1. The modified pMIGR1 construct and the viral packaging plasmids were then transfected into HEK293T cells, and the culture supernatant containing viral particles bearing the pIL6-TRAIL + -GFP + -pMIGR1 vector was used to infect UC-MSCs (Fig. 1b). The wildtype pMIGR1 construct was used as the control empty vector.
TRAIL expression in transduced UC-MSCs
The efficiency of transfection was evaluated by GFP expression in flow cytometry. As shown in Fig. 2a (left), GFP was largely detected in 83.5% of GFP + -UC-MSCs and 87.6% of pIL6-TRAIL + -GFP + -UC-MSCs, whereas wildtype UC-MSCs were negative as expected. After cell sorting we obtained GFP + -UC-MSCs and pIL6-TRAIL + -GFP + -UC-MSCs with purity of 99.0% and 99.5% respectively. These last cell populations were expanded and used for subsequent experiments.
The expression of TRAIL, both constitutively and following the variation by U-266 conditioned medium, or by IL-1α and IL-1β, in pIL6-TRAIL + -GFP + -UC-MSCs is depicted by flow cytometry histograms in Fig. 2a (right). As shown, TRAIL was expressed in 62.5% transduced cells in basal conditions suggesting that pIL6-TRAIL + -GFP + -UC-MSCs were able to maintain the constitutive protein production including cytokines that activate pIL6 [18]. However, when treated with the U-266 conditioned medium, pIL6-TRAIL + -GFP + -UC-MSCs significantly enhanced their TRAIL expression up to 80.3% of positive cells (blue histogram), which was confirmed up to 91.2% of positive population by IL-1α and IL-1β (red histogram). Moreover, TRAIL MFI was 74.6 on pIL6-TRAIL + -GFP + -UC-MSCs after stimulation with the U-266 conditioned medium that was significantly increased compared to basal condition MFI (44.7). The TRAIL upregulation was also confirmed after stimulation by IL-1α and IL-1β (MFI = 70.7). The increased TRAIL expression was shown on representative pictures by confocal microscopy. Figure 2b depicts the enriched TRAIL fluorescent signal on pIL6-TRAIL + -GFP + -UC-MSCs stimulated with U-266 conditioned medium (middle), or in response to IL-1α/IL-1β (right) with respect to basal condition (left).
Furthermore, as shown in Fig. 2c, qPCR of TRAIL mRNA confirmed the stimulation of pIL6-TRAIL + -GFP + -UC-MSCs by either U-266 conditioned medium or IL-1α and IL-1β, since the TRAIL gene expression significantly increased up to 2.2-fold and 2.5-fold respectively as compared to basal condition (Fig. 2ci). This was also confirmed by WB analysis that revealed the increase of TRAIL protein after stimulation. The OD ratio of pIL6-TRAIL + -GFP + -UC-MSCs treated with both U-266 conditioned medium and IL-1α/IL-1β, indeed, was 2.67 ± 1.4 and 3.72 ± 2.1 respectively, compared to untreated cells (p < 0.005 in all instances) (Fig. 2cii).
Finally, we also demonstrated the production of soluble TRAIL by transduced UC-MSCs. In fact, ELISA detected minimal mean levels of the ligand (25 pg/ml) in basal condition which, however, was significantly increased in 24-h treated cells up to 140 pg/ml after stimulation with both conditioned medium and IL-1α/ IL-1β (p < 0.005 in all instances) (Fig. 2ciii).
These results suggested that following the transduction of UC-MSCs with the pIL6-TRAIL + -GFP + -pMIGR1 vector, these cells significantly enhanced their TRAIL expression as both membrane-bound and soluble protein after their activation via pIL6.
Fig. 2 (legend, continued) GAPDH was used as loading control. (iii) ELISA measurement of soluble TRAIL in supernatants of pIL6-TRAIL + -GFP + -UC-MSCs after 24 h of treatment with both U-266 conditioned medium and IL-1α/IL-1β. In both instances the increase was significant as compared to control cells (p < 0.05). GFP green fluorescent protein, MSC mesenchymal/stromal stem cell, pIL6 interleukin-6 promoter, TRAIL tumor necrosis factor related apoptosis inducing ligand, UC umbilical cord, WT wildtype
In vitro apoptosis of U-266 cells induced by pIL6-TRAIL + -GFP + -UC-MSCs
The apoptosis of U-266 cells cocultured with pIL6-TRAIL + -GFP + -UC-MSCs was 50.4% ± 6.6 as compared to control wildtype UC-MSCs (19.4% ± 3.4) or to GFP + -UC-MSCs (20.2% ± 2.3). When IL-1α and IL-1β were added to the cocultures, we observed an increase of the programmed cell death associated with the additional expression of TRAIL driven by pIL6 (70.8% ± 6.5) (p < 0.01). Figure 3a shows a representative pattern of U-266 apoptosis induced by TRAIL-transduced UC-MSCs.
To assess the molecular pathway of apoptosis in U-266 cells, we measured their levels of active caspase-8 (Fig. 3b). As depicted, flow cytometric analysis of U-266 cells cocultured with pIL6-TRAIL + -GFP + -UC-MSCs showed a significant enrichment of active caspase-8, up to 97.8% ± 0.7 of this population (p < 0.05). The addition of IL-1α and IL-1β did not further change the active caspase-8 levels (98.9% ± 0.6), which remained far above those of the same cells cultured with wildtype UC-MSCs or GFP + -UC-MSCs (17.9% ± 3.3 and 22.0% ± 2.7 respectively) (p < 0.01).
pIL6-TRAIL + -GFP + -UC-MSCs exert anti-MM activity in vivo
We generated stably transduced bioluminescent Red-Luc + U-266 cells to monitor tumor growth in mice. The timeline of the in vivo experiments is reported in Fig. 4a. Forty-two mice were implanted IT with Red-Luc + U-266 cells and, after 3 days, tumor engraftment was evaluated by bioluminescence imaging. The recorded total photon flux at day 3 was variable among animals, ranging from 3.5 × 10^6 to 5 × 10^7 photons/s with a mean of 2.64 × 10^7 ± 2.16 × 10^7. The mice, distributed into the three groups, were then injected IC with PBS (group A), GFP + -UC-MSCs (group B), or pIL6-TRAIL + -GFP + -UC-MSCs (group C). The drop-out rate [26], representing animals dying after a failed ventricular injection, was approximately 14%.
To investigate the organ distribution of transduced UC-MSCs in mice, we analyzed tissues isolated at 12 h after injection IC. PCR analysis for GFP confirmed the presence of pIL6-TRAIL + -GFP + -UC-MSCs in tibiae as well as in lung, heart, and renal glomeruli as described previously [27], while transduced cells were not detected in spleen and liver (Fig. 4b). However, there was no evidence of toxicity on healthy tissues evaluated at the experimental endpoint (data not shown).
The MM burden was evaluated by bioluminescence in mice of each group at experimental time points (10, 20, 30 days), as depicted by representative images in Fig. 4c (upper). The quantitative analysis (Fig. 4c, lower), calculated as the ratio of total photon flux at various time points with respect to basal condition, showed a decrease of tumor growth, although not significant, already on D 10 in group C compared to groups A and B (relative fold increase: group C = 185 ± 80; group A = 778 ± 131; group B = 595 ± 221). At D 20 the differences between groups were increased, becoming significant (relative fold increase: group C = 360 ± 96; group A = 1442 ± 288; group B = 1264 ± 182), and this effect was maintained at the endpoint (relative fold increase: group C = 844 ± 167; group A = 2076 ± 332; group B = 2136 ± 325) (p < 0.03 in each instance).
Bone devastation induced in vivo by MM cells was also evaluated by X-ray imaging. The average size of the osteolytic areas measured on X-ray films of tibiae from group A and B mice was larger, with disrupted cortical bone (mean value 2.5 ± 0.35 mm^2), compared with that observed in pIL6-TRAIL + -GFP + -UC-MSC-treated mice (0.96 ± 0.39 mm^2) (p < 0.02) (Fig. 4c).
Finally, to demonstrate MM cell apoptosis in tibia samples, we measured the active caspase-3 by immunohistochemistry. Figure 5a shows the enrichment of plasma cells evidenced by hematoxylin-eosin staining in the bone matrix of mice tibiae in each group. As shown, plasma cells are accumulated within resorptive lacunae where they appear strictly adjacent to the resorbed bone. Also, the active isoform of caspase-3 was detected already after 12 h from injection IC of pIL6-TRAIL + -GFP + -UC-MSCs and this effect persisted up to 48 h (Fig. 5b). On the contrary, the sections of mice inoculated with GFP + -UC-MSCs were negative for active caspase-3.
These data suggest that pIL6-TRAIL + -GFP + -UC-MSCs were able to remarkably inhibit the tumor burden and ultimately restrain the bone devastation by U-266 cells.
Discussion
Novel anti-cancer cytotherapies are presently under intensive investigation and MSCs have potential as cell vehicles for targeted delivery and/or local production of cytotoxic molecules for tumor cells. Besides their large bioavailability and easy recruitment from the human body, the suitability of these cells in fighting cancers is based on their tropism toward inflamed sites including the tumor microenvironment, as well as on their proclivity to be genetically engineered with gene sequences encoding for anti-tumor biological agents [28,29]. In this context, we have demonstrated previously that UC-MSCs constitutively migrate in vitro toward myeloma cells and express a defined anti-MM secretome [18]. Here, we provide further evidence that, once engineered to express TRAIL in the presence of malignant plasma cells, UC-MSCs specifically induce apoptosis in these cells both in vitro and in vivo in SCID mice bearing human MM. The selective activity against MM cells is related to the control of TRAIL production by pIL6 which is inserted before the TRAIL sequence within the vector, thus rendering suitable this cell-based approach of anti-MM treatment for future translation in human studies.
Previous models of cytotherapies with MSCs in MM included placenta-derived MSCs [19] that showed a native activity against MM cells in vitro in parallel with a moderate osteogenic potential in vivo in healing the MM bone lesions in SCID-rab mice. These studies, however, supported the general cytostatic property of MSCs which was apparently mild and unselective for MM cells while the bone-repairing capacity was also related to the constitutive osteoblast differentiation of cells belonging to the mesenchymal lineage as placenta-derived MSCs [19]. Further work by ourselves adopted AT-MSCs which were transduced with TRAIL and definitely promoted apoptosis in U-266 cells by the caspase-3 activation [30] in a similar manner as for other tumor models using these engineered MSCs [31][32][33][34]. In the present study, we preferred to transduce UC-MSCs with TRAIL for several reasons: large availability of MSCs from UCs; spontaneous chemotactic activity toward the tumor; and constitutive anti-MM activity by their secretome. We also used UC-MSCs since fetal MSCs are very poorly immunogenic for the minor expression of costimulatory molecules [18] while showing multipotent plasticity [35] and no tumorigenic potential [36]. In addition, UC-MSCs have been reported to show higher kinetics of proliferation and karyotypic stability in culture than AT-MSCs and BM-MSCs [37,38].
Fig. 3 In vitro apoptosis of U-266 cells induced by pIL6-TRAIL + -GFP + -UC-MSCs. a Apoptosis in U-266 cells by transduced UC-MSCs was measured by Annexin V/PI staining using flow cytometry. Representative dot plots revealed that the apoptosis extent was significantly increased (43.8%) after 24 h of coculture with pIL6-TRAIL + -GFP + -UC-MSCs with respect to control UC-MSCs (19.4%) and GFP + -UC-MSCs (20.2%). The effect was also enhanced when IL-1α and IL-1β were added to the cocultures (70.9% of U-266 cell apoptosis). b Active caspase-8 in U-266 cells as signature of TRAIL-induced apoptosis was measured by flow cytometry after 24 h of coculture with UC-MSCs, GFP + -UC-MSCs, and pIL6-TRAIL + -GFP + -UC-MSCs. This representative experiment depicts the activity of caspase-8 in 97.8% of U-266 cells cocultured with pIL6-TRAIL + -GFP + -UC-MSCs as compared to 17.9% of control UC-MSCs, and 22% of GFP + -UC-MSCs. Such high levels of active caspase-8 were not further modified by supplementing the cultures with IL-1α/IL-1β (98.9% of positive cells). GFP green fluorescent protein, MSC mesenchymal/stromal stem cell, pIL6 interleukin-6 promoter, TRAIL tumor necrosis factor related apoptosis inducing ligand, UC umbilical cord, WT wildtype
Animal models of xenogenic tumors treated with TRAIL-transduced MSCs provided evidence on the effectiveness of this cell-based approach in relation to the expression of DR4/DR5 by the target tumors [16,33,34,39], although several concerns regard the defective specificity since the engineered MSCs also deliver TRAIL to both tumor and normal cells, and therefore normal tissues including liver and kidney are recurrently damaged following the systemic injections of these cells [27]. On the other hand, it has been reported that, particularly in hematological malignancy including MM, based on the high expression of DR4/DR5 molecules, the soluble TRAIL isoform is also capable of inducing apoptosis in a similar manner, although less effective, than in the membrane bound form as presented by the transduced cells [40]. Thus, differentially structured viral vectors were engineered for TRAIL gene insertion to transduce MSCs and the full-length gene sequence codifying its soluble form has been variably inserted. However, some evidence demonstrated that the cancer cell-killing potential induced by the full-length form expressed on MSC membrane is more efficacious than that obtained by the soluble molecule [15].
Fig. 4 In vivo effect of pIL6-TRAIL + -GFP + -UC-MSCs on MM growth. a Timeline elucidating the experimental design of SCID mice xenografted intratibially (IT) with 2 × 10^5 Red-Luc + U-266 cells (D 0 ), followed after 3 days (D 3 ) by injection intracardially (IC) with PBS (group A), 2.5 × 10^5 GFP + -UC-MSCs (group B), or 2.5 × 10^5 pIL6-TRAIL + -GFP + -UC-MSCs (group C) (N = 14 for each group). Three mice for each group were sacrificed randomly, respectively at 12 h (h 12 ) and 48 h (h 48 ) after injection IC for ex vivo evaluation of UC-MSC distribution and tumor apoptosis. The remaining mice were investigated at defined time points (D 3 , D 10 , D 20 and D 30 ) by bioluminescence imaging with IVIS Lumina. b The biodistribution of pIL6-TRAIL + -GFP + -UC-MSCs, revealed by PCR analysis for GFP after injection IC, confirmed the presence of these cells in tibiae as well as in lung, heart, and kidney, while transduced cells were not observed in spleen and liver. Actin was used as loading control. c Representative bioluminescence images at different time points of MM-bearing mice, injected IC with PBS (group A), GFP + -UC-MSCs (group B), or pIL6-TRAIL + -GFP + -UC-MSCs (group C). The color scale ranged from blue (just higher than background noise; set to 1 × 10^7 photons/s/cm^2/sr) to red (at least 2.5 × 10^8 photons/s/cm^2/sr). Quantitative analysis of tumor growth in mice was assessed by Living Image Software. Data represent the relative increase of median photon flux (photons/s) within ROI areas in each group at different time points. Tumor growth was timely reduced in mice treated with pIL6-TRAIL + -GFP + -UC-MSCs as compared to both PBS and GFP + -UC-MSC treated groups (p < 0.03). Error bars represent the standard error of the mean (SEM); *p value calculated by Student's t test. b Immunostaining of active caspase-3 showed a high extent of apoptosis in tibiae sections of mice injected systemically with pIL6-TRAIL + -GFP + -UC-MSCs after 12 and 48 h. Active caspase-3 was undetectable in control mice treated with GFP + -UC-MSCs. GFP green fluorescent protein, MSC mesenchymal/stromal stem cell, pIL6 interleukin-6 promoter, TRAIL tumor necrosis factor related apoptosis inducing ligand, UC umbilical cord
Therefore, to maintain the full-length structure of the apoptotic protein and to selectively kill MM cells, we designed a vector inducing the expression of TRAIL only in the presence of cytokines secreted locally within the MM microenvironment. To this end, we adopted the retroviral bicistronic vector pMIGR1 incorporating the full-length TRAIL gene whose expression is regulated by pIL6. Several cytokines such as IL-1α and IL-1β are largely secreted by MM cells within the marrow microenvironment and activate IL-6 secretion by BM MSCs [41][42][43] through stimulation of its own promoter by EGR-1 [23,24,44]. We reasoned that the pIL6 insertion just before TRAIL within pMIGR1 would be able to upregulate the expression of the protein by pIL6-TRAIL + -GFP + -UC-MSCs in response to IL-1α and IL-1β. Despite a basal expression of TRAIL, which was probably driven by an autocrine loop of cytokine secretion regulated by the UC-MSC secretome [18], in our experiments we found that this structural variant of the vector significantly increased the production of TRAIL as both membrane-bound and soluble protein, also in response to the conditioned medium from U-266 cells. On the other hand, the minor expansion of the osteolytic lesions in the tibiae of mice treated with these cells confirmed the capacity of our vector to trigger the secretion of the apoptosis inducer in the presence of MM cells, resulting in tumor cell death by activation of caspase-8 [31,39]. This result was sufficient to support the efficacy of our cell-based approach against MM in mice.
The anti-MM killing of these cells was tested in our orthotopic in vivo model of MM. SCID mice were injected IT with U-266 cells to resemble the human MM model in which malignant plasma cells expand within the bone marrow and promote the bone resorption inducing osteolytic lesions [45]. Then, after developing the bone lesions, MM-bearing mice were treated with injections IC of pIL6-TRAIL + -GFP + -UC-MSCs and periodically investigated for the anti-MM effect as well as for their tissue distribution. By evaluating the GFP expression in several ex vivo organs, we found that the accumulation of pIL6-TRAIL + -GFP + -UC-MSCs occurred in tibiae as well as in lung, heart, and renal glomeruli, thus supporting the typical tropism also to the tumor sites ascribed to MSCs [27]. However, these cells did not produce local damage in healthy tissues and they had no toxicity that could compromise the quality of life in mice. This is probably due to the low levels of both membrane and soluble TRAIL basally expressed by pIL6-TRAIL + -GFP + -UC-MSCs, since the transcription of TRAIL is reinforced by both IL-1α and IL-1β usually abundant within the tumor sites. Such a spontaneous homing of UC-MSCs toward the myeloma tumor milieu within tibiae is apparently related to the overexpression of a number of genes such as growth factor receptorbound 2 (GRB2), which are activated to promote the cell migration toward inflamed sites in response to the cell stimulation by tumor-derived chemokines [18]. Thus, in line with previous studies in different tumor models [6,46], we interpreted the accumulation of pIL6-TRAIL + -GFP + -UC-MSCs within tibiae of MM-bearing mice as an effect of their typical attraction toward the cytokine-enriched MM environment. Our result is consistent when considering that these cells migrated toward the tumor sites after the injections IC that were adopted in our animal model to avoid the potential entrapment of transduced UC-MSCs within lungs after intravenous administration.
In our model, as pIL6-TRAIL + -GFP + -UC-MSCs were induced in vitro to express TRAIL in response to the U-266 conditioned medium and IL-1α and IL-1β stimulation, we observed in vivo the selective pressure of the MM microenvironment to induce TRAIL at high levels. In fact, after 12 h post inoculation we observed on tibiae sections an extended apoptosis of U-266 cells by caspase-3 activation. This effect persisted up to 48 h, suggesting that the pIL6-TRAIL + -GFP + -UC-MSCs exhibit a higher survival rate in bone tissue compared to other organs in which these cells are cleared faster [20], while the reduction of the bioluminescent signal of the tumor burden at different time points and even 30 days after injection supported their MM cytotoxic activity.
It has been described that placenta-derived MSCs are capable of rebuilding the MM bone lesions in mice [19]. In our model the transduced UC-MSCs also induced a partial restoration of the bone structure as shown by X-ray analysis of tibiae. Although this aspect needs to be confirmed by further work, UC-MSCs also appear capable of restoring the balance between osteoblasts and osteoclasts altered by intratibial MM expansion. The accumulation of functional MSCs within tibiae was apparently functional in interacting and stimulating the bone marrow osteoblast precursors by secreted factors that induce their differentiation into bone-building osteoblasts [20]. At the same time, UC-MSCs also restrain the osteoclast activity by secreting specific molecules [19].
Conclusions
Our model of cytotherapy appears suitable in overcoming the drawback of the high soluble TRAIL amounts injected to induce tumor suppression, which have a short half-life when systemically infused and for which previous clinical trials failed to obtain the expected results [47]. By contrast, since pIL6-TRAIL + -GFP + -UC-MSCs are committed to overexpress TRAIL only in the presence of specific cytokines secreted within the bone MM microenvironment, this cell-based therapy model would be suitable for human studies not only in controlling the marrow MM progression, but also in other osteotropic tumors since preclinical observation confirmed the biosafety of viral-transduced MSCs for TRAIL expression [48].
"Biology"
] |
Privacy Enhancement on Unilateral Bluetooth Authentication Protocol for Mobile Crowdsensing
As an open standard for short-range radio frequency communications, Bluetooth is suitable for Mobile Crowdsensing Systems (MCS). However, the massive deployment of personal Bluetooth-enabled devices also raises privacy concerns for their wielders. Hence, we investigate the privacy of the unilateral authentication protocol according to the recent Bluetooth standard v5.2. The contributions of the paper are twofold. (1) We demonstrate that the unilateral authentication protocol suffers from a privacy weakness. That is, the attacker is able to identify the target Bluetooth-enabled device once he has observed the device's previously transmitted messages during a protocol run. More importantly, we analyze the privacy threat to the Bluetooth MCS when the attacker exploits the proposed privacy weakness under typical Internet of Things (IoT) scenarios. (2) An improved unilateral authentication protocol is therefore devised to repair the weakness. Under our formal privacy model, the improved protocol provably solves the traceability problem of the original protocol in the Bluetooth standard. Additionally, the improved protocol can be easily adapted to the Bluetooth standards because it merely employs the basic cryptographic components available in the standard specifications. In addition, we also suggest and evaluate two countermeasures, which do not need to modify the original protocol.
Introduction
Bluetooth [1] is an open technology standard for wireless short-range radio frequency communications. Bluetooth hardware and software modules are already integrated into many kinds of consumer and business devices including smartphones, headsets, laptops, keyboards, mice, tablets, and automobiles. Actually, Bluetooth offers a highly practical approach to establishing Mobile Crowdsensing Systems (MCS) because of its universality, convenience, and adaptability.
In order to protect device users and their sensitive data, Bluetooth provides a security solution for the hostile environments [2]. More precisely, the effective Bluetooth standard specifications [3][4][5] define four security modes, namely, modes 1 through 4. Each Bluetooth-enabled device must operate in one of the four security modes. Security mode 1 does not employ any security measure. Security modes 2 and 4 are treated as the service level-enforced security modes. That is to say, the device in security mode 2 or 4 will not start any security procedure until it receives or initiates a channel establishment request. Security mode 3 is the link level-enforced security mode, where the security procedures are initiated before the physical link is fully established. Security modes 2, 3, and 4 are all composed of three crucial procedures, i.e., pairing and link key generation, authentication, and confidentiality. It needs to be pointed out that the authentication procedure and the confidentiality procedure in security modes 2, 3, and 4 are fully the same. We briefly review the three procedures as follows:
Pairing and link key generation: this procedure is responsible for establishing the link key between a pair of the devices and further binding the trusted devices. The link key will be used throughout the subsequent security procedures. Security modes 2 and 3 employ the same pairing and link key generation scheme called Personal Identification Number (PIN) pairing. The PIN pairing is analyzed and improved in the literature such as [6,7]. However, in security mode 4, Secure Simple Pairing (SSP) is used instead. A series of works such as [8][9][10][11] address the security properties of SSP under the distinctive practical concerns.
Authentication: this procedure applies to two paired devices and needs their shared link key generated by the previous procedure. The procedure goal is to validate the legitimate identity claimed by the pairing device itself. When the authentication attempt fails, any retry with the same identity will be delayed for a waiting interval.
Confidentiality: this procedure provides a separate confidential service to the data transmitted between the pairing devices.
The widespread use of Bluetooth-enabled devices has given birth to many wireless personal applications, such as connecting mobile phones to wireless headsets, emergency systems of cars, and digital wallets and merchants. Bluetooth is often used for establishing Wireless Personal Area Network (WPAN) because of its usability and performance. In fact, the Bluetooth standard is adapted in IEEE 802.15 [12] for WPAN. Obviously, it is important to protect the privacy of the users under Bluetooth WPAN. However, the absence of physical contact during communications and the expected ubiquity of sensitive applications (such as the MCS [13,14]) will encourage nefarious entities to observe and track the devices through their transmitted Bluetooth messages. In a Bluetooth WPAN application, the attacker may intercept and analyze the transmitted messages among devices. If at this time a device is linked to a user, the identity of the user will then be disclosed by his device. Figure 1 illustrates a Bluetooth application scenario where the attacker intercepts and analyzes the transmitted messages among devices. Hence, the need for Bluetooth to be resistant against the privacy threats arises.
To guarantee the privacy of the user, the transmitted messages of the security mode in particular should not be exploited to identify the target device.
The National Institute of Standards and Technology (NIST) [15] surveyed the privacy features and threats according to the early versions of the Bluetooth standard. A few works [16][17][18] showed that the privacy of a particular user can be compromised if the Bluetooth-enabled device address associated with the user is captured, and therefore proposed improved schemes to repair it. Moreover, some devices [19,20] have been implemented with protection mechanisms for their Bluetooth addresses. Many researchers [21][22][23][24][25] found that the public advertising channels in Bluetooth Low Energy (BLE) may leak the identity information of the device and therefore designed privacy countermeasures. Celosia and Cunche [26] reported a timing attack, which can be triggered by a remote attacker, to infer the state of a device from Bluetooth information and undermine the privacy of the user. As a potential privacy threat, Bluetooth traffic can be sniffed even if the device is in indiscoverable mode, as demonstrated by Albazrqaoe et al. [27]. Bello-Ogunu et al. [28] developed a privacy management framework that provides a policy configuration platform for BLE beacons. We [29] addressed the privacy vulnerability of SSP's BLE version in security mode 4. In addition, Bluetooth systems [30][31][32] at the application level are designed to protect the privacy of the user. Due to the fast development of Bluetooth WPAN and its integration into the Internet of Things (IoT), more and more privacy protection features are now already included in the Bluetooth standard specifications [4].
To the best of our knowledge, there is still no active research on the privacy of the authentication procedure in the Bluetooth security solution. However, the attacker may exploit the vulnerabilities in the authentication procedure to compromise the privacy of the user, especially when the Bluetooth-enabled devices are deployed in the IoT environment. Moreover, the overall strength of privacy protection would be dominated by the weakest procedure in the Bluetooth security solution. Hence, we focus on the privacy enhancement of the authentication procedure according to the recent Bluetooth standard v5.2 [4].
Two challenge-response protocols, i.e., the unilateral (or legacy) authentication protocol and the mutual (or secure) authentication protocol, are used to realize the authentication procedure in the Bluetooth standard v5.2. In this paper, we will systematically investigate the privacy of the unilateral authentication protocol. We demonstrate that the attacker can track the target Bluetooth-enabled device once he observed the device's previous transmitted messages during the protocol run. We further evaluate its impact on the Bluetooth MCS, when our proposed privacy weakness is exploited under typical IoT scenarios. An improved unilateral authentication protocol is therefore proposed to overcome the privacy weakness in the original protocol. Without high extra implementation costs, our improved protocol provably solves the traceability problem in the original protocol. In addition, two non-protocol countermeasures are also suggested and evaluated for the privacy enhancement.
Review of Unilateral Bluetooth Authentication Protocol
In the unilateral authentication protocol [4], each Bluetooth-enabled device is referred to as either the claimant or the verifier. The claimant is a device manifesting its own identity to the verifier, and the verifier is a device validating the identity of the claimant. The protocol validates the devices by verifying the knowledge of the shared link key, which is established in the pairing and link key generation procedure. The protocol makes use of a cryptographic hash algorithm E 1 with an output of 128 bits. The algorithm E 1 is based on the SAFER+ block cipher but with some minor modifications [4]. Let K LINK be the shared link key between the claimant and the verifier. Let BD_ADDR be the claimant's device address. The protocol is shown in Figure 2, and the authentication session is described as follows.
Step 1: the verifier generates a 128-bit random number AU_RAND as the challenge and then sends it to the claimant.
Step 2: both devices calculate the authentication token {SRES, ACO} = E 1 (K LINK , BD_ADDR, AU_RAND), where SRES, known as the signed response, is the 32 most significant bits of the 128-bit output of the algorithm E 1 , and ACO, known as the authenticated ciphering offset, is the remaining bits for creating the encryption key in the confidentiality procedure.
Step 3: the authentication response SRES is sent from the claimant to the verifier.
Step 4: the verifier compares the received SRES with the counterpart calculated locally. If they are the same, the protocol run succeeds and the verifier accepts the identity of the claimant; otherwise, the protocol run fails.
The link key is either semi-permanent or temporary according to the Bluetooth standard specifications [4]. A semi-permanent link key may still be used after the current secure session is terminated. This implies that a semi-permanent link key may be activated in the authentication of several subsequent connections between the devices. The lifetime of a temporary link key is limited by the lifetime of the current secure session; that is, it shall not be reused in a later secure session. In brief, a link key will remain valid until the next successful run of the pairing and link key generation procedure.
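To make the message flow concrete, the following minimal Python sketch walks through one run of the challenge-response exchange. It is illustrative only: the Bluetooth E 1 algorithm is SAFER+-based, and an HMAC-SHA256 stand-in is assumed here in its place purely so the flow is runnable; all helper names are ours, not Bluetooth API names. The same helpers are reused in the later sketches.

```python
# Minimal sketch of the legacy unilateral authentication exchange.
# E1 is SAFER+-based in the standard; HMAC-SHA256 is used here only as a stand-in.
import hmac, hashlib, os

def e1(k_link: bytes, bd_addr: bytes, au_rand: bytes):
    """Stand-in for E1: split a 128-bit digest into SRES (32 bits) and ACO (96 bits)."""
    digest = hmac.new(k_link, bd_addr + au_rand, hashlib.sha256).digest()[:16]
    return digest[:4], digest[4:]

def verifier_challenge() -> bytes:
    return os.urandom(16)                              # 128-bit AU_RAND

def claimant_respond(k_link: bytes, bd_addr: bytes, au_rand: bytes) -> bytes:
    sres, _aco = e1(k_link, bd_addr, au_rand)
    return sres                                        # only SRES is sent back

def verifier_check(k_link, bd_addr, au_rand, sres_received) -> bool:
    sres_local, _aco = e1(k_link, bd_addr, au_rand)
    return hmac.compare_digest(sres_local, sres_received)

# One protocol run between paired devices sharing K_LINK and knowing BD_ADDR.
k_link, bd_addr = os.urandom(16), os.urandom(6)
au_rand = verifier_challenge()                         # Step 1
sres = claimant_respond(k_link, bd_addr, au_rand)      # Steps 2-3
assert verifier_check(k_link, bd_addr, au_rand, sres)  # Step 4
```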
Privacy Weakness of Unilateral Bluetooth Authentication Protocol
Assume that the attacker eavesdrops and records AU_RAND and SRES during a protocol run between the claimant (Device 1) and the verifier (Device 2). Herein, this assumption is reasonable because AU_RAND and SRES are insecurely transmitted over a Bluetooth wireless channel. As shown in Figure 3, the attacker can use the past AU_RAND and SRES to validate whether the claimant is Device 1 or not by the following steps.
Step 1: replay the recorded AU_RAND in Step 1 of the unilateral authentication protocol.
Step 2: omit Step 2 of the unilateral authentication protocol.
Step 3: upon receiving the claimant's SRES in Step 3 of the unilateral authentication protocol, compare it with the recorded SRES. If they are equal, the claimant is Device 1.
If the claimant is Device 1, the same BD_ADDR, AU_RAND, and K LINK should be used as the input of algorithm E 1 during the attacker's authentication run. Hence, the algorithm E 1 will output the same SRES as the attacker's recorded one. On the contrary, if the claimant is not Device 1, the different K LINK and BD_ADDR should be used as the input of the algorithm E 1 . In this case, the algorithm E 1 will output the same SRES as the previous recorded one with a negligible probability. Hence, the attacker can always use the proposed attack to identify Device 1. We further discuss the proposed attack as follows.
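Assuming the HMAC-based stand-in helpers (and the k_link, bd_addr values) from the previous sketch, the tracking check amounts to one replayed challenge and one comparison:

```python
# Replay-based tracking check, reusing e1/claimant_respond from the sketch above.
def is_device_1(recorded_au_rand: bytes, recorded_sres: bytes, probe) -> bool:
    """Replay the old challenge (Step 1); a matching SRES (Step 3) identifies Device 1."""
    return probe(recorded_au_rand) == recorded_sres

# Device 1 always answers with its fixed K_LINK and BD_ADDR ...
device1 = lambda au_rand: claimant_respond(k_link, bd_addr, au_rand)
# ... while any other device uses a different link key and address.
other = lambda au_rand: claimant_respond(os.urandom(16), os.urandom(6), au_rand)

recorded_au_rand = verifier_challenge()
recorded_sres = device1(recorded_au_rand)            # values eavesdropped earlier
print(is_device_1(recorded_au_rand, recorded_sres, device1))  # True
print(is_device_1(recorded_au_rand, recorded_sres, other))    # False, except with negligible probability
```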
To prevent the proposed attack, the device can update its K LINK before each authentication session. However, we argue that it is an impractical method due to the heavy overheads of the pairing and link key generation procedure. If K LINK and BD_ADDR are available, the attacker is also able to identify the device by the authentication session. However, this implies that the attacker must be powerful enough to break into the device. In practice, it is hard for the attacker to directly crack a device. The proposed attack merely exploits the vulnerabilities in the unilateral authentication protocol of the Bluetooth standard. Moreover, since security modes 2, 3, and 4 all employ this protocol, the proposed attack is regarded as a broad-spectrum tracking method.
In addition, it is worth noting that the Bluetooth standard specifications [4] give a countermeasure to prevent the attacker from repeating the authentication procedure. That is, "when the authentication attempt fails, a waiting interval shall pass before the verifier will initiate a new authentication attempt to the same claimant, or before it will respond to an authentication attempt initiated by a device claiming the same identity as the failed device. For each subsequent authentication failure, the waiting interval shall be increased exponentially. For example, after each failure, the waiting interval before a new attempt can be made could be twice as long as the waiting interval prior to the previous attempt. The waiting interval shall be limited to a maximum." However, this countermeasure cannot overcome the proposed privacy weakness on the unilateral authentication protocol because the attacker merely uses one authentication attempt to confirm the identity of the claimant.
Privacy Threat Analysis of Bluetooth MCS due to Proposed Weakness
In some Bluetooth applications such as [9,11], the privacy of the user is not a serious concern. However, when the Bluetooth devices enter into the MCS, we must take user privacy very seriously because a large amount of personal sensitive data may be collected automatically. In this section, we use the proposed privacy weakness to analyze the privacy threat of the Bluetooth MCS under several typical IoT scenarios. We know that MCS architecture always consists of three tiers, i.e., the devices, the edge gateway, and the cloud. The devices include networked sensors, actuators, and embedded communication hardware, which adopt the widely used standards such as Bluetooth and Zigbee.
Bluetooth IoT Scenarios.
Since the standard v4.2 [33], the Bluetooth group has been promoting IoT technology. With the fast development of IoT applications, many MCS designs have been proposed using Bluetooth-enabled devices and their networks. For example, the MCS architecture under the Bluetooth IoT [34] can be divided into six layers: hardware layer, microcontroller layer, Bluetooth connectivity layer, connectivity layer, Bluetooth IoT cloud stack layer, and application layer. We can enumerate several typical Bluetooth IoT scenarios as follows.
Child Care.
In this scenario such as [35], children wear wristbands with Bluetooth-enabled devices, and the Bluetooth network would provide the nursery teachers with information on whether the children are still within reach. Furthermore, the nursery administrators could get statistics of the total daily children movement information from their Bluetooth-enabled devices.
Medical and Healthcare.
Bluetooth-enabled health monitoring devices [36] can be deployed to realize remote health monitoring and emergency notification systems. These devices can range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants. Furthermore, if the hospital beds are equipped with Bluetooth-enabled devices, the doctor can detect whether the hospital bed is occupied or when a patient is attempting to get up.
Animal Tracker.
By mounting the relay nodes around the pastures, farmers can monitor the livestock with Bluetooth-enabled devices and keep track of each individual animal of the herd. Moreover, with the Bluetooth data of the livestock, precision feeding mechanisms can be implemented, using artificial intelligence to count the livestock, analyze their health trends, and evaluate the breeding effectiveness.
Barcode Scanner.
Bluetooth IoT is suitable for barcode scanner applications, since Bluetooth network could cover a large warehouse or even multiple large warehouses potentially with a lot of obstacles and walls. Each warehouse worker may use his own barcode scanner with the constant Bluetooth network coverage.
Greenhouse Monitoring System.
Bluetooth-enabled devices could collect plant data on temperature, rainfall, humidity, wind speed, pest infestation, and soil content and further communicate with sprinkler and ventilation systems. The user could manually control and configure the greenhouse systems with an app using a smartphone as a Bluetooth controller. This greenhouse facility could easily be expanded by adding more and more nodes when necessary.
Battlefield Surveillance.
Soldiers and military equipment with wireless network access (called nodes) could search for objects belonging to potentially hostile forces. Then, they provide real-time situational awareness to the base station, which in turn sends those data to other nodes as well as to the command center. The military is increasingly inclined to use commercial off-the-shelf Bluetooth sensors due to their inherent price advantages.
Proposed Weakness for Bluetooth MCS.
Here, we show that the proposed attack endangers the privacy of the device owner and further poses a threat to the privacy of the Bluetooth MCS. The sophisticated attacker sends his own AU_RANDs to the target devices and records their corresponding SRESs. He also collects the target devices' AU_RANDs and SRESs over the Bluetooth channel. These values can be stored in a data table as shown in Figure 4. The attacker may remove an AU_RAND and SRES from the data table if their corresponding K LINK expires. He simultaneously attempts to initiate authentication runs with all potential devices within the reach of the imitative device under his control. If any device responds with the right SRES, the corresponding identity linkage will be disclosed to the attacker. With supplementary information such as locations, times, user behavior, and known identities, the attacker can deduce the user identities of the target devices and compromise the privacy of the Bluetooth MCS.
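A hypothetical sketch of such a correlation table, again built on the stand-in helpers and variables from the earlier sketches, could look as follows; the function and label names are illustrative, not part of any Bluetooth API.

```python
# Hypothetical correlation table: one recorded (AU_RAND, SRES) pair per known
# identity, probed against every claimant currently in radio range.
def probe_area(recorded: dict, claimants_in_range: dict) -> dict:
    """Link each pseudonymous claimant label to a previously observed identity."""
    linked = {}
    for label, respond in claimants_in_range.items():
        for identity, (au_rand, sres) in recorded.items():
            if respond(au_rand) == sres:     # a single authentication attempt suffices
                linked[label] = identity
                break
    return linked

recorded = {"child_wristband_A": (recorded_au_rand, recorded_sres)}
in_range = {"unknown_device_1": device1, "unknown_device_2": other}
print(probe_area(recorded, in_range))        # {'unknown_device_1': 'child_wristband_A'}
```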
For example, consider the child care scenario in Section 4.1. The proposed privacy weakness can be exploited to track how a victim child's Bluetooth-enabled device within some area moves. This helps the attacker to infer the child's movement characteristics. What is more important, the attacker can derive the relationship among children via a great deal of the tracked Bluetooth-enabled devices.
Privacy Threat of Bluetooth MCS.
We now analyze the privacy threat of the MCS under the IoT scenarios in Section 4.1, when the attacker exploits the proposed privacy weakness. In Table 1, we first collect and analyze the privacy features of the MCS on three criteria: the correlation between the device and the user identity, the system deployment range, and the domain's privacy demands. Then, we can comprehensively evaluate the privacy features of the MCS and the proposed privacy weakness discussed in Section 4.2. Finally, we deduce the privacy threat level as in Table 2.
Improved Unilateral Bluetooth Authentication Protocol
In this section, we improve the unilateral authentication protocol to prevent the proposed privacy weakness. To be compatible with the standard, the improvement should be built from the same cryptographic components as in the original unilateral authentication protocol.
Protocol Description.
As shown in Figure 5, we propose an improved protocol to repair the traceability weakness in the original protocol. The authentication session is now as follows.
Step 1: the verifier generates a random number AU_RAND V as the challenge and then sends it to the claimant.
Step 2: the claimant creates another random number AU_RAND C and calculates the authentication token {SRES, ACO} = E 1 (K LINK , BD_ADDR, AU_RAND V , AU_RAND C ). Herein, the SAFER+-based cryptographic hash algorithm E 1 needs to accept an input of larger size. Alternatively, both AU_RAND V and AU_RAND C can be set to 64 bits, and the algorithm E 1 remains the same.
Step 3: the claimant sends the response AU_RAND C and SRES to the verifier.
Step 4: upon receiving AU_RAND C and SRES, the verifier first uses AU_RAND C to calculate the authentication token {SRES, ACO} = E 1 (K LINK , BD_ADDR, AU_RAND V , AU_RAND C ). Then, it compares the received SRES with the counterpart calculated locally. If they are the same, the protocol run is successful and the verifier accepts the identity of the claimant; otherwise, the protocol run fails.
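A sketch of the improved exchange, again assuming the HMAC-SHA256 stand-in for E 1 and the k_link, bd_addr values from the earlier sketches, shows how the claimant's nonce enters the computation, so that replaying an old AU_RAND V no longer reproduces an old SRES:

```python
# Improved exchange: the claimant contributes its own nonce AU_RAND_C.
def e1_v2(k_link: bytes, bd_addr: bytes, au_rand_v: bytes, au_rand_c: bytes):
    """Stand-in for the extended E1 over both nonces."""
    digest = hmac.new(k_link, bd_addr + au_rand_v + au_rand_c,
                      hashlib.sha256).digest()[:16]
    return digest[:4], digest[4:]

def claimant_respond_v2(k_link, bd_addr, au_rand_v):
    au_rand_c = os.urandom(8)                    # e.g. a 64-bit claimant nonce
    sres, _aco = e1_v2(k_link, bd_addr, au_rand_v, au_rand_c)
    return au_rand_c, sres                       # Step 3: both values are sent back

def verifier_check_v2(k_link, bd_addr, au_rand_v, au_rand_c, sres_received):
    sres_local, _aco = e1_v2(k_link, bd_addr, au_rand_v, au_rand_c)
    return hmac.compare_digest(sres_local, sres_received)

au_rand_v = os.urandom(16)                                             # Step 1
au_rand_c, sres = claimant_respond_v2(k_link, bd_addr, au_rand_v)      # Steps 2-3
assert verifier_check_v2(k_link, bd_addr, au_rand_v, au_rand_c, sres)  # Step 4
# Replaying au_rand_v later yields a different SRES, because AU_RAND_C is fresh.
```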
Remark 1
In practice, we can employ a non-symmetric split strategy for the random numbers AU_RAND V and AU_RAND C . If the risk of device tracking is low in some applications, the claimant's AU_RAND C can be shortened to about 32 bits, while keeping the verifier's AU_RAND V longer to ensure that the actual authentication strength is not degraded. The random number AU_RAND C can also be replaced with a sequence number, which never repeats across protocol runs. One example of such a sequence number is a counter. This demands that the state information of the claimant be maintained after a protocol run. However, the bit length of the sequence number can be very short.
Efficiency Comparison.
In this section, we compare the implementation costs of the improved protocol and the original one. Clearly, both protocols have the same secret storage cost due to the same K LINK . As far as the computation cost is concerned, both protocols need one cryptographic hash computation to obtain {SRES, ACO} in each device. For the communication cost, the improved protocol has to transmit AU_RAND V , AU_RAND C , and SRES, whereas the original protocol only transmits AU_RAND and SRES. When AU_RAND V and AU_RAND C both are 128 bits, the improved protocol incurs an additional overhead of transmitting 128 bits in an authentication session. However, such overhead is insignificant in most of the devices. When AU_RAND V and AU_RAND C both are 64 bits, the communication cost of both protocols is the same. In conclusion, the improved protocol is as efficient as the original protocol.
Privacy Evaluation of Improved Unilateral Bluetooth Authentication Protocol
A fact we shall see from the proposed privacy weakness in Section 3 is that the design of the unilateral authentication protocol is extremely error-prone, even though it originated in standard documents. Therefore, to avoid the design defects as much as possible, we adopt the formal method to examine the privacy of the improved unilateral authentication protocol in the following. Let {0, 1}* denote the set of finite binary strings and {0, 1}^l represent the set of binary strings of bit length l. Let Pr[Ev] be the probability of the event Ev and Pr[Ev 1 |Ev 2 ] the conditional probability of the event Ev 1 with respect to the event Ev 2 .
Model Definition.
We use a formal model to evaluate the privacy of the unilateral authentication protocols. Similar work to improve the privacy analysis of authentication protocols begins with the paper of Juels and Weis [37]. Under the Bluetooth network setting, let I = {1, 2, . . ., n} be a set of Bluetooth-enabled devices, each acting as either a claimant or a verifier. The unilateral authentication protocol Π rules how the claimant and the verifier behave during the protocol run. For any i, j ∈ I, let Π i,j be i's instance of Π interacting with j. According to Π, Π i,j generates, transmits, and receives the message(s) to authenticate the claimant. To some extent, Π i,j can be treated as an efficiently computable function. The internal state of Π i,j includes the following variables:
sid: the unique identifier of a protocol run.
K LINK : the link key shared by both i and j.
BD_ADDR: the claimant address.
tran: a transcript of i's current protocol run so far, i.e., the ordered set of messages transmitted and received by i so far.
δ: a Boolean variable set to true or false denoting whether to accept or reject at the end of the protocol run.
This variable is merely valid in the verifier's instance.
Without loss of generality, we assume that i is the verifier and j is the intended claimant in the following. A protocol run can be modeled by collaboratively running Π i,j and Π j,i . At the end of the protocol run, i should either accept or reject the purported identity of j, which is indicated by Π i,j 's δ.
Here, i verifies j by using Π i,j 's K LINK and BD_ADDR. A function ε: N ⟶ R is negligible in n if for every constant c ≥ 0 there exists an integer N such that for all integers n > N it holds that ε(n) < n^{-c}. If ε is negligible, then 1 − ε is said to be overwhelming. Let k (or 1^k) be the security parameter of the unilateral authentication protocol Π. We first define the notion of correctness as follows.
Definition 1 (correctness). A unilateral authentication protocol Π with the security parameter k is correct if, given any honest verifier i ∈ I and any honest claimant j ∈ I, the protocol run of the pair Π i,j and Π j,i succeeds with overwhelming probability in k.
Correctness of Π means that if both Π i,j and Π j,i collaboratively generate tran using the same K LINK and BD_ADDR, Π i,j 's δ is true at the end of the protocol run. Clearly, each Π must satisfy it. It is easy to check that both original and improved protocols are correct.
Attacker. Assume that the attacker A has complete control over all communications during the run of Π. The capability of A is essentially defined by specifying the actions that he is allowed to perform, i.e., a group of oracles he can query. Under the Bluetooth network setting, the interaction between the devices and A is modeled by sending queries to the oracles and receiving the results from the oracles. The oracles define how A interacts with Π.
Launch (i, j) ⟶ {sid, Π i,j , Π j,i }: the Launch oracle means that the system initiates a unilateral authentication protocol run, where sid is set to a unique identifier for this run. Π i,j and Π j,i maintain the same K LINK and BD_ADDR. tran in both Π i,j and Π j,i is set to null, and δ in Π i,j is set to false.
Send (m, sid, Π i,j ) ⟶ m′ (resp., Send (m, sid, Π j,i ) ⟶ m′): the Send oracle sends a message m to i (resp., j) and receives the answer m′, which should be sent to the counterpart j (resp., i), provided that m is valid according to Π i,j (resp., Π j,i ).
Execute (i, j) ⟶ {Π i,j , Π j,i , tran, sid}: the Execute oracle is used to group one Launch query and successive use of the Send queries to execute a complete protocol run between i's Π i,j and j's Π j,i . tran contains the transcript of all transmitted messages during this protocol run. Besides, the protocol run is identified by sid.
Result (Π i,j ) ⟶ x: the Result oracle can decide whether Π i,j successfully completes. That is, if Π i,j 's δ is true, then x = 1; otherwise, x = 0.
Corrupt (Π i,j ) ⟶ {K LINK } (resp., Corrupt (Π j,i ) ⟶ {K LINK }): the Corrupt oracle returns the link key K LINK secretly stored in Π i,j (resp., Π j,i ).
In the following, we present an experiment to define the protocol privacy by using the above oracles.
Privacy. As shown in Figure 6, we present the experiment Pri-Exp Π,A (k) to examine the privacy of the unilateral authentication protocol Π. In the setup stage, a set of devices are initiated by obtaining their K LINK and BD_ADDR. In the training I stage, the attacker A can select any pair of the devices and learn the run of Π by invoking the Launch, Send, Execute, Result, and Corrupt oracles. A then chooses two uncorrupted devices j 0 and j 1 at his will and provides them to the Test oracle in the challenge stage. The Test oracle flips a coin bit b ∈ {0, 1} and returns the device j b back to A. Then, to guess b, A can control the protocol runs between the claimant j b and any verifier i. That is, the training II stage continuously allows A to access the Launch, Send, Execute, and Result oracles. Finally, A should output his guess of b. By this experiment, we propose the following definition for the protocol privacy.
Definition 2 (privacy). A unilateral authentication protocol Π is private if, for any probabilistic polynomial-time (PPT) attacker A, the guessing advantage |Pr[b′ = b] − 1/2| is negligible in the security parameter k. We use Definition 2 to examine the privacy of the unilateral authentication protocol shown in Figure 2.
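To make the experiment structure concrete, the following is a minimal Python-style sketch of the Pri − Exp stages; the adversary and device objects and their methods are hypothetical names used only for illustration and are not part of the Bluetooth specification.

```python
import secrets

def pri_exp(adversary, devices, k=128):
    """Sketch of the privacy experiment Pri-Exp_{Pi,A}(k).

    `adversary` and the items in `devices` are hypothetical objects exposing
    the oracle interface described in the text; they are not defined here.
    """
    # Setup: every device obtains its own link key (BD_ADDR assumed preset).
    for d in devices:
        d.k_link = secrets.token_bytes(k // 8)

    # Training I: A may call Launch/Send/Execute/Result/Corrupt freely and
    # finally names two uncorrupted devices j0 and j1.
    j0, j1 = adversary.training_one(devices)

    # Challenge: the Test oracle flips a coin and hands j_b back to A.
    b = secrets.randbelow(2)
    j_b = (j0, j1)[b]

    # Training II: A controls runs between j_b and any verifier, but Corrupt
    # is no longer allowed; A finally outputs its guess b'.
    b_guess = adversary.training_two(devices, j_b)

    # A wins if b' == b; the guessing advantage is |Pr[win] - 1/2|.
    return int(b_guess == b)
```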
The attacker A can invoke the Execute oracle to record a tran = {AU_RAND, SRES} between a claimant j and a verifier i during the training I stage. Then, A submits j 0 = j and any other j 1 ∈ I and calls the oracle Test (j 0 , j 1 ) in the challenge stage. During the training II stage, A calls the oracle Launch (i, j b ) to obtain Π j b ,i and sid and then invokes the oracle Send (AU_RAND, sid, Π j b ,i ) to receive the corresponding SRES. A outputs the guess bit b′ = 0 if the received SRES is equal to the SRES recorded during the training I stage. Otherwise, he outputs the guess bit b′ = 1. Let ν be the probability that Send (AU_RAND, sid, Π j 0 ,i ) and Send (AU_RAND, sid, Π j 1 ,i ) output the same SRES. Obviously, ν is negligible in the security parameter k, so A guesses b correctly with overwhelming probability. The protocol shown in Figure 2 therefore does not satisfy Definition 2 and is not private. Consider the improved unilateral authentication protocol shown in Figure 5. Although the attacker A can intercept the previous values AU_RAND V , AU_RAND C , and SRES exchanged between Device 1 and Device 2, he cannot reuse them to identify the target Device 1. In the subsequent protocol run, A can replay the previous AU_RAND V to Device 1. However, Device 1 computes a different SRES because a different AU_RAND C is generated as an input of the algorithm E 1 . As a result, A cannot determine the identity of Device 1 by comparing the received SRES with the one recorded in the past protocol run. Certainly, the above privacy discussion is informal. In the following, we present the privacy result of the improved protocol under our formal model.
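A compact sketch of this linking strategy is given below; the device objects and their respond() method are hypothetical stand-ins for the claimant's SRES computation SRES = E 1 (K LINK, BD_ADDR, AU_RAND), used only to illustrate why determinism of the legacy protocol enables tracking.

```python
import secrets

def record_transcript(target) -> tuple[bytes, bytes]:
    """Training I: eavesdrop one legacy run with the target device.

    `target.respond(au_rand)` is a hypothetical stand-in for the claimant's
    SRES computation; the attacker only needs to observe the exchange.
    """
    au_rand = secrets.token_bytes(16)        # challenge observed on the air
    return au_rand, target.respond(au_rand)  # (AU_RAND, SRES) pair recorded

def is_same_device(candidate, au_rand: bytes, sres_recorded: bytes) -> bool:
    """Training II: replay the recorded AU_RAND and compare the response.

    In the legacy protocol SRES is deterministic in (K_LINK, BD_ADDR, AU_RAND),
    so a matching SRES links `candidate` to the device recorded earlier; the
    improved protocol breaks this check because a fresh AU_RAND_C enters E1.
    """
    return candidate.respond(au_rand) == sres_recorded
```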
Privacy Property and Its Proof.
To evaluate the privacy of the improved unilateral authentication protocol, we need the keyed pseudorandom function assumption [38]. A keyed function F receives as input some K ∈ {0, 1} k and m ∈ {0, 1}* and outputs some h ∈ {0, 1}*. Here, K is the key chosen uniformly at random. F is a keyed pseudorandom function if no polynomial-time distinguisher D can tell whether it is interacting with F (under a random key) or with a truly random function f. The formal definition is given as follows.

Definition 3 (keyed pseudorandom function). A keyed function F: {0, 1} k × {0, 1} l 1 ⟶ {0, 1} l 2 is a keyed pseudorandom function if, for every polynomial-time distinguisher D, |Pr[D F K (·) (1 k ) = 1] − Pr[D f (·) (1 k ) = 1]| ≤ ε (k), where the k-bit key K is chosen uniformly at random, f is chosen uniformly at random from the set of random functions mapping l 1 -bit strings to l 2 -bit strings, and ε is negligible in k.

Note that D in Definition 3 has oracle access to the function in question (either F or f ).
That is to say, D is allowed to query the oracle on any input x, in response to which the oracle returns the value of the function evaluated at x. Finally, D outputs 1 if it makes a correct guess. Now, we have the following theorem.

Theorem 1. Let Π be the improved unilateral authentication protocol as shown in Figure 5. If the algorithm E 1 in Π is a keyed pseudorandom function and the k-bit link key K LINK is kept secret, then Π is private in k under Definition 2.
Proof. We know that the Corrupt oracle cannot help the attacker A to guess the random bit b in the experiment Pri − Exp Π,A (k). The reason is that all K LINK s of the pairing devices are independent and the oracle Corrupt is not allowed if the corresponding K LINK is used in the training II stage. Hence, we do not consider the Corrupt oracle in the following discussions.
During the training II stage in the experiment Pri − Exp Π,A (k), the attacker A can interact with the claimant j b . We specify a simulator Sim to simulate j b 's behavior in each run of Π in this stage. However, Sim has no knowledge of the value of the random bit b or the link key K LINK in Π. We demonstrate that A's interaction with Sim is computationally indistinguishable from a real interaction with j b . This means that A cannot identify j b at the guess stage because A gains no knowledge from its interaction with j b through the runs of Π.
Recall that the attacker A selects j 0 and j 1 in the challenge stage of the experiment Pri − Exp Π,A (k). Let L be the full list of the session transcript tran related to both j 0 and j 1 in the training I stage. Let L′ be the full list of the session transcript tran of j b in the training II stage. When A invokes the Launch, Send, Execute, and Result oracles during the training II stage, Sim simulates the four oracles as follows.
Launch oracle: when the attacker A calls Launch (i, j b ), Sim generates its sid, Π (1 k , i, j b , null), and Π (1 k , j b , i, null) and then sends them to A. Here, Π (1 k , i, j b , null) and Π (1 k , j b , i, null) are, respectively, used to simulate Π i,j b and Π j b ,i , and null means that Sim does not know the link key K LINK .

Send oracle: (1) When the attacker A calls Send (null, sid, Π (1 k , i, j b , null)), Sim randomly generates AU_RAND V as the verifier i in Π, records AU_RAND V in its L′, and sends it to A. (2) When A calls Send (AU_RAND V , sid, Π (1 k , j b , i, null)) and AU_RAND V was generated by Sim, Sim generates a random AU_RAND C and a random SRES itself, records them in its L′, and then sends the AU_RAND C and SRES to A. Sim terminates the protocol run if A calls Send (AU_RAND V , sid, Π (1 k , j b , i, null)) and AU_RAND V was not generated by Sim. (3) When A calls Send ({AU_RAND C , SRES}, sid, Π (1 k , i, j b , null, AU_RAND V )), Sim sends the "accept" decision to A if {AU_RAND V , AU_RAND C , SRES} exists in its L′; otherwise, Sim sends the "reject" decision to A.

Execute oracle: to simulate Execute (i, j b ), Sim generates sid, AU_RAND V , AU_RAND C , and SRES just like the Launch oracle and the Send oracle; records {AU_RAND V , AU_RAND C , SRES} in its L′; and then sends them to A.
Result oracle: when the attacker A invokes Result (Π (1 k , i, j b , null, tran)), Sim returns 1 to A if the tran is in its L′; otherwise, it returns 0 to A. In the case of calling Result (Π (1 k , i, j b , K LINK , tran)), Sim should relay the query to i or j b and return the corresponding response to A.
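The simulator's bookkeeping can be illustrated with the minimal sketch below; the class and method names are hypothetical, and the AU_RAND and SRES lengths are placeholder values rather than values fixed by the standard.

```python
import secrets

class Sim:
    """Simulates the claimant j_b in training II without knowing K_LINK or b."""

    def __init__(self):
        self.transcripts = []            # the list L' of simulated transcripts

    def send_challenge(self) -> bytes:
        au_rand_v = secrets.token_bytes(16)          # acts as the verifier i
        self.transcripts.append({"AU_RAND_V": au_rand_v})
        return au_rand_v

    def send_response(self, au_rand_v: bytes):
        entry = next((t for t in self.transcripts
                      if t.get("AU_RAND_V") == au_rand_v and "SRES" not in t), None)
        if entry is None:
            return None                              # AU_RAND_V not generated by Sim: abort
        entry["AU_RAND_C"] = secrets.token_bytes(16) # random, since K_LINK is unknown
        entry["SRES"] = secrets.token_bytes(4)       # random stand-in for the E1 output
        return entry["AU_RAND_C"], entry["SRES"]

    def result(self, tran: dict) -> int:
        return int(tran in self.transcripts)         # 1 iff the run was simulated by Sim
```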
To distinguish Sim's training II stage from a real training II stage, the attacker A must be able to identify at least one invalid session between the claimant j b and any other verifier i. In other words, A must rule out at least one tran = {AU_RAND V , AU_RAND C , SRES} in order to determine that Sim is present during the training II stage. Assume that A, respectively, makes at most q (k) queries to the Send oracle and the Execute oracle in each training stage of the experiment Pri − Exp Π,A (k), where q (k) is a polynomial function. Consequently, one of the following two cases must occur at some point during the experiment Pri − Exp Π,A (k).
Case 1: there are two session transcripts {AU RAND L V , AU RAND L C , SRES L } ∈ L and {AU RAND L′ V , AU RAND L′ C , SRES L′ } ∈ L′ such that AU RAND L V = AU RAND L′ V and AU RAND L C = AU RAND L′ C . We know that SRES L′ is randomly generated by Sim. Hence, the attacker A can figure out Sim by verifying whether SRES L is equal to SRES L′ . Let |L| and |L′| denote, respectively, the number of the session transcripts in L and L′. We have |L| ≤ q (k) and |L′| ≤ q (k) because A makes, respectively, at most q (k) Send and Execute calls in each corresponding training stage. A can control AU RAND L V and AU RAND L′ V ; however, AU RAND L C and AU RAND L′ C are, respectively, generated by the claimant j b and Sim. Since the values AU RAND L C and AU RAND L′ C are all random k-bit values, Case 1 occurs with probability at most q (k) 2 /2 k .

Case 2: Sim randomly chooses SRES in the session. Comparatively, j b should use the algorithm E 1 to calculate SRES. Hence, the attacker A depends on whether SRES is randomly chosen or calculated to recognize Sim. We know that the algorithm E 1 is a keyed pseudorandom function (see Definition 3). According to Definition 3, the probability u (k) that A can pick out Sim in this way is at most ε (k).

Hence, the polynomially bounded attacker A can distinguish Sim from the real j b with negligible probability at most q (k) 2 /2 k + u (k) ≤ q (k) 2 /2 k + ε (k).
Due to Theorem 1, the improved unilateral authentication protocol can prevent the attacker from identifying, tracing, or linking to the device, provided that the device does not compromise its secret parameter K LINK stored in the memory. Another non-protocol countermeasure is to update the link key whenever FA > t, where FA is the number of failed authentication attempts since the link key was last changed and t is the threshold number of failed attempts. If t = 0, then the attacker can at most track the identity of the claimant one time. However, if the value of t is too small, it may lead to high overheads for the pairing devices. The reason is that the device's hardware and software faults may also cause failed authentication attempts. In this situation, the pairing devices need to frequently update their link key by running the pairing and link key generation procedure.
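The failed-attempt threshold can be enforced with logic along the lines of the sketch below; the counter, the default threshold value, and the regenerate_link_key hook are illustrative assumptions and not part of the Bluetooth specification.

```python
class LinkKeyPolicy:
    """Re-key after more than t failed authentications since the last key change."""

    def __init__(self, t: int = 3):
        self.t = t          # threshold of tolerated failures (t = 0 is strictest)
        self.fa = 0         # failed attempts since the link key was last changed

    def on_authentication(self, success: bool, regenerate_link_key) -> None:
        if success:
            return
        self.fa += 1
        if self.fa > self.t:
            regenerate_link_key()   # run the pairing / link key generation again
            self.fa = 0             # FA resets once the link key has been changed
```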
Avoiding Unilateral Authentication Protocol.
To overcome the proposed privacy weakness, it is required that both pairing devices always execute the mutual authentication protocol instead of the unilateral authentication protocol. However, this countermeasure has disadvantages as follows.
The mutual authentication protocol is performed only if both pairing devices support it; otherwise, the unilateral authentication protocol is performed. In addition, during the link initialization stage, the attacker can also trick the pairing devices into falling back to the unilateral authentication protocol. One way of doing so is by intercepting and modifying the Bluetooth data packets that advertise the device capabilities. Moreover, the mutual authentication protocol is only used when the link key has been generated using SSP with the P − 256 Elliptic Curve. Comparatively, the unilateral authentication protocol supports SSP with not only the P − 256 Elliptic Curve but also the P − 128 Elliptic Curve.
The unilateral authentication suffices for many Bluetooth MCS cases. Further, it is more desirable in the Bluetooth MCS, since the mutual authentication incurs more implementation costs compared with the unilateral authentication.
Conclusions
We investigate the traceability weakness of the unilateral authentication protocol in the Bluetooth standard, which is widely applied in Bluetooth-enabled devices. The authentication service should provide strong privacy protection, given the mounting risk of vulnerabilities and abuse. Such a privacy requirement for this weak authentication protocol is reinforced by the growing number of Bluetooth-enabled device abuses in the real world, such as the Bluetooth MCS. An improved unilateral authentication protocol is thus proposed to overcome the weakness of traceability. Our improved protocol is simple and easy to implement in Bluetooth-enabled devices, though with a small extra overhead. In addition, non-protocol countermeasures are also suggested to fix the weakness of traceability. Certainly, the unilateral authentication protocol alone is still unable to assure the overall privacy of Bluetooth-enabled devices and their networks. Nevertheless, we believe that our results are a steady step toward enhancing Bluetooth security solutions. Our future work is to study the privacy problems of the mutual authentication protocol and the confidentiality procedure in accordance with the Bluetooth standard.
Data Availability
The data used to support the findings of this study are included in the article.
Conflicts of Interest
The authors declare that they have no known conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.
"Computer Science"
] |
SPD in illumination system of HV air insulated substation
The article discusses the distribution of the lightning current between the communications of the object as the main aspect determining the parameters of surge protective devices (SPDs) applied in a substation's low-voltage networks. A wireframe model was designed to compute the field-circuit model of the earthing device (ED) and to define the element potentials and currents using circuit analysis methods. Transients were calculated using the operator method. Results of calculations and measurements of the lightning current distribution in the ED and in the cables of the illumination network of an air insulated substation are presented. Experimental results were obtained in the training ground of PJSC «Lenenergo», one of the largest electric power distribution companies in Russia. The case of a lightning strike to a floodlight mast combined with a lightning rod in a 110 kV air insulated substation was studied. Requirements for the SPD parameter "impulse discharge current amplitude" are defined. General approaches to lightning surge protection of the illumination network of an air insulated substation are described.
Introduction
Lightning overvoltages have the most significant impact on the requirements for ensuring EMC at high-voltage substations up to 220 kV. Other interference sources, including the high-voltage network, have significantly lower energy.
Due to the relatively small dimensions of the objects, cable lines (CL) of a substation's low-voltage systems are located close to the elements of the lightning protection system. However, air insulated substations are not compact objects compared with indoor switchgears of lower voltage classes or with other infrastructure facilities, for example, mobile communication stations. Under these conditions, the routes of the CLs are not confined to a shielded volume (in terms of the IEC 62305 standard [1], within one lightning protection zone (LPZ)).
One of the most effective means of limiting overvoltage is the hardware protection of equipment using SPD. SPD is an additional element of the low voltage network that can reduce reliability. Incorrect selection of SPD parameters and its installation location can lead to damage to protected equipment by lightning current [2][3][4][5][6].
Class I SPDs placed in LPZ 0 and LPZ 1 must be capable of conducting partial lightning currents with a 10/350 μs waveform without being destroyed. The main parameter determining the applicability of a class I SPD is the impulse discharge current. This parameter influences the device's price. Standardized impulse discharge current values for the class I test are defined in [7]. The impulse current evaluation methods described in the IEC 62305 and IEC 61643 standards [1,8] do not consider design characteristics of power facilities such as a developed grounding network and the absence of low-voltage networks extending beyond the ED. Parameters of other protected objects are also not considered in terms of lightning current distribution. These problems may cause incorrect selection of SPD parameters at the design stage.
It has to be noted, that impulse discharge current parameters recommended to test SPD do not consider data published in [9]. The main concern is the lack of consideration for the multi-component nature of lightning discharges.
The purpose of the measurements and calculations presented in the article is to evaluate the distributed currents in conductors on air insulated substation.
The illumination system of an HV air insulated substation is one of the systems most exposed to conducted surges because the lighting equipment is placed directly on towers with lightning rods. It seems appropriate to use SPDs for the protection of this system. Placing other equipment on structures with lightning rods is less common, and the topology of the power supply networks of such equipment is more complicated. The certainty in the mutual position of the interference source and the cable network of the illumination system allows us to create the required calculation model. An analysis of the lightning current distribution was carried out for the lighting equipment of the floodlight masts located in an air insulated substation.
Model and parameters
A wireframe model of a 110 kV air insulated substation (Fig. 1) was designed to compute the field-circuit model of the ED [13,15] and to demonstrate the impact of the substation design on the part of the lightning current distributed in the conductors of the illumination system. The results obtained using the FDTD method were excluded due to the insignificant difference and the prolonged computation time [10][11][12]. The software program ZYM (an AutoCAD application) was also used for electromagnetic field and transient calculations [14]. The overvoltage evaluation was performed in order to justify the necessity of using SPDs in the substation illumination system. The impact of the CL length and of the presence of a metal pipe in the CL construction was considered. The floodlight mast had multiple connections through the ED to the illumination control system earthing point placed in the substation building. The assessment of overvoltages and currents in the CL was carried out with an impulse with a 10/350 μs waveform and an amplitude of 100 kA, which is considered a simulation of the lightning current [1].
Lightning strikes to floodlight masts combined with a lightning rod were simulated. The cable lengths between the illumination control switchboard and the masts are 50 m and 220 m. Different cable laying conditions were considered: a shielded CL in a metal pipe, an unshielded CL in a metal pipe, and an unshielded CL without a pipe. Parameters of a single-phase cable were used.
The PE conductor was connected to the illumination distribution board located on the mast, to the floodlight, and to the illumination control system earthing point. The capacitance of the floodlight power supply is taken equal to 100 nF. The cable from the side of the substation control house is disconnected, the phase conductor is broken, and the neutral conductor is grounded.
Computations of the lightning current distribution in the modeled illumination network were also carried out with an impulse with a 10/350 μs waveform and an amplitude of 100 kA. The calculations did not consider the presence of a shield on the CL or the resistance of the SPD. All conductors of the CL were grounded.
Computation results discussion
Obtained results of overvoltages values for a floodlight mast placed 50 m (via a cable line) away from the illumination control board in substation control house are summarized in Table 1.
As seen, the overvoltage level is determined mainly by the type of cable laying conditions. The pipe and the shield, due to capacitive and inductive coupling with the cable conductors, significantly reduce the level of overvoltage, which corresponds to the theoretical concepts. If a shielded cable and a metal pipe are used in the CL construction, the level of overvoltage from the side of the substation control house is close to the permitted 4-6 kV at moderate and low soil conductivity. However, as the front of the lightning current pulse decreases, the level of overvoltage increases (Fig. 2). In conditions of low soil resistivity, considering the equipment insulation strength margin, the studied network is resilient to lightning overvoltages. In other conditions, the application of SPDs is justified.
Fig. 2. Overvoltage in the CL of the illumination system for different current pulse waveforms, when using a shielded cable and a metal pipe.

Due to the overvoltage growth when currents flow in the ED, SPDs have to be installed according to the L-PE, N-PE scheme (Fig. 3) to provide common-mode interference protection. It has to be noted that the interference transmission gain (the ratio of the overvoltage amplitude on the floodlight mast to the overvoltage in the substation control house) is less than the minimum value of 10 defined in [1]. So, according to the calculation results, the impulse attenuation is slower. The overvoltage from the floodlight mast direction rises with an increase in the length of the CL. The reason is that the neutral wire's grounding point moves away from the grounding area of the floodlight mast. Thus, the necessity of applying SPDs in the illumination system of an air insulated substation is reasonable.

The SPD installation scheme recommended for the illumination system of an air insulated substation is shown in Fig. 4.
Computed current values in the conductors of the CL, divided into two sections (from the floodlight to the distribution board on the mast, and from the mast to the substation control house), are summarized in Table 2. An unshielded cable without a metal pipe was studied in order to determine the maximum value of the current flowing through the SPD. For the short CL, the impact of the current impulse front on the current value in the CL was also studied. Note: 1 Current in the CL section from the floodlight to the distribution board on the mast — computed current value flowing in one conductor of the CL section from the floodlight to the distribution board on the mast; 2 Current in the CL section from the distribution board on the mast to the illumination control system in the substation control house — computed current value flowing in one conductor of the CL section from the distribution board on the mast to the illumination control system in the substation control house.
Experimental study results
Experimental results were obtained in the training ground of PJSC «Lenenergo». The training ground territory contains open 110, 35, and 10 kV switchgears, buildings, and complete transformer substations. The absence of voltage in the high-voltage network allows experiments to be conducted. Unfortunately, there are some differences between the illumination system of the training ground and real substation conditions. First, the floodlight masts are located far from the open switchgear, and the horizontal grounding network has a large mesh step (up to 20 m) near the masts. Second, the illumination control board is placed in the security building (the illumination is mainly used as security lighting). The CLs are not shielded, and no metal pipe was used for cable laying. Under these conditions we expected that the measured partial current flowing into the cable conductors during the experiment should be much higher than in a functioning substation. The floodlight masts are powered in series, that is, two three-phase CLs outgo from each mast; thus, the number of CL conductors is 10. According to the design documentation, the CL length in one direction from the mast is 160 m and in the other 200 m (the real lengths should be shorter). The measurement scheme is shown in Fig. 5. Voltage was measured on the generator circuit shunts and on the CL with a Fluke 190-504 oscilloscope. On the day of the measurements, the soil was frozen and the upper layer had a resistivity of 1200 Ω·m. A voltage impulse of 1.2/50 μs and an aperiodic current of 8/20 μs with an amplitude of 9 A were applied to the floodlight mast; the phase and neutral conductors were grounded on both sides of the CLs, in the illumination control board placed in the security building and on the other floodlight mast. The second output of the generator was grounded to the footing of a pole also placed in the training ground but not connected to the ED. The voltage ratio made it possible to estimate the part of the current flowing in the CL. The oscillogram of the voltage on the shunts of the generator and the CL is shown in Fig. 6. The measured distribution corresponds to the calculated parameters, and if an SPD were installed, the amplitude of the current impulse in the PE and N conductors would be slightly higher.
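As a rough illustration of how the shunt voltages translate into a current share, the sketch below assumes both shunt resistances are known; the 0.1 Ω and 1.0 Ω shunt values and the example readings are placeholders, not the actual instrumentation or data of this experiment.

```python
def current_share_in_cl(u_gen_shunt_v: float, u_cl_shunt_v: float,
                        r_gen_shunt_ohm: float = 0.1,
                        r_cl_shunt_ohm: float = 1.0) -> float:
    """Estimate the fraction of the injected current that flows into the CL."""
    i_gen = u_gen_shunt_v / r_gen_shunt_ohm   # total current from the generator
    i_cl = u_cl_shunt_v / r_cl_shunt_ohm      # current measured in the CL conductors
    return i_cl / i_gen

# Example with made-up readings: 0.9 V on the generator shunt (9 A total) and
# 0.16 V on the CL shunt (0.16 A) correspond to a share of about 1.8%.
print(f"{current_share_in_cl(0.9, 0.16):.1%}")
```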
According to the data in Table 2, the value of the current flowing in one conductor of the CL section from the distribution board on the mast to the illumination control system in the substation control house, for a CL length of 220 m and a soil resistivity of 1000 Ω·m, is 1402 A. Then, for the training ground conditions (two five-conductor CLs with a total length of 360 m and a soil resistivity of 1200 Ω·m), an approximate evaluation of the part of the lightning current (10/350 μs impulse) flowing into the CLs can be obtained from an equation in the following quantities: n — number of CLs; N — number of CL conductors; Lc — CL length in the calculation model; lm — CL length in the training ground; ICL — current flowing in the CL (sum of the currents in all CL conductors); Ist — standardized value of the lightning current impulse amplitude (100 kA); Ic — calculated current value in a CL conductor for CL length Lc (1402 A).
The expected part of the current flowing in all the conductors of the two CLs studied in the training ground is 17.5% of the calculated current amplitude. In the training ground conditions, the part of the current flowing from the mast into the CLs has to be higher than in a real substation, due to the design features of the ED and the mast footings.
The obtained values of the currents in the CL conductors are significantly lower than the impulse discharge current of the SPDs installed in substations. This can be explained by the following circumstances: 1. the selection of SPD parameters is based on unjustified methods, or no method is used at all in the presence of biased recommendations from SPD manufacturers; 2. design characteristics of the substation, such as the developed grounding network due to which a significant part of the lightning current is diverted to the ground, are not considered; 3. the impact of cable length and cable laying conditions is not considered. As a result, even in 110 kV substation conditions, in the case of cable laying without a pipe and with a simplified ED design, the use of an SPD with an impulse discharge current parameter higher than 10 kA is inappropriate. At substations of a higher voltage class, the length of the CLs of the lighting network increases significantly. Thus, the conclusion about the necessity and sufficiency of using SPDs with an impulse discharge current parameter lower than 10 kA is also relevant for substations of a higher voltage class.
Conclusions
In the illumination system of an air insulated substation, the application of SPDs for lightning protection is justified. The substation is characterized by a developed grounding network, and this is one of the decisive features in the selection of SPD parameters.

The SPD parameter selection methodology for the protection of low-voltage substation networks, based on "estimated calculations" of the part of the lightning current in the CL conductors, should be reviewed, since its use can lead to unjustified costs.

The recommended value of the SPD impulse discharge current parameter for application in the illumination system of a 110 kV air insulated substation is 10 kA or lower.
"Engineering",
"Environmental Science",
"Physics"
] |
Ursolic Acid Promotes Autophagy by Inhibiting Akt/mTOR and TNF-α/TNFR1 Signaling Pathways to Alleviate Pyroptosis and Necroptosis in Mycobacterium tuberculosis-Infected Macrophages
As a lethal infectious disease, tuberculosis (TB) is caused by Mycobacterium tuberculosis (Mtb). Its complex pathophysiological process limits the effectiveness of many clinical treatments. By regulating host cell death, Mtb manipulates macrophages, the first line of defense against invading pathogens, to evade host immunity and promote the spread of bacteria and intracellular inflammatory substances to neighboring cells, resulting in widespread chronic inflammation and persistent lung damage. Autophagy, a metabolic pathway by which cells protect themselves, has been shown to fight intracellular microorganisms, such as Mtb, and they also play a crucial role in regulating cell survival and death. Therefore, host-directed therapy (HDT) based on antimicrobial and anti-inflammatory interventions is a pivotal adjunct to current TB treatment, enhancing anti-TB efficacy. In the present study, we showed that a secondary plant metabolite, ursolic acid (UA), inhibited Mtb-induced pyroptosis and necroptosis of macrophages. In addition, UA induced macrophage autophagy and enhanced intracellular killing of Mtb. To investigate the underlying molecular mechanisms, we explored the signaling pathways associated with autophagy as well as cell death. The results showed that UA could synergistically inhibit the Akt/mTOR and TNF-α/TNFR1 signaling pathways and promote autophagy, thus achieving its regulatory effects on pyroptosis and necroptosis of macrophages. Collectively, UA could be a potential adjuvant drug for host-targeted anti-TB therapy, as it could effectively inhibit pyroptosis and necroptosis of macrophages and counteract the excessive inflammatory response caused by Mtb-infected macrophages via modulating the host immune response, potentially improving clinical outcomes.
INTRODUCTION
Tuberculosis (TB), caused by Mycobacterium tuberculosis (Mtb) infection, is one of the leading causes of death from infectious agents.Macrophages are the primary host cells of Mtb in vivo.After being phagocytosed, Mtb can resist the killing of macrophages and live in macrophages, leading to chronic inflammation and lung damage.The inflammatory response triggered by Mtb infection is a double-edged sword.It is well known that inflammation is a protective host response to infection and tissue damage, preventing pathogen transmission and promoting tissue repair.However, as the inflammatory response develops unbridled, it will become chronic inflammation detrimental to the host [1].
Inflammatory cytokines, such as tumor necrosis factor α (TNF-α) and interleukin-1β (IL-1β), are essential for protecting host cells, while excess cytokine production can lead to increased macrophage death.On the one hand, it will promote the excessive inflammatory reaction of host cells and subsequent severe tissue damage.On the other hand, cell death, cell membrane rupture, and extracellular transmission of Mtb make the infection spread [1].Therefore, controlling inflammation is an essential approach for effective anti-TB therapy.Hostdirected therapy (HDT) is a necessary adjuvant therapy for multidrug-resistant (MDR) TB.HDT can increase the anti-TB efficacy by increasing host immunity, regulating inflammation, reducing lung tissue destruction, and killing or controlling Mtb [2].
Programmed cell death (PCD) is essential for eukaryotic development and integrity, while the dysregulation of this program is associated with many diseases, including immune deficiency, autoimmune diseases, infectious diseases, neurodegenerative diseases, and cancers [3].Apoptosis is the first known form of cell death, and this immune process is generally silent and noninflammatory.Unlike apoptosis, pyroptosis and necroptosis are lytic forms of cell death that release many intracellular components and immunogenic molecules.Genetic evidence suggests that when these cell death pathways are overactivated, they induce a robust inflammatory response [4][5][6].
Some studies have shown that pyroptosis and necroptosis in Mtb-infected cells are the main ways of cell death [7].In previous studies, our group has shown that Mtb infection triggers pyroptosis and expands the inflammatory response [8].Another process involved in Mtb infection of macrophages, amplifying the inflammatory response, is the inhibition of autophagy.It has been suggested that autophagy induction not only leads to the fusion of the lysosomal compartment with mycobacterial phagosomes but also triggers the production of new antimicrobial peptides, which are essential for mycobacterial killing [9].Moreover, in vivo autophagy experiments have shown that the autophagy protein Atg5 in macrophages is required to inhibit Mtb infection in mice [10], and autophagy inhibitors enhance mycobacterial burden in zebrafish embryonic models [11].In vitro studies have also demonstrated the antibacterial and anti-inflammatory mechanism of autophagy [12].Besides, our research group has shown that the recovery of autophagy is conducive to the inhibition of pyroptosis.However, the interaction between the autophagy pathway and necroptosis in the pathogenesis of Mtb infection remains largely unexplored.
There is a class of compounds in plants called pentacyclic triterpenes (PT).Ursolic acid (UA) is widely distributed in many plants, fruits, and vegetables [13].Studies have shown that UA has low cytotoxicity and various pharmacological properties, including anti-tumor, cardioprotective, and hepatoprotective activities, antibacterial and anti-inflammatory properties, and preventive effects of oxidative damage.Furthermore, studies have shown that UA can enhance macrophage autophagy, inhibit IL-1β secretion [14], and reverse TNF-α-induced NLRP3 upregulation [15], suggesting that UA can play an auxiliary anti-inflammatory role in some highly inflammatory diseases.In addition, other studies have shown that UA also plays a vital role in inhibiting bacterial burden and lung inflammation during Mtb infection [16].However, no studies have focused on the effect of UA on pyroptosis and necroptosis.
In the present study, we also focused on the proinflammatory cytokine TNF-α.TNF-α is closely associated with cell death [17,18].TNF-α and tumor necrosis factor receptor-1 (TNFR1) levels were significantly increased in Mtb-infected macrophages.Moreover, Akt/ mTOR and TNF-α/TNFR1 signal axis occurred upstream of cell death.In conclusion, UA may promote autophagy by inhibiting TNF-α and inhibiting Akt/mTOR pathway, thereby inhibiting pyroptosis and necroptosis.Here, we describe a novel mechanism of UA and demonstrate its anti-inflammatory efficacy as an adjunct anti-tuberculosis agent by inhibiting pyroptosis and necroptosis.
Cell Culture
U937 cells were obtained from ATCC and cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin in 5% CO 2 at 37 ℃. U937 cells were seeded in 6-well plates for differentiation into macrophages and treated for 24 h with 100 nM PMA. The cells were then replenished with fresh medium immediately before any treatment used in this study. J774A.1, a murine macrophage cell line, was obtained from ATCC and cultured in DMEM supplemented with 10% FBS and 1% penicillin-streptomycin in 5% CO 2 at 37 ℃.
Mtb Infection
The J774A.1 or PMA-treated U937 cells were seeded on cell culture plates and grown at 37 ℃ overnight. The next day, the cells were infected with Mtb H37Ra for 4 h at a multiplicity of infection (MOI) of 10:1, followed by two PBS washes to remove extracellular bacteria.
MTT Assay
The MTT assay, based on the enzymatic reduction of the water-soluble yellow tetrazolium dye 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) to purple formazan, is commonly used for the assessment of cell viability and proliferation. The J774A.1 cells (1 × 10 4 cells/well) or U937 cells (2 × 10 4 cells/well) were seeded into 96-well plates overnight. The culture medium was substituted with medium containing different concentrations of UA (0, 2.5, 5, 10, 20, or 40 μM) for 24, 48, and 72 h, respectively. After incubation for the assigned time, the medium in the wells was discarded. MTT (5 mg/ml) was mixed with complete DMEM at a ratio of 10 μl MTT per 100 μl medium; this mixture was added to each well, and incubation was continued for 4 h. Afterwards, the medium was aspirated, and the formazan precipitate was dissolved in 150 μl DMSO to lyse the cells. The absorbance was determined at 490 nm employing a Synergy 2 Microplate Reader (Bio-Tek, USA).
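Viability is then typically expressed relative to untreated controls; the sketch below shows one common way of doing this from the 490 nm readings (the blank correction and percentage formula follow standard MTT practice and are an assumption, since the exact calculation is not stated in the text; the numbers are made up).

```python
def percent_viability(a_treated: float, a_control: float, a_blank: float = 0.0) -> float:
    """Cell viability (%) from MTT absorbance at 490 nm, blank-corrected."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Example with made-up absorbances: treated 0.62, untreated control 0.75, blank 0.05
print(round(percent_viability(0.62, 0.75, 0.05), 1))  # ~81.4 % viable
```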
Western Blot Assay
Total protein was isolated with cell lysis buffer for western blot and immunoprecipitation (P0013, Beyotime Biotechnology, China).Cells were centrifuged at 12,000 rpm at 4 ℃ for 15 min, and the supernatant was collected.Total protein concentration was evaluated by bicinchoninic acid (BCA) assay (WB6501, NCM Biotech, China).Then, whole cell lysate was added with 5 × SDS loading buffer, boiled at 100 ℃ for 10 min, separated by SDS-PAGE, transferred onto nitrocellulose membranes, blocked in 5% (w/v) skim milk for 1.5 h, and incubated with specific primary antibody overnight at 4 ℃.After three times washing with TBST, the membranes were incubated with HRP-conjugated secondary antibodies at room temperature for 1 h.After extensive washes with TBST, the chemiluminescence was probed by ECL detection kit (Thermo Scientific, MA, USA) with Fluor Chem E (Protein Simple, USA).Band intensities were quantified with the assistance of Image J software (US National Institutes of Health, Bethesda, MD, USA).
Lactate Dehydrogenase (LDH) Release Assay
Quantification of cytotoxicity can be achieved by examining the activity of LDH released into the culture medium from cells with ruptured plasma membranes.Cells were seeded in 24-well plates and grown at 37 °C overnight, then treated with H37Ra and UA for 24, 48, and 72 h.The cell supernatant was collected and centrifuged (400 g, 5 min).Cell death was evaluated by applying the LDH cytotoxicity assay kit.The absorbance was determined using a microplate reader (490 nm wavelength).
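Cytotoxicity from LDH release is commonly normalized between a spontaneous-release control and a maximum-release (fully lysed) control; the sketch below illustrates that calculation (the normalization formula follows typical LDH kit instructions and is an assumption, since the kit's exact protocol is not reproduced here; the readings are made up).

```python
def percent_cytotoxicity(a_sample: float, a_spontaneous: float, a_maximum: float) -> float:
    """LDH-based cytotoxicity (%) relative to spontaneous and maximum release controls."""
    return 100.0 * (a_sample - a_spontaneous) / (a_maximum - a_spontaneous)

# Example with made-up 490 nm readings
print(round(percent_cytotoxicity(0.48, 0.20, 1.10), 1))  # ~31.1 % cytotoxicity
```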
Coimmunoprecipitation (Co-IP)
J774A.1 and U937 cells were lysed at 4 ℃ in ice-cold cell lysis buffer, followed by centrifugation at 4 ℃ (12,000 rpm, 15 min). Supernatants were collected and incubated with 1-2 μg of the indicated antibody overnight at 4 ℃. Protein A/G agarose beads were added to the lysates and incubated for 3 h at 4 ℃ with gentle rotation. After incubation, the beads were washed 4 times with lysis buffer and boiled in 1 × SDS loading buffer at 100 ℃ for 10 min. Immunoprecipitates in the sample buffer were subjected to immunoblotting analysis.
Immunofluorescence
The cells were seeded in special plates for immunofluorescence, divided into a control group, an Mtb group, and an Mtb + UA group, and incubated overnight at 37 ℃. The next day, H37Ra was added after the cells adhered to the wall. After 4 h of infection, the cells were washed with sterile PBS to remove extracellular bacteria. Fresh medium was added to the control group and the Mtb group, and fresh drug-containing medium was added to the Mtb + UA group. After 12 h of drug treatment, the steps of paraformaldehyde fixation, permeabilization, blocking, primary antibody incubation, washing, fluorescent secondary antibody incubation, washing, DAPI staining, and washing were followed in sequence, and finally 1 ml PBS was added. The steps after the secondary antibody need to be performed protected from light. The localization of pMLKL was observed under a confocal microscope. The ASC and LC3 co-localization steps are the same as above.
IL-1β, TNF-α ELISA
After the cells were treated differently, the supernatant was collected and centrifuged at 3000 rpm for 10 min at 4 ℃.Inflammatory cytokine IL-1β and TNF-α levels were assessed using the IL-1β, TNF-α ELISA kits from R&D System following the manufacturer's protocol.
Determination of ASC Oligomerization
Cells were cultured in 6-well plates overnight, then infected with Mtb and treated with UA 4 h later.After being treated with UA for 12 h, cells were lysed with Triton Buffer (PBS, 0.5% Triton X-100) and then centrifuged at 6000 g at 4 ℃ for 15 min.The pellets were washed with PBS, and then crosslinked by incubation with 2 mM disuccinimidyl suberate (DSS, Sigma-Aldrich) for 30 min at room temperature.Subsequently, the sample was centrifuged again at 6000 g for 15 min at 4 ℃.The supernatants were discarded, and 40 μl of 1 × SDS loading buffer was added to crosslinked pellets.The samples were boiled at 100 ℃ for 10 min and then subjected to separation through SDS-PAGE.
Statistical Analysis
Statistical analyses were performed using GraphPad Prism 9 (GraphPad Software, La Jolla, CA, USA).Statistical significance was analyzed by one-way ANOVA analysis, and the results were expressed as mean ± standard deviation (SD).The data shown represent at least three replicate experiments; p < 0.05 was considered statistically significant.
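For example, a group comparison of the kind reported here could be reproduced along the following lines; this is a minimal sketch using SciPy, and the replicate values are placeholders, not data from the study.

```python
from scipy import stats

# Placeholder replicate measurements for control, Mtb, and Mtb + UA groups
control = [1.00, 0.97, 1.03]
mtb     = [2.10, 2.25, 2.05]
mtb_ua  = [1.30, 1.25, 1.40]

f_stat, p_value = stats.f_oneway(control, mtb, mtb_ua)  # one-way ANOVA across groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")            # p < 0.05 -> statistically significant
```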
Effect of UA on the Viability of J774A.1 and U937 Cells
In order to select the appropriate drug concentration for subsequent experiments, the cytotoxicity of UA was determined by MTT assay. Using three time points (24, 48, and 72 h), we found that UA did not affect the viability of J774A.1 and U937 cells at a dose of 10 μM (Fig. 1b, c). Therefore, the concentration of UA within 10 μM was considered safe and adopted for subsequent experiments.
UA Inhibits Cell Death Induced by Mtb Infection
To examine Mtb-induced cell death in vitro, we employed a model in which J774A.1 or U937 cells were exposed to Mtb, and cell death was evaluated by detecting the release of lactate dehydrogenase (LDH).As a stable enzyme in all cell types, LDH is rapidly released into the cell culture medium after cell death and plasma membrane damage.Therefore, LDH is a critical marker used in cell death studies.The results demonstrated that the proportion of LDH released by macrophages increased after Mtb infection, which was rescued by UA treatment (Fig. 2a, b).Furthermore, the change in LDH activity indicated that UA could alter the plasma membrane permeability of J774A.1 and U937 cells.
UA Alleviates Pyroptosis in Mtb-Infected Macrophages
Pyroptosis, also known as inflammatory cell necrosis, is a violently programmed cell lysis of death [19].The inflammasome activation plays a primary role in the occurrence and development of pyroptosis.For example, the NOD-like receptor (NLR) family member NLRP3 assembles intracellular protein complexes called NLRP3 inflammasomes in response to sensing certain pathogen products or sterile danger signals, which is a well-studied inflammasome [20,21].
Our previous studies have proved that Mtb-infected macrophages can promote the activation of the NLRP3 inflammasome, thereby causing pyroptosis and leading to the spread of bacteria and a severe inflammatory reaction [8,12,22].In the present study, we found that UA significantly inhibited the expression of NLRP3 in a timeand concentration-dependent manner (Fig. 3a-d) and inhibited the formation of apoptosis-associated specklike protein containing a caspase recruitment domain (ASC) oligomerization (Fig. 3e, f), thereby inhibiting the assembly of the NLRP3 inflammasome (Fig. 3g, h), effectively suppressing the activity of caspase-1, blocking N-terminal fragment of GSDMD (GSDMD-N), and inhibiting the occurrence of pyroptosis (Fig. 3i, j).In addition, UA inhibited the expression of high-mobility group box 1 protein (HMGB1) (Fig. 3i, j), which is closely related to inflammatory response, and suppressed the secretion of inflammatory factor IL-1β in a concentration-dependent manner (Fig. 3k, l), thus restricting the excessive inflammatory response.
UA Alleviates Necroptosis in Mtb-Infected Macrophages
Necroptosis is another form of inflammatory cell death triggered by extracellular stimuli that activate inflammation and cell death.Necroptosis is induced by various signals, including cell death receptor ligands, among which the TNF-α/TNFR1 signaling pathway is the most representative [23].We demonstrated that during Mtb infection, TNF-α was overcharged, and UA inhibited the release of TNF-α (Fig. 4a, c) and the overactivation of TNFR1 (Fig. 4b, d).In the necroptosis pathway, the RIPK1-RIPK3-MLKL signaling cascade is a crucial feature.Sequential phosphorylation of receptor-interacting protein kinase 1 (RIPK1) and RIPK3, in turn, phosphorylates pseudokinase mixed lineage kinase domain-like (MLKL) protein.This process leads to MLKL oligomerization and translocation to the plasma membrane, disrupting cell membrane integrity and resulting in lytic cell death [24].Our data supported that RIPK1, RIPK3, and MLKL were phosphorylated after Mtb infection, and the phosphorylation degree of these proteins was decreased after UA treatment (Fig. 4e, f).Correspondingly, the immunofluorescence assay showed that UA inhibited the accumulation of phosphorylated MLKL (pMLKL) on the cell membrane after Mtb infection (Fig. 4g, h).
UA Induces Autophagy in Macrophages by Inhibiting the Akt/mTOR Pathway
Autophagy plays a vital role in both innate and adaptive immunities of the host infected with Mtb, which is reflected in the degradation of dysfunctional and unnecessary cellular components in the process of autophagy to achieve cell homeostasis and organelle renewal [25].
The ability of Mtb to survive and replicate in host macrophages is the core of the TB mechanism [9,26].Some studies have demonstrated that Mtb clearance can be enhanced by targeting the macrophage autophagy mechanism to reduce inflammation [12,27].To evaluate the ability of UA to induce autophagy in Mtb infection, we measured the expressions of p62 and LC3 II at the protein level when exposed to different concentrations of UA (0, 5, 10, or 20 μM) and at different times of treatment.P62 is a selective substrate for autophagy and a marker of autophagy flux.LC3 lipidation is a specific marker of autophagosome in mammalian cells, widely used in autophagy detection.Our results showed that UA treatment decreased the expression of p62 and increased the expression of LC3 II in a concentration-and timedependent manner (Fig. 5a-d).
Many studies have reported that Akt/mTOR is a classical negative regulator of autophagy [28,29].Western blot analysis showed that Mtb treatment significantly enhanced the expressions of phosphorylated Akt (p-Akt) and phosphorylated mTOR (p-mTOR) at the protein level compared with the control group.However, UA treatment down-regulated the elevation of p-Akt and p-mTOR in a time-dependent manner (Fig. 5e, f).Based on these results, we concluded that UA could inhibit Akt/mTOR signaling pathway to activate autophagy.
UA Inhibits Pyroptosis and Necroptosis by Promoting Autophagy
Some studies have shown that the activation of autophagy can inhibit pyroptosis [30][31][32], and autophagy can also regulate necroptosis [33,34].Based on the above results, we proposed a conjecture as follows: is there any relationship between autophagy, pyroptosis, and necroptosis in Mtb-infected macrophages?Therefore, in the present study, we adopted the classical activator of autophagy, rapamycin (Rapa), and the autophagy inhibitor chloroquine (CQ) to investigate the effects of autophagy on pyroptosis and necroptosis of J774A.1 and U937 cells.The expressions of p62, LC3 II, GSDMD-N, and pMLKL were detected by Western blot analysis after treatment with Rapa, CQ, and UA.
On the one hand, similar to Rapa treatment, UA treatment also inhibited the expression of p62 and enhanced the expression of LC3 II, indicating that UA had a brilliant ability to promote autophagy.After treatment with Rapa and UA, the expressions of GSDMD-N, an essential protein of pyroptosis, and pMLKL, a key protein of necroptosis, were decreased.On the other hand, the results were further verified by CQ.The results showed that 20 μM CQ caused significant accumulation of p62 and LC3 II, increased the expressions of GSDMD-N and pMLKL, and blocked the smooth autophagy flow process induced by UA and the inhibition of pyroptosis and necroptosis to a certain extent (Fig. 6a, b).Furthermore, we observed that UA could promote the co-localization of ASC and LC3 through confocal microscopy (Fig. 6c, d), which further verified the close connection between autophagy and pyroptosis.In addition, it has been noted in the literature that the ZZ domain of p62 (amino acid 122-167) interacts with RIPK1 [33,35], which was also confirmed by Co-IP, and UA inhibited the interaction of p62 with RIPK1 compared with the Mtb group (Fig. 6e, f).RIPK1, a member of the death domain family of proteins, acts as a primary upstream regulator that controls cell survival and inflammatory signaling, including necroptosis [36].Therefore, the inhibition of the interaction between p62 and RIPK1 by UA also indirectly indicated the close relationship between autophagy and necroptosis.In conclusion, UA inhibited pyroptosis and necroptosis by promoting autophagy.
UA Promotes Autophagy and Inhibits Pyroptosis and Necroptosis, Which Is Related to TNF-α/TNFR1
As mentioned above, TNF-α overproduction promotes the progression of TB and is not conducive to host recovery, and TNF-α plays a crucial role in mediating the occurrence and development of necroptosis and pyroptosis [18,23,37].TNF-α has also been reported to be associated with autophagy [38].Therefore, we tried to investigate the role of TNF-α in necroptosis, pyroptosis, and autophagy.We detected the expressions of related indicators by exposing the Mtb-infection macrophages to a specific dose of TNF-α.As expected, we found that pMLKL and GSDMD-N were significantly up-regulated after co-incubation with TNF-α compared with the Mtb group, and UA could still inhibit the enhanced expressions of pMLKL and GSDMD-N after TNF-α treatment.In addition, it was worth noting that autophagy also showed a correlation with TNF-α.Compared with the Mtb group, the addition of TNF-α resulted in a significant accumulation of p62 and LC3 proteins, inhibiting autophagy and impeding UA-induced patency of the autophagic flow process (Fig. 7a, b).
DISCUSSION
TB remains the leading cause of infectious death worldwide.Current research has found that the primary cause of death in most people infected with Mtb in the fight against TB is severe tissue damage due to the excessive inflammation that results from the infection.Mtb has evolved immune evasion mechanisms that bypass the macrophagekilling mechanism by blocking phagosome maturation, mediating inflammation, and manipulating the host cell death program through a series of proteins encoded by its virulence genes [39].The inflammatory response induced by Mtb infection is a double-edged sword.The excessive inflammatory response promotes cell death, cell membrane rupture, and intracellular diffusion, allowing for the amplification of the infection [1].Therefore, treatment of excessive inflammation induced by Mtb infection can help advance the treatment process.HDT is based on balancing the host immune system and may provide a new avenue for discovering new anti-TB therapies.
In recent years, PCD does not only refer to apoptosis with the discovery of various cell death forms, such as pyroptosis and necroptosis.Different forms of cell death have various impacts on the body due to their different mechanisms [40].Pyroptosis and necroptosis are lytic forms of cell death that release potentially immune stimulatory molecules and have been described as proinflammatory cell death [6].The inflammasome activation plays a vital role in the classical pyroptosis pathway.Necroptosis, another form of inflammatory PCD, is often seen as a backup defense mechanism against cell death.Triggered when apoptotic caspase-8 is impeded, TNF-α binding to TNFR1 is the most characteristic triggering condition of necroptosis in vitro [41].
UA belongs to terpenoids and is widely distributed in natural plants.Previous studies have shown that UA can play an auxiliary anti-inflammatory role in some highly inflammatory diseases [15].Our results showed that UA inhibited the activation of the NLRP3 inflammasome, the cleavage of caspase-1 and GSDMD, and the release of IL-1β.In addition, UA inhibited TNF-α/TNFR1 and suppressed the cascade activation of RIPK1-RIPK3-MLKL.Therefore, pyroptosis and necroptosis co-existed in the process of Mtb infection, which expanded the inflammatory response.However, to our knowledge, we showed UA's effects on pyroptosis and necroptosis for the first time.
Since UA inhibited the expressions of crucial molecules involved in pyroptosis and necroptosis, we further explored whether there were relevant regulatory mechanisms upstream to manipulate these changes.Autophagy widely exists in eukaryotic cells as a self-protection mechanism and is akin to a self-feeding phenomenon.Autophagy degrades damaged organelles, old and abnormal or non-functional proteins, and other substances in the cell through lysosomes.Autophagy also contributes to cell energy metabolism, maintains cell homeostasis, and regulates cell survival and death [42].Previously, our group has demonstrated that activation of the Akt/mTOR signaling pathway during Mtb infection leads to the inhibition of autophagy and ultimately exacerbates the inflammatory response [12].We demonstrated that UA activated autophagy by inhibiting the classical autophagy signaling pathway Akt/mTOR and maintaining cell homeostasis.
However, then the question arises.Is there a parallel relationship between the activation of autophagy by UA and the inhibition of pyroptosis and necroptosis?Or is there some concatenation?Studies have shown that in the absence of Dram1, a stress-induced regulator of autophagy, infected macrophages become overburdened with bacteria, initiating pyroptosis and leading to the spread of infection [43].This finding suggests that inhibition of autophagy promotes the activation of pyroptosis in Mtb infection models.Although no studies have pointed to a relationship between autophagy and necroptosis in Mtb infection models, it shows that autophagy can regulate necroptosis in other infection models [33].Therefore, we hypothesized a similar association between autophagy and necroptosis in this study model, and UA might reverse inflammatory cell death by promoting autophagy.By adopting the autophagy activator Rapa and the autophagy inhibitor CQ, we proved that UA reversed pyroptosis and necroptosis by promoting autophagy.
Moving on to the details of the inferred pathway, RIPK1 is a crucial scaffold protein that regulates cell death and inflammation. RIPK1 has been implicated downstream of various immune receptors [44,45], and most studies have focused on TNF-α/TNFR1-mediated RIPK1 activation. We found that the RIPK1/RIPK3/MLKL cascade was activated in Mtb-infected macrophages. We focused on p62, which is not only involved in autophagy but is also a RIPK1-binding protein [33,35]. In our cell culture model, we verified the formation of the p62-RIPK1 complex, which was inhibited by UA treatment. The inhibition of the autophagy substrate p62 by UA could be interpreted as the degradation of p62 due to the activation of autophagy by UA. Therefore, the binding of p62 and RIPK1 was reduced accordingly. This finding also indirectly further illustrated the regulation of necroptosis by autophagy.
In addition, combined with the detrimental effect of cytokine overproduction on the development of inflammation and the significant inhibitory effect of UA on the expressions of TNF-α and its receptor TNFR1 after Mtb infection, we also focused on the critical proinflammatory factor TNF-α, which is closely associated with the process of Mtb infection.It has been suggested that TNF-α is closely associated with necroptosis and pyroptosis [38].In addition, TNF-α-inducing necroptosis inhibits late autophagy [46].Our results were consistent with the findings mentioned above.Mtb successfully escaped autophagy and induced necroptosis and pyroptosis.UA could reverse this event, which was primarily related to the inhibition of TNFα/TNFR1 by UA.
In summary, our data indicated that UA had an antibacterial and anti-inflammatory effect on Mtb-infected macrophages, which was reflected by the effect of UA on several key continuous events, namely, inhibition of Akt/ mTOR signaling pathway, induction of autophagy flux activation, inhibition of pyroptosis and necroptosis, and maintenance of cell homeostasis (Fig. 8).These results suggested that UA could be used as a novel HDT candidate for adjuvant therapy against TB to reduce the side effects of antibiotics and enhance efficacy.In addition, this study on the inflammatory cell death caused by Mtb infection of macrophages provided more perspectives for further understanding of the pathophysiology and escape mechanism of Mtb, which might provide some references for finding more critical and practical targets in the next step.
In the present study, we used the in vitro infection model of H37Ra, which is an attenuated strain and evolved from H37 like its sister strain H37Rv [47].Although there are differentially expressed genes in the two strains, H37Ra is considered a good material for studying virulence genes and the pathogenic mechanism of Mtb because it retains the immunogenicity of the strains and has particular safety.Therefore, elucidating the pathogenic mechanism of H37Ra is of great significance for understanding the pathogenic mechanism of Mtb [48].In addition, the low bioavailability of UA due to its poor solubility and permeability may limit the application of UA in biomedicine.At present, there are also some methods to improve drug bioavailability, such as nanocrystals technology or nanosuspension technology.Some studies have shown that it can significantly improve the solubility and dissolution rate of insoluble drugs, and it is suitable for a variety of drug delivery routes, including injection and oral administration, and has a good application prospect [49,50].In addition, the design and development of synthetic analogs of UA through structural modifications can also better break through its usage limitations [51].Therefore, future studies are needed to further focus on the improvement of UA bioavailability and the combined application with first-line anti-TB drugs to improve the treatment of TB in animal models and clarify whether there is potential for clinical application.
Fig. 1
Fig. 1 Effect of UA on the viability of J774A.1 and U937 cells.a The chemical structure of UA. b Proliferation assay to assess the cytotoxic effect of UA on J774A.1 cells.c Proliferation assay to assess the cytotoxic effect of UA on U937 cells.Data are shown as mean ± SD of at least three independent experiments.* * * p < 0.001.
Fig. 2
Fig. 2 UA inhibits cell death induced by Mtb infection. a, b The levels of LDH released into the supernatants of J774A.1 and U937 cells were measured using the LDH cytotoxicity assay kit, respectively. Data are shown as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, and ***p < 0.001.
Fig. 3
Fig. 3 UA alleviates pyroptosis in Mtb-infected macrophages. a-d The protein expression levels of NLRP3 in cell lysates of J774A.1 and U937 cells were determined by western blot assay, respectively. The bar graphs below show the statistical results of the grayscale values of NLRP3 protein levels. e, f Protein fractions of the different treatment groups, DSS cross-linked, in J774A.1 and U937 cells were analyzed by western blot, respectively, as indicated. The monomer, dimer, and oligomer forms of ASC are indicated in the figures. g, h The effect of UA on the ASC-NLRP3 protein interaction was examined in J774A.1 and U937 cells validated to express ASC and NLRP3, after Mtb treatment alone or in combination with UA, by immunoprecipitation with an anti-ASC antibody. i, j The effects of UA on the expression of the pyroptosis-associated proteins cleaved caspase-1, GSDMD-N, and HMGB1 in J774A.1 and U937 cells were detected by western blot, and the grayscale values were quantified. k, l The inhibition of Mtb-induced IL-1β production in J774A.1 and U937 cells by different concentrations of UA was detected by ELISA. Data are shown as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, and ***p < 0.001. ns, nonsignificant.
Fig. 4
Fig. 4 UA alleviates necroptosis in Mtb-infected macrophages. a, c Inhibition of Mtb-induced TNF-α release from J774A.1 and U937 cells at different concentrations of UA, measured by ELISA. b, d Effect of UA on TNFR1 protein expression in J774A.1 and U937 cells by western blot. e, f The effects of UA on the expression of the key necroptosis proteins phosphorylated RIPK1 (pRIPK1), phosphorylated RIPK3 (pRIPK3), and pMLKL in J774A.1 and U937 cells were determined by western blot assay. g, h Rabbit anti-pMLKL (green) and DAPI (blue) were used for immunofluorescence staining. The fluorescence signal and localization of pMLKL in J774A.1 and U937 cells were observed by confocal microscopy. Data are shown as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, and ***p < 0.001. ns, nonsignificant.
Fig. 5
Fig. 5 UA induces autophagy in macrophages by inhibiting the Akt/mTOR pathway. a-d In J774A.1 and U937 cells, the expression levels of p62 and LC3 II in cell lysates were determined by western blot assay, respectively. The bar graphs below show the statistical results of p62 and LC3 II protein levels. e, f Western blot assay of p-Akt and p-mTOR expression in J774A.1 and U937 cells. Total Akt and total mTOR were used as internal controls, respectively. The bar charts below show the statistical results of the relative intensity of p-Akt and p-mTOR. Data are shown as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, and ***p < 0.001. ns, nonsignificant.
Fig. 6
Fig. 6 UA inhibits pyroptosis and necroptosis by promoting autophagy. a, b Detection of p62, LC3 II, GSDMD-N, and pMLKL protein expression levels after addition of Rapa (1 µg/ml) or CQ (20 μM) in J774A.1 and U937 cells. The bar graphs below show the statistical results of the grayscale values of each protein level. c, d Confocal microscopic observation of J774A.1 and U937 cells after different treatments, stained with anti-ASC (red), anti-LC3 (green), and DAPI (blue), respectively. Yellow arrows indicate ASC spots, and white arrows indicate co-localization of ASC and LC3. e, f The effect of UA on the interaction of the RIPK1 and p62 proteins was examined by immunoprecipitation with an anti-RIPK1 antibody in J774A.1 and U937 cells validated to express RIPK1 and p62, after Mtb treatment alone or in combination with UA. Data are shown as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, and ***p < 0.001. ns, nonsignificant.
Fig. 7
Fig. 7 UA promotes autophagy and inhibits pyroptosis and necroptosis, which is related to TNF-α/TNFR1. a, b Western blot detection of pMLKL, GSDMD-N, p62, and LC3 II protein levels in lysates of Mtb-infected J774A.1 cells or U937 cells with or without TNF-α (30 ng/ml) or UA treatment. The bar charts below show the statistical results of the grayscale values for each protein level. Data are shown as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, and ***p < 0.001. ns, nonsignificant.
Fig. 8
Fig. 8 An illustration of the role of UA in inhibiting pyroptosis and necroptosis in Mtb-infected macrophages.In the present study, we found that UA promotes autophagy through synergistic inhibition of Akt/mTOR and TNF-α/TNFR1 signaling pathways, thereby inhibiting pyroptosis and necroptosis in Mtb-infected macrophages. | 7,379.4 | 2023-05-22T00:00:00.000 | [
"Biology"
] |
Observation of a photoinduced, resonant tunneling effect in a carbon nanotube–silicon heterojunction
A significant resonant tunneling effect has been observed under the 2.4 V junction threshold in a large area, carbon nanotube–silicon (CNT–Si) heterojunction obtained by growing a continuous layer of multiwall carbon nanotubes on an n-doped silicon substrate. The multiwall carbon nanostructures were grown by a chemical vapor deposition (CVD) technique on a 60 nm thick, silicon nitride layer, deposited on an n-type Si substrate. The heterojunction characteristics were intensively studied on different substrates, resulting in high photoresponsivity with a large reverse photocurrent plateau. In this paper, we report on the photoresponsivity characteristics of the device, the heterojunction threshold and the tunnel-like effect observed as a function of applied voltage and excitation wavelength. The experiments are performed in the near-ultraviolet to near-infrared wavelength range. The high conversion efficiency of light radiation into photoelectrons observed with the presented layout allows the device to be used as a large area photodetector with very low, intrinsic dark current and noise.
Introduction
Negative differential resistance (NDR), where the current decreases as a function of voltage, has been observed in the current-voltage curves of several types of structures (e.g., heavily doped p-n junctions, double and triple barriers, quantum wells, quantum wires and quantum dots, nanotubes, and graphene) [1][2][3][4][5][6][7]. In general, it has been associated with the occurrence of a process at the junction that allows electrons to tunnel between energy levels that are aligned only at a certain applied voltage. In the case of carbon nanotubes (CNTs), a number of cases have been reported in which this effect has been observed for both single-walled and double-walled CNTs [3][4][5].
In this work, a photosensitive junction was fabricated whose current-voltage characteristic shows a marked tunneling-like shape, with an NDR in the region between 1.5 and 2.2 V under excitation light. In fact, in this region, the observed current decreases and varies with the incident photon wavelength. The effect of the incident radiation is so strong that it allows the carriers to cross the 2.4 V junction barrier even at voltages of a few hundred mV.
The optoelectronic properties of semiconducting carbon nanotubes are advantageous for the development of photodetector devices in the near-to-mid-infrared region (from ≈1 to ≈15 μm) [8]. The mechanisms behind the infrared sensitivity of CNTs have been discussed by various authors [9,10]. The photoconductivity of individual CNTs, as well as of ropes and films of CNTs, has been studied extensively in both the visible [11] and the infrared [12] range. The variations in the photoconductivity of CNT-based devices have been attributed to the photon-induced generation of charge carriers in single-wall CNTs and the subsequent charge separation across the carbon nanotube-metal contact interface [11]. To the best of our knowledge, there is a lack of measurements in the UV region [8], and, moreover, to date there are no reports of NDR generated by light radiation.
In this paper, we report on the device characteristics, optoelectronic properties and, for the first time, a portion of the I-V curve showing a bell-shaped tunneling behavior with a marked NDR. The tunneling current is generated by the incident radiation and is a function of the wavelength and the incident power intensity.
Experimental
In a similar manner to that described in [13], the photodevice was realized by growing a film of multiwall carbon nanotubes (MWCNTs) on an n-doped silicon substrate. The substrates used to build the photodetector were fabricated by Fondazione Bruno Kessler (FBK) in Povo, Trento (Italy), unlike the substrates of the devices shown in Figure 1 of [13]. On the upper part of the n-doped silicon wafer (1 × 1 cm², 300 μm thickness and resistivity of 3-12 Ω·cm) an insulating layer of 60 nm of silicon nitride (Si3N4) is grown by plasma-enhanced chemical vapor deposition (PECVD). Two circular, metallic Ti/Pt electrodes of 1 mm in diameter are placed at a distance of 4 mm from each other (Figure 1a) on the silicon nitride surface. A metallic guard ring, 1 mm wide, serves to inhibit surface current dispersion during electrical measurements. On the bottom part of the silicon wafer, a thin n+ implanted layer ensures ohmic contact between the silicon and the metallic Ti/Pt electrodes, covering the entire back surface (Figure 1b). Thus, the main differences between the FBK substrate used in this work and the substrate used in [13] are: the Si3N4 insulation layer on the upper part of the Si is much thinner (60 nm instead of 140 nm), the Si thickness is different (300 μm instead of 500 μm), the Si resistivity is different (3-12 Ω·cm instead of 40 Ω·cm), and there is no Si3N4 insulating layer on the bottom part of the Si layer. Due to these differences, the results from this work differ from those obtained in the earlier work reported in [13].
The FBK substrate was then covered with a uniform layer of MWCNTs grown on the implantation area by CVD. The MWCNTs grow due to the presence of catalytic particles of about 60 nm in diameter, which are obtained by annealing a 3 nm thick Ni film at 700 °C for 20 min in a hydrogen atmosphere. The film was deposited on the substrate by thermal evaporation at a pressure of 10⁻⁶ Torr. The diffusion of Ni on Pt guarantees the absence of catalyst particles directly on the electrodes, so that MWCNTs grow only on the Si3N4 substrate. The MWCNTs were grown by keeping the substrate at a temperature of 700 °C for 10 min in an acetylene atmosphere. In Figure 2a, a scanning electron microscopy image of the resulting MWCNTs is reported; the inset shows a Raman spectrum of the MWCNTs exhibiting two main peaks attributed to the D- and G-bands. The G-band at ≈1600 cm⁻¹ corresponds to the splitting of the E2g stretching mode of graphite. The intense D-band indicates the presence of defective graphitic structures or amorphous carbon [14].
For the electrical measurements, a drain voltage was applied between the topside and backside electrodes (Figure 2b). The topside electrodes were both connected to ground. The behavior of the device as a radiation detector was investigated with continuous-wave laser diodes (LDs) at several wavelengths. The LD intensity was controlled by a low-voltage power supply and measured with a power meter. Measurements were performed at room temperature, at LD powers from 0.1 to 1.0 mW in 0.1 mW steps, with a drain voltage ranging from −5 to 30 V in steps of 0.1 V, and at fixed excitation wavelengths of 378, 405, 532, 650, 685, 785, 880 and 980 nm. The current was measured with a Keithley 2635 source meter, which also provided the drain voltage. The measurement procedure was controlled by LabView routines running on a PC.
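A minimal sketch of how such a sweep could be organized in software is given below. The instrument-access functions (set_ld_power, set_drain_voltage, read_current) are hypothetical placeholders for the actual LabView routines and Keithley control, which are not described in the paper; only the sweep parameters come from the text above.

# Hypothetical sketch of the measurement sweep described above.
# set_ld_power, set_drain_voltage and read_current stand in for the
# real instrument drivers, which are not described in the paper.
import numpy as np

wavelengths_nm = [378, 405, 532, 650, 685, 785, 880, 980]
powers_mw = np.arange(0.1, 1.01, 0.1)                 # 0.1 to 1.0 mW, 0.1 mW step
drain_voltages = np.arange(-5.0, 30.0 + 1e-9, 0.1)    # -5 to 30 V, 0.1 V step

def run_sweep(set_ld_power, set_drain_voltage, read_current):
    """Collect one I-V curve for every wavelength/power combination."""
    results = {}
    for wl in wavelengths_nm:
        for p in powers_mw:
            set_ld_power(wl, p)
            currents = []
            for v in drain_voltages:
                set_drain_voltage(v)
                currents.append(read_current())
            results[(wl, round(p, 1))] = np.array(currents)
    return results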
Results and Discussion
Measurements were carried out on both the CNT-Si heterojunction and the Si substrates to compare the behavior of the pure substrate and of the CNT-Si junction. Figure 3 shows the comparison between the dark currents of the bare substrate and of the CNT-Si heterojunction. The curves were obtained after stressing the junctions through different voltage sweeps. The reverse current for positive voltages, however, is very different: while the substrate shows a linear trend due to internal thermionic emission and a low shunt resistance, the CNT-Si junction exhibits essentially zero dark current until a threshold is reached. For this device, the threshold was found at 2.4 V. Above this threshold, the current assumes a linear trend. In any case, the thermionic current through the heterojunction is less than that in the substrate alone.
The detailed characteristics of the dark current around the threshold voltage are shown in Figure 4a, and Figure 4b shows the capacitance-voltage (C-V) measurement, which evidences the rapid decrease of the charge accumulation layer of the heterojunction around the threshold.
The CNT-Si junction exhibits interesting photosensitivity properties. While the substrate is insensitive to light, the device with CNTs deposited on the Si3N4 layer is highly sensitive to radiation in the range from 378 to 980 nm. Figure 5a reports the photocurrent measured in the configuration shown in Figure 2b. When the drain voltage exceeds the threshold voltage shown in Figure 3, the reverse photocurrent begins to grow linearly until reaching a plateau, which is constant over a large voltage range. The photocurrent depends quite linearly on the intensity of the illumination, as shown in Figure 5b. No saturation effects were observed up to tens of mW. The photodetector is sensitive to light over a wide range of wavelengths. Figure 5c shows the measured photoresponsivity (photocurrent generated by 1 mW of incident light) for wavelengths ranging from 378 to 980 nm. When illuminated by monochromatic light from a filtered xenon lamp, the external quantum efficiency (EQE) trend is similar to that of the LD-illuminated experiment, as shown in Figure 5d.
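For readers who want to relate the two quantities, photoresponsivity and EQE can be converted with the standard relations R = I_ph/P and EQE = R·h·c/(q·λ). The numerical inputs in the sketch below are illustrative only, not values taken from the figures.

# Convert photocurrent and optical power into responsivity and EQE.
# The example numbers are illustrative, not measured data from the paper.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def responsivity(photocurrent_a: float, power_w: float) -> float:
    """Responsivity in A/W."""
    return photocurrent_a / power_w

def eqe(photocurrent_a: float, power_w: float, wavelength_m: float) -> float:
    """External quantum efficiency (electrons per incident photon)."""
    return responsivity(photocurrent_a, power_w) * H * C / (Q * wavelength_m)

# Example: 0.4 mA of photocurrent for 1 mW of 730 nm light
print(eqe(0.4e-3, 1e-3, 730e-9))  # ~0.68, i.e., an EQE of about 68%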
It should be noted that the efficiency of the detector for near-ultraviolet radiation is well above that of Si photodetectors. This effect was observed in several similar devices, as reported in [13,[15][16][17]]. However, in this report, there are some relevant new aspects to be noted. The first is that the EQE of the present device exhibits a maximum around 700 nm, a wavelength much shorter than observed in earlier works [13]. In addition, the EQE shape is more symmetric over a large wavelength range and remains high at wavelengths from the near-UV to the near-IR. The second important difference is the smaller threshold value obtained in this case. Both of these results lead to improved performance of the current device. Moreover, for these devices, we observed for the first time a non-zero current under illumination in the reverse voltage region below the 2.4 V junction threshold. The shape of the current-voltage curve presents an NDR and resembles that of a resonant tunneling junction. The drain voltage at maximum photocurrent varies weakly as a function of the wavelength of the incident radiation and is at about 1.8 V for 378 nm, 1.5 V for 650 nm and 1.7 V for 980 nm. The ratio between the peak and valley tunnel photocurrent depends on the light intensity and wavelength, as does the NDR. The peak current is proportional to the EQE of photoconversion at any intensity and any wavelength, and is about 5 × 10⁻⁴ times the corresponding reverse current (at plateau) for all intensities and at all wavelengths. These effects were tested in a number of samples; Figure 5 and Figure 6 show the data obtained for two of these samples. Thus, the ratio between the NDR peak current and the reverse current at plateau cannot be obtained simply by comparing the current values reported in the two figures.
These observations clearly indicate that the incident light and, as a consequence, the photogenerated charges play a fundamental role in the heterojunction behavior. The similarity of the current shape to that of a typical resonant tunneling junction suggests that a kind of electronic resonance process induced by the photogenerated charges may be present. Recently, Castrucci et al. [18] stressed that multiwall CNTs can contribute to the photocurrent because their density of states shows the same van Hove singularities as single-walled CNTs. The excitation of electron-hole pairs is responsible for this effect in each single wall of the multiwall CNT. In the present case, the incident light produces the sizeable absorption band observed around 1.5-2.4 eV, which is a convolution of the several electronic transitions occurring in each nanotube. The contacts among the nanotubes ensure charge transfer between the nanotubes and its observation in the I-V curve. The bell shape of the absorption band detected in the I-V spectra mimics that observed in the tunneling effect in a highly doped p-n junction; in our case, however, the physics behind this process is completely different. Nevertheless, several questions remain open regarding the interpretation of the experimental data, and we cannot exclude the presence of different mechanisms.
Conclusion
In this paper, we report a negative differential resistance behavior generated by the incident radiation, varying as a function of wavelength and incident power intensity, for a new photosensitive device consisting of MWCNTs grown at 700 °C on a Si substrate. The junction presents rectifying properties with a 2.4 V threshold for the flow of reverse current, a strong photosensitivity to light at wavelengths between 378 and 980 nm, a very broad plateau extending over a large range of drain voltages, and good linearity of the photoresponsivity versus light intensity. The conversion efficiency of light radiation to photocurrent is maximum at 730 nm, with an external quantum efficiency of ≈92%, and an EQE of ≈43% at 378 nm. No saturation phenomena were observed at high intensity, and no significant differences between the diffuse light of a xenon lamp and the directed light of LDs were observed.
The most surprising result was the observation of a remarkable photoinduced resonant tunneling-like current, which was completely absent in dark conditions and absent in the substrate without CNTs. Therefore, the resonant tunnel-like current is generated only under light radiation and is a function of the wavelength as well as of the power intensity. The ratio between the resonant tunneling-like peak photocurrent and the plateau of the reverse photogenerated current was about 5 × 10⁻⁴ for all intensities and wavelengths. These features, which are currently still under investigation, suggest the potential use of the device for optoelectronics applications. | 3,277.4 | 2015-03-10T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Simultaneous choice of time points and the block design in the growth curve model
The aim of this paper is to consider optimality in the growth curve model with respect to two aspects, time and the block design, and to show some relations between information functions for different designs. A-, D- and E-optimality are studied.
correlations between observations, authors determined the optimal allocation of time points in the given interval.
The aim of this paper is to consider the optimal choice of time points and the block design in the experiment and show some relations between information functions for different designs.
We have organized the paper as follows. In Sect. 2 the model of the experiment is considered and the form of the information matrix for the estimation of the treatment effects in this model is given. In Sect. 3 optimality criteria for designs are formulated. Some results on optimality with respect to the allocation of time points and the allocation of treatments in the experiment are shown in Sect. 4. In Sect. 5 some relations between information functions for different designs are given. The last section contains the discussion, where some limitations of the present study are presented and possibilities for future research are indicated.
Extended growth curve model
Consider an experiment in a block design where treatments are arranged in n = bk plots, where b denotes the number of blocks and k is the size of each block. The material within the blocks is relatively homogeneous, but it differs between blocks. Suppose that in the block design the characteristic is measured at q time points, denoted by l_j, j = 1, 2, ..., q, and observations for v treatments are compared over time. Assume that observations on each plot are taken once at each time point, and observations on all plots are taken at the same time points. In this experiment we consider two kinds of designs: a design t ∈ T and a design d ∈ D, which denote an allocation of time points in the experiment and an allocation of treatments in the block design, respectively. Thus, T denotes the class of sets of time points chosen from the given time interval, and D denotes the class of block designs with v treatments and b blocks of size k. Different allocations of treatments in blocks correspond to different block designs in D. A block design is described by a matrix whose rows denote blocks and whose number of columns is equal to the block size; the elements of the matrix are the labels of the treatments.
The extended growth curve model for the experiment in the block design has the form Y = A_{1,d} B_1 C_{1,t} + A_2 B_2 C_{1,t} + E (1), where Y is the n × q matrix of observations, the n × v matrix A_{1,d} is the design matrix of treatment effects, and the n × b matrix A_2 = I_b ⊗ 1_k is the design matrix of block effects, where I_b is the b × b identity matrix and 1_k is the vector of 1's. The matrix A_{1,d} is indexed by d because it changes for different allocations of treatments in blocks.
The v × (p + 1) matrix B_1 and the b × (p + 1) matrix B_2 are matrices of unknown parameters of treatment effects and block effects, respectively. The (p + 1) × q matrix C_{1,t} is the design matrix of time points, where p denotes the degree of the polynomial, and E is the n × q matrix of random errors with mean zero. We assume that, within a block, observations on plots with different treatments measured at the same time points are uncorrelated, while the correlation between observations on a plot at different time points is described by the matrix Σ. Hence, the dispersion matrix of errors has the form D(E) = Σ ⊗ I_n, where Σ is a known q × q positive definite matrix and the symbol ⊗ denotes the Kronecker product. In the considered model the design matrix of time points, C_{1,t}, is a Vandermonde-type matrix whose j-th column is (1, l_j, l_j^2, ..., l_j^p)'. Model (1) can also be written in the form of the model given by Potthoff and Roy (1964); however, in that formulation a single matrix of curve coefficients is estimated, and its elements are sums of treatment effects and block effects. Since in our considerations only the estimation of treatment effects is of interest, the form of model (1) is more appropriate.
Assuming that the matrix of random errors has a normal distribution and the dispersion matrix is known, we can determine the information matrix for the estimation of treatment effects in model (1). Following Markiewicz and Szczepańska (2007), this matrix has the form F_{t,d} = (C_{1,t} Σ^{-1} C_{1,t}') ⊗ (A_{1,d}' Q_{A_2} A_{1,d}) (3), where Q_A = I − A(A'A)^{-} A' is the orthogonal projector onto the orthocomplement of the column space of the matrix A. Observe that the matrix C_{1,t} Σ^{-1} C_{1,t}' depends only on the allocation of time points in the experiment and is the information matrix for the estimation of β_1 in the univariate model y = C_{1,t}' β_1 + ε (4), where y ∈ IR^q, β_1 ∈ IR^{p+1} and ε ∈ IR^q with D(ε) = Σ. The matrix A_{1,d}' Q_{A_2} A_{1,d} depends on the allocation of treatments in the block design and is the information matrix for the estimation of the treatment effects (β_1) in the block design model y = A_{1,d} β_1 + A_2 β_2 + ε (5), where y ∈ IR^n, β_1 ∈ IR^v, β_2 ∈ IR^b, ε ∈ IR^n.
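A small numerical sketch of how these two component information matrices can be formed is given below; the time points, block design and Σ used here are made-up illustrations, not data or designs from the paper.

# Sketch: building V_t = C Sigma^{-1} C' and U_d = A1' Q_{A2} A1 for a toy example.
# The time points, the block design and Sigma below are illustrative only.
import numpy as np

# design t: q = p + 1 = 3 time points in [0, 1]
l = np.array([0.0, 0.5, 1.0])
p = 2
C = np.vstack([l**i for i in range(p + 1)])        # (p+1) x q Vandermonde-type matrix

Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(3), np.arange(3)))  # toy covariance
V_t = C @ np.linalg.inv(Sigma) @ C.T               # information matrix in model (4)

# design d: v = 3 treatments, b = 3 blocks of size k = 2 (a connected design)
blocks = [(0, 1), (1, 2), (2, 0)]                  # treatment labels per block
n, v, b = 6, 3, 3
A1 = np.zeros((n, v))
A2 = np.kron(np.eye(b), np.ones((2, 1)))
for blk, (t1, t2) in enumerate(blocks):
    A1[2 * blk, t1] = 1
    A1[2 * blk + 1, t2] = 1

Q_A2 = np.eye(n) - A2 @ np.linalg.pinv(A2.T @ A2) @ A2.T
U_d = A1.T @ Q_A2 @ A1                             # information matrix in model (5), rank v - 1
F = np.kron(V_t, U_d)                              # information matrix for B_1 in model (1)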
Looking for the optimal choice of time points in the experiment in model (1), we assume that the number of time points is equal to the number of regression coefficients. It was shown by de la Garza (1954) that, for the polynomial regression model of degree p with uncorrelated errors, the dispersion matrix of the estimated polynomial coefficients can be attained by spacing the information at only p + 1 values. The de la Garza phenomenon was used in the theory of optimal designs, for example in the papers by Luoma et al. (2001) and Mandal (2002). Moreover, Moerbeek (2005) showed that the efficiency of the optimal design with p + 1 ≤ q ≤ 6 time points, relative to the optimal design with p + 1 time points and considered as a function of the autocorrelation coefficient, decreases as the number of time points increases in model (4). This property was analyzed using the criteria of A-, D- and E-optimality for the estimation of β_1 in model (4). Also, Yang (2010) showed that the de la Garza phenomenon holds for many nonlinear models.
Moreover, we assume that the time points in C_{1,t} are distinct, so that C_{1,t} is nonsingular, and that Σ ∈ PD(q), where PD(q) denotes the class of positive definite matrices of order q. We look for the optimal design with respect to the allocation of treatments in blocks within the class of connected designs. Following Caliński and Kageyama (2000), a block design is said to be connected if, for any two given treatments i and i', it is possible to construct a chain of treatments i = i_0, i_1, ..., i_m = i' such that every two consecutive treatments in the chain occur together in a block. An example of a disconnected design and of a connected design fulfilling this definition can be found in John (1987).
In the class of connected designs, the rank of the information matrix is equal to the number of treatments minus 1. We assume that D is the class of connected designs, so rank(A_{1,d}' Q_{A_2} A_{1,d}) = v − 1. Taking into consideration the above properties of the matrices C_{1,t} Σ^{-1} C_{1,t}' and A_{1,d}' Q_{A_2} A_{1,d}, the information matrix (3) can be written as F_{t,d} = V_t ⊗ U_d (6), where V_t = C_{1,t} Σ^{-1} C_{1,t}' ∈ PD(p + 1) and U_d = A_{1,d}' Q_{A_2} A_{1,d}.
Optimality criteria
Let us consider the optimality criteria given in Pukelsheim (1993), which are based on an information function φ mapping the closed cone of nonnegative definite matrices into the real line, where the function φ is isotonic, concave, nonconstant and positively homogeneous.
Denote by Φ the class of all information functions, let G be a class of designs and let W_g be the information matrix of a design g ∈ G. The definition of optimality of a design is as follows. Definition 1: A design g* is called a φ-optimal design in the class of designs G if g* maximizes φ(W_g) over g ∈ G, for the given function φ ∈ Φ.
The aim of the paper is to consider the optimality of designs in two aspects: time and the block design. Let χ ∈ Φ and let F_{t,d} be the information matrix of a design (t, d) ∈ T × D, where T × D denotes the class of pairs of designs, the first from class T and the second from class D. We determine the optimal design in this setting using the following definition. Definition 2: A design (t*, d*) is called a χ-optimal design in the class T × D if (t*, d*) maximizes χ(F_{t,d}) over (t, d) ∈ T × D, for the given function χ ∈ Φ.
The classical optimality criteria, such as the average-variance criterion, the determinant criterion and the smallest-eigenvalue criterion, are based on the following information functions: φ̃_A(C) = k / tr(C^{-1}), φ̃_D(C) = (det C)^{1/k}, φ̃_E(C) = λ_min(C) (7), where C is the k × k nonsingular information matrix and tr(.), det(.), λ_min(.) denote the trace, the determinant and the smallest eigenvalue of a matrix.
Observe that in the class of connected designs the information matrix C ∈ IR^{k×k} is singular and rank(C) = k − 1. Then the functions given in (7) are not specified correctly, because φ̃_D(C) and φ̃_E(C) are equal to zero and φ̃_A(C) cannot be calculated. In this case the zero eigenvalue of the matrix C should not be taken into account, and the information functions have to be formulated as follows: φ_A(C) = (k − 1) / Σ_{i: λ_i(C)>0} λ_i(C)^{-1}, φ_D(C) = (Π_{i: λ_i(C)>0} λ_i(C))^{1/(k−1)}, φ_E(C) = λ_min(C) (8), where λ_i(C) denotes the i-th eigenvalue of the matrix C and λ_min(C) denotes the smallest nonzero eigenvalue of C.
In the case C = A ⊗ B, where A ∈ PD(p) and B ∈ NND(r) with rank(B) = r − 1, the information functions given in (8) take the form φ_A(A ⊗ B) = p(r − 1) / (tr(A^{-1}) Σ_{j: λ_j(B)>0} λ_j(B)^{-1}), φ_D(A ⊗ B) = ((det A)^{r−1} Π_{j: λ_j(B)>0} λ_j(B)^p)^{1/(p(r−1))}, φ_E(A ⊗ B) = λ_min(A) λ_min(B) (9), where λ_min(C) denotes the smallest nonzero eigenvalue of the matrix C.
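A short numeric sketch of these criteria, dropping zero eigenvalues as described above, might look as follows; the tolerance used to decide which eigenvalues count as zero is an implementation choice, not something specified in the paper.

# Information functions phi_A, phi_D, phi_E computed from the nonzero
# eigenvalues of a (possibly singular) nonnegative definite matrix, as in (8).
import numpy as np

def info_functions(C: np.ndarray, tol: float = 1e-10):
    lam = np.linalg.eigvalsh(C)
    lam = lam[lam > tol]                      # keep only the nonzero eigenvalues
    m = lam.size
    phi_A = m / np.sum(1.0 / lam)             # average-variance criterion
    phi_D = np.prod(lam) ** (1.0 / m)         # determinant criterion
    phi_E = lam.min()                         # smallest nonzero eigenvalue
    return phi_A, phi_D, phi_E

# For a Kronecker product C = np.kron(V, U) with V positive definite and
# rank(U) = v - 1, these values factor into products of the criteria of V and U.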
Examples of φ-optimal designs are A-, D- and E-optimal designs. We say that a design g* is A-, D- or E-optimal if, for each design g ∈ G, φ_A(W_{g*}) ≥ φ_A(W_g), φ_D(W_{g*}) ≥ φ_D(W_g) or φ_E(W_{g*}) ≥ φ_E(W_g), respectively. The same criteria apply when, instead of the function φ, we use φ̃ or χ.
Optimal designs
Based on the optimality criteria presented in Sect. 3, we formulate a theorem which characterizes the optimal design with respect to the allocation of time points and the allocation of treatments in the experiment.
Theorem 1
a) A design (t*, d*) ∈ T × D is A-optimal (χ_A-optimal) for the estimation of B_1 in model (1) if and only if t* ∈ T is A-optimal (φ_A-optimal) for the estimation of β_1 in model (4) and d* ∈ D is A-optimal (φ_A-optimal) for the estimation of β_1 in model (5).
b) A design (t*, d*) ∈ T × D is D-optimal (χ_D-optimal) for the estimation of B_1 in model (1) if and only if t* ∈ T is D-optimal (φ_D-optimal) for the estimation of β_1 in model (4) and d* ∈ D is D-optimal (φ_D-optimal) for the estimation of β_1 in model (5).
c) A design (t*, d*) ∈ T × D is E-optimal (χ_E-optimal) for the estimation of B_1 in model (1) if and only if t* ∈ T is E-optimal (φ_E-optimal) for the estimation of β_1 in model (4) and d* ∈ D is E-optimal (φ_E-optimal) for the estimation of β_1 in model (5).
Proof a) Observe that the information matrix for the estimation of B_1 in model (1), given in (6), is singular, so to find the optimal design we use the information functions given in (9). Consider the matrix M_{t,d} = V_t ⊗ U_d, let Φ be the class of all information functions, and let χ_A, φ_A, φ̃_A ∈ Φ. From (9) and the properties of the Kronecker product we obtain χ_A(M_{t,d}) = φ̃_A(V_t) φ_A(U_d). Observe that, from Definition 2, a design (t*, d*) which maximizes the above form in the class of designs T × D is optimal in model (1); since the two factors depend on t and d separately, the same designs t* and d* are also optimal in models (4) and (5).
The proofs of (b) and (c) run similarly to that of (a).
Some relations for information functions for different designs
In this section we show some relations between information functions for different optimality criteria. Let ≥_L denote the Loewner ordering. Recall that A ≥_L B if and only if A − B is a nonnegative definite matrix. Let W_{g_1} and W_{g_2} be the information matrices under designs g_1 and g_2, respectively. We say that design g_1 dominates design g_2, written g_1 ≽ g_2, if W_{g_1} ≥_L W_{g_2}. Unfortunately, the optimal design in a given class cannot, in general, be found using the Loewner ordering: Pukelsheim (1993) showed that there need not exist an optimal design which dominates, in the Loewner ordering sense, all designs in a given class. Theorem 2 shows some relations between information functions for different designs in models (1), (4) and (5) under the assumptions t_1 ≽ t_2 and d_1 ≽ d_2.
Theorem 2 Let V_{t_1} ∈ IR^{q×q} and V_{t_2} ∈ IR^{q×q} be the information matrices for the estimation of β_1 in model (4) for two different designs t_1, t_2 ∈ T, and let U_{d_1} ∈ IR^{v×v} and U_{d_2} ∈ IR^{v×v} be the information matrices for the estimation of β_1 in model (5) for two different designs d_1, d_2 ∈ D, where D is the class of connected designs. Moreover, let V_{t_1} ⊗ U_{d_1} ∈ IR^{vq×vq} be the information matrix for the estimation of B_1 in model (1) for the design (t_1, d_1).
Proof Observe that from the assumptions given in the theorem we have V_{t_1} ≥_L V_{t_2} and U_{d_1} ≥_L U_{d_2}. These relations imply that there exist nonnegative definite matrices P and R such that V_{t_1} = V_{t_2} + P and U_{d_1} = U_{d_2} + R. Consider the Kronecker product of the matrices V_{t_1} and U_{d_1}: V_{t_1} ⊗ U_{d_1} = V_{t_2} ⊗ U_{d_2} + V_{t_2} ⊗ R + P ⊗ U_{d_1}. The matrix V_{t_2} ⊗ R + P ⊗ U_{d_1} is nonnegative definite, so we get V_{t_1} ⊗ U_{d_1} − V_{t_2} ⊗ U_{d_2} ∈ NND(qv). (10) Based on Theorem 3.18 given by Schott (1997) and on (10), we obtain λ_i(V_{t_1} ⊗ U_{d_1}) ≥ λ_i(V_{t_2} ⊗ U_{d_2}) for all i. (11) Taking only the positive eigenvalues of these matrices: (a) From (11) we obtain an inequality involving λ_min(V_{t_2}), the smallest eigenvalue of the matrix V_{t_2}; taking the resulting expression to the power 1/(qv − q) gives the first relation, and in the same manner we obtain the second. (b) Similar arguments to those in (a) apply to case (b): take the nonzero eigenvalues of the matrices V_{t_1} ⊗ U_{d_1} and V_{t_2} ⊗ U_{d_2} and use (11).
Multiplying the inequality by 1/(qv − q) and inverting the resulting expression, we obtain the required relation; using the same method we can show the remaining one. An example of designs which fulfil Theorem 2 is given below.
Observe that by formulating a proof similar to the proof of the second formula of Theorem 2, part b, we can show relations given in the following corollary.
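The key step (10) in the proof above — that Loewner ordering of the factors carries over to their Kronecker products — can be checked numerically on small examples. The matrices in the sketch below are arbitrary illustrations, not designs from the paper.

# Numerical check of step (10): if V1 >= V2 and U1 >= U2 in the Loewner sense
# (all matrices nonnegative definite), then kron(V1, U1) - kron(V2, U2) is
# nonnegative definite. The matrices are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def random_nnd(k):
    A = rng.standard_normal((k, k))
    return A @ A.T                      # nonnegative definite by construction

V2 = random_nnd(3); P = random_nnd(3); V1 = V2 + P     # V1 >=_L V2
U2 = random_nnd(4); R = random_nnd(4); U1 = U2 + R     # U1 >=_L U2

diff = np.kron(V1, U1) - np.kron(V2, U2)
print(np.linalg.eigvalsh(diff).min() >= -1e-9)          # True: the difference is NND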
Conclusion
It was shown in Sect. 4 that designs which are optimal in the growth curve model are also optimal in the corresponding univariate models with respect to the same optimality criteria. Properties of the determinant, trace and eigenvalues of the Kronecker product of two matrices facilitate the determination of the optimal design in model (1). It would also be interesting to find the optimal design in the growth curve model using mixed optimality criteria. This problem is more complex, and solving it requires an analysis of the properties of information functions: the product of two arbitrary information functions is not itself an information function. To find the optimal design in the growth curve model with respect to mixed optimality criteria, a new functional would probably have to be devised. This opens a possibility for future research. | 3,735.2 | 2012-03-20T00:00:00.000 | [
"Mathematics"
] |
Developments of the Physical and Electrical Properties of NiCr and NiCrSi Single-Layer and Bi-Layer Nano-Scale Thin-Film Resistors
In this study, commercial-grade NiCr (80 wt % Ni, 20 wt % Cr) and NiCrSi (55 wt % Ni, 40 wt % Cr, 5 wt % Si) were used as targets and the sputtering method was used to deposit NiCr and NiCrSi thin films on Al2O3 and Si substrates at room temperature under different deposition time. X-ray diffraction patterns showed that the NiCr and NiCrSi thin films were amorphous phase, and the field-effect scanning electronic microscope observations showed that only nano-crystalline grains were revealed on the surfaces of the NiCr and NiCrSi thin films. The log (resistivity) values of the NiCr and NiCrSi thin-film resistors decreased approximately linearly as their thicknesses increased. We found that the value of temperature coefficient of resistance (TCR value) of the NiCr thin-film resistors was positive and that of the NiCrSi thin-film resistors was negative. To investigate these thin-film resistors with a low TCR value, we designed a novel bi-layer structure to fabricate the thin-film resistors via two different stacking methods. The bi-layer structures were created by depositing NiCr for 10 min as the upper (or lower) layer and depositing NiCrSi for 10, 30, or 60 min as the lower (or upper) layer. We aim to show that the stacking method had no apparent effect on the resistivity of the NiCr-NiCrSi bi-layer thin-film resistors but had large effect on the TCR value.
Introduction
A wide variety of materials have been investigated as thin-film resistors in integrated circuit (IC) applications. The need for appropriate properties-such as high sheet resistance, a low temperature coefficient of resistance, and stability under ambient conditions-has motivated investigations into electronic conduction mechanisms in a number of ceramal [1,2] and alloy resistor systems [3,4]. In IC fabrication technologies, resistors can be implemented by diffusion in the base and emitter regions of bipolar transistors or in the source/drain regions of a CMOS, or by depositing thin films on the surfaces of wafers. Numerous studies have been published on the active metal brazing of engineering ceramics to increase service temperatures [5].
The resistances of the NiCr and NiCrSi thin-film resistors were measured using the four-point probe method, and the resistivity was calculated from the measured resistances and thicknesses of the NiCr and NiCrSi thin films, according to Equation (1): R = ρ l / A, where R is the resistance, ρ is the resistivity, A is the area of the resistor, and l is the length of the resistor. In this study, the measurement temperatures were 25, 50, 75, 100, and 125 °C. The resistivity measured at those temperatures was used to find the TCR values of the NiCr-based and NiCrSi-based thin-film resistors and of the two bi-layer thin-film resistors. As already mentioned, we found that the NiCr-based thin-film resistors had positive TCR values whereas the NiCrSi-based thin-film resistors had negative TCR values, so we investigated novel NiCr-NiCrSi bi-layer thin-film resistors, the structures of which are shown in Figure 2. We hoped to develop a thin-film resistor with a TCR value close to 0 ppm/°C, so we created the bi-layer thin-film resistors using two different stacking methods: (i) NiCr thin films were deposited for 10 min as the upper layer, and NiCrSi thin films were deposited for 10, 30, or 60 min as the lower layer (Figure 2a); or (ii) NiCr thin films were deposited for 10 min as the lower layer and NiCrSi thin films were deposited for 10, 30, or 60 min as the upper layer (Figure 2b).
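As an illustration of Equation (1), a resistivity calculation from a measured resistance and the resistor geometry might look like the following; the film thickness, width and length used here are placeholders, not measured values from the paper.

# Resistivity from a measured resistance and the resistor geometry,
# following Equation (1): R = rho * l / A, i.e. rho = R * A / l.
# The numerical inputs below are illustrative placeholders.
def resistivity(resistance_ohm: float, thickness_m: float,
                width_m: float, length_m: float) -> float:
    area = thickness_m * width_m          # cross-sectional area A of the film
    return resistance_ohm * area / length_m

# Example: 1 kOhm measured on a 64.3 nm thick film, 1 mm wide, 5 mm long
rho = resistivity(1.0e3, 64.3e-9, 1.0e-3, 5.0e-3)
print(f"{rho:.3e} ohm*m")                 # ~1.3e-5 ohm*m for these numbers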
Figure 2. (a) NiCr thin films were deposited for 10 min as the upper layer, and NiCrSi thin films were deposited for 10, 30, or 60 min as the lower layer; (b) NiCr thin films were deposited for 10 min as the lower layer, and NiCrSi thin films were deposited for 10, 30, or 60 min as the upper layer.
To measure each layer's and the bi-layer's thicknesses, at least eight samples were deposited at the same time. After deposition, at least three samples were used to measure the thickness of the lower layer, which was obtained using α-step equipment and confirmed with FESEM. The other five samples were used to deposit the upper layer, and the total thickness of the bi-layer thin films was measured by the same process. The thickness of the upper layer was calculated as the total thickness minus the thickness of the lower layer and was confirmed by FESEM. The resistance of the bi-layer thin-film resistors was also measured using the four-point probe method, and the resistivity was calculated from the resistance and thickness of the bi-layer structures' thin films. Finally, we measured the TCR of the bi-layer structures' thin-film resistors.
Results and Discussion
The effect of deposition time on the thicknesses of the NiCr-based and NiCrSi-based thin films was investigated, and the results are shown in Figure 3. The thicknesses of the NiCr thin films deposited over 10, 30, and 60 min were about 64.3, 170.7, and 327.9 nm, and the thicknesses of the NiCrSi thin films deposited over 10, 30, 60, and 150 min were about 30.8, 90.7, 140.1, and 334.7 nm, respectively. The results in Figure 3 suggest that the deposition rate of the NiCr thin films was higher than that of the NiCrSi thin films. In both instances, however, as the deposition time increased, there was a linear increase in the thicknesses of the NiCr-based and NiCrSi-based thin films.
XRD was used to investigate the crystalline properties of the NiCr and NiCrSi thin films at room temperature. All of the XRD patterns (Figure 4) of the NiCr and NiCrSi thin films revealed an amorphous structure; no crystalline phases were apparent, and only the Ag and Al2O3 phases were observed (not shown here). These results suggested that the thickness (or deposition time) had no effect on the crystallization of the as-deposited NiCr and NiCrSi thin films. Because the NiCr and NiCrSi thin films were deposited using a sputtering method in a pure Ar atmosphere, we believe that oxidation did not occur during the deposition process. We used FESEM to observe the surface morphologies of the NiCr thin-film resistors (see Figure 5a for a deposition time of 10 min and Figure 5b for that of 60 min) and of the NiCrSi thin-film resistors (see Figure 5c for a deposition time of 10 min and Figure 5d for that of 60 min). Only nano-crystalline grains were observed, and the surface morphologies were almost unchanged, regardless of the deposition time.
Figure 6 shows the effects of thickness (deposition time) on the resistance and resistivity of NiCr and NiCrSi thin-film resistors measured at 25 °C. The resistances of the NiCr and NiCrSi thin-film resistors were recorded by the four-point probe method, and the resistivity was derived from the resistance using the measured thin-film thicknesses shown in Figure 3. As the temperature increased from 25 to 125 °C, the resistance of the NiCr thin-film resistors slightly decreased and that of the NiCrSi thin-film resistors slightly increased. The NiCr and NiCrSi thin-film resistance values at 25 and 125 °C were similar. Figure 6a shows that the resistance of the NiCr thin-film resistors monotonically decreased as the thin films' thickness increased. Figure 6b also shows that the thinner NiCrSi thin-film resistors showed higher resistance, and that the resistance reached a saturation value as the thin films' thickness became equal to or greater than 140.1 nm (i.e., when the deposition time was equal to or greater than 60 min). If we suppose that the thicknesses of the NiCr and NiCrSi thin-film resistors were independent of the measurement temperature, then the resistivities of the NiCr and NiCrSi thin-film resistors at 25 and 125 °C were similar and no apparent variations in resistivity were observed.
In the past, only a few papers have discussed the effects of thin films' thickness on their resistance and resistivity, as it has been difficult to discern a correlation between these values. In the free-electron model of metallic thin-film resistors with hard-wall boundary conditions, the discretization of energy levels makes it impossible to treat both the Fermi energy and the electron density as independent of thickness [14]. In this study, as the thin films' thicknesses increased, the log(resistivity) of the NiCr (Figure 6a) and NiCrSi (Figure 6b) thin-film resistors decreased linearly. Katumba and Olumekor found that the log(ρ) of Cu-MgF2 cermet thin-film resistors exhibited an approximately linear decrease with thickness from about 110 to 300 nm. Ultimately, they found the relationship between resistivity and thickness for thin-film Cu-MgF2 cermets to be as follows [2]: where ρf is the resistivity of the thin films, ρo is the limiting resistivity of very thick cermets, t is the film thickness, and S is a measure of the separation between the metallic islands embedded in the insulator matrix of the cermets. The present study not only sought the qualitative effects of the thicknesses of the NiCr and NiCrSi thin-film resistors but also attempted to quantify the relationships between resistivity and thickness. We found that the log(resistivity) values of the NiCr and NiCrSi thin-film resistors in Figure 6 decreased in an approximately linear manner as their thicknesses increased, similar to Cu-MgF2 cermet thin-film resistors.
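The "approximately linear" claim can be checked by a simple least-squares fit of log(resistivity) against thickness; the data arrays in the sketch below are placeholders, not the measured values behind Figure 6.

# Least-squares check of the approximately linear relation between
# log10(resistivity) and film thickness. The data arrays are placeholders.
import numpy as np

thickness_nm = np.array([30.8, 90.7, 140.1, 334.7])      # example thicknesses
resistivity_ohm_m = np.array([5e-4, 2e-4, 9e-5, 2e-5])   # hypothetical values

slope, intercept = np.polyfit(thickness_nm, np.log10(resistivity_ohm_m), 1)
print(f"log10(rho) ~ {slope:.4f} * t[nm] + {intercept:.2f}")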
Resistance at any temperature other than the standard temperature (usually taken to be 20 °C) can be determined from the specific resistance using the following formula: R = Rref [1 + α (T − Tref)], where R is the material resistance at temperature T; Rref is the material resistance at temperature Tref, usually 20 °C or 0 °C; α is the TCR value of the material, representing the resistance change factor per degree of temperature change; T is the material temperature in degrees Celsius; and Tref is the reference temperature at which α is specified for the material.
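A simple way to estimate α from resistance measurements at several temperatures, such as the 25-125 °C points used in this work, is a linear fit of the normalized resistance; the resistance values in the sketch below are illustrative only.

# Estimate the TCR (alpha, in ppm/°C) from resistance measured at several
# temperatures, using R = R_ref * (1 + alpha * (T - T_ref)).
# The resistance values are illustrative, not measured data from the paper.
import numpy as np

T = np.array([25.0, 50.0, 75.0, 100.0, 125.0])           # measurement temperatures, °C
R = np.array([1000.0, 1000.5, 1001.1, 1001.6, 1002.2])   # hypothetical resistances, ohm

T_ref = 25.0
R_ref = R[0]
slope, _ = np.polyfit(T - T_ref, R / R_ref, 1)            # slope = alpha in 1/°C
print(f"TCR ~ {slope * 1e6:.0f} ppm/°C")                   # ~22 ppm/°C for these numbers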
The TCR values of the as-deposited NiCr and NiCrSi thin-film resistors are shown in Figure 7 as a function of the thin films' thicknesses, using the measured results shown in Figure 6. For most pure metals, TCR values are positive. Dhere et al. found that NiCr thin-film resistors with low positive TCRs (i.e., less than 100 ppm/°C) were obtained at all thicknesses studied when the total atomic content of chromium, oxygen, and carbon reached 50%-55% [15]. The TCR value of Ni metal is 0.00017 and that of Cr metal is 13 × 10⁻⁸, meaning that their resistance increases with increasing temperature. Nevertheless, the TCR value of the as-deposited NiCr thin-film resistors was positive, in the range of 197.2 to 230.1 ppm/°C, which is larger than those of Ni and Cr metals. Single-crystalline Si has a TCR value of about −0.04 (depending strongly on the presence of impurities in the material), so Si could be added to change the TCR value of NiCr-based thin-film resistors. As Figure 7 shows, the TCR values of the as-deposited NiCrSi thin-film resistors were negative, in the range of −106.4 to −153.3 ppm/°C, meaning that the resistance decreased as the measurement temperature increased. The TCR value of the NiCrSi thin-film resistors, shown in Figure 7, showed no significant change as the thin-film thickness increased from 30.8 nm to 334.7 nm. Ni and Cr are metals, Si is a semiconductor, and the XRD patterns in Figure 4 show that the NiCrSi thin films were in the amorphous phase. Those results suggest that, as the NiCrSi thin films are deposited, Ni and Cr form an alloy and the NiCr alloy then forms a NiCrSi compound with Si. The electrical properties of a compound are the sum total of each of its components; hence, the electrical properties of Si would affect the TCR value of the NiCrSi thin-film resistors. Ni and Cr, as well as NiCr thin-film resistors, have positive TCR values. We believe that the negative TCR value of the NiCrSi thin-film resistors was caused by the addition of Si into the NiCr alloy.
As Figure 6 shows, the resistance of the NiCr thin-film resistors decreased linearly as their thickness increased with deposition time, and the resistance of the NiCrSi thin-film resistors was almost unchanged once the thickness became equal to or greater than 140.1 nm (10 min deposition time). To simplify the fabrication process, the thickness (deposition time) of the NiCr thin films was fixed at 64.3 nm (10 min), and the thickness (deposition time) of the NiCrSi thin films was set at 30.8 nm (10 min), 90.7 nm (30 min), and 140.1 nm (60 min), respectively. Cross-section images of the as-deposited bi-layer thin-film resistors with their various structures and different NiCrSi thin-film thicknesses are presented in Figure 8, where the bi-layer structure is easily observed. In Figure 8(a,b), a NiCr thin film deposited for 10 min was used as the upper layer, and a NiCrSi thin film deposited for (a) 10 min or (b) 60 min was used as the lower layer; in Figure 8(c,d), a NiCr thin film deposited for 10 min was used as the lower layer, and a NiCrSi thin film deposited for (c) 10 min or (d) 60 min was used as the upper layer. Figure 8 shows that the thickness of the NiCr thin films was in the range of 65.5-67.0 nm, similar to the value obtained from the single-layer thin films shown in Figure 3; whether the NiCr film was used as the upper layer or the lower layer, its thickness was almost the same.
The results in Figure 8 show that the thickness of the NiCrSi thin films in the bi-layer structure also increased with deposition time, but the films had different deposition rates depending on whether they were used as upper or lower layers. We also investigated how the deposition time of the NiCrSi thin films affected the thicknesses of the two NiCr-NiCrSi bi-layer structures, and the results are shown in Figure 9. When the NiCr thin film deposited for 10 min was used as the upper layer and the NiCrSi thin films deposited for 10, 30, or 60 min were used as the lower layer, the thickness of the bi-layer thin-film resistors (and of the NiCrSi thin films) was about 100.2 nm (33.8 nm), 192.2 nm (125.9 nm), and 335.5 nm (270 nm), respectively. When the NiCr thin film deposited for 10 min was used as the lower layer and the NiCrSi thin films deposited for 10, 30, or 60 min were used as the upper layer, the corresponding thickness of the bi-layer thin-film resistors (and of the NiCrSi thin films) was about 98.5 nm (31.6 nm), 166.6 nm (100.8 nm), and 303 nm (236 nm). We expected that as the deposition time of the NiCrSi thin films increased, so too would the thickness of the bi-layer structures. Our results also suggest that when the NiCr thin films were used as the lower layer, the NiCrSi thin films had a lower deposition rate. We also recorded the bi-layer thin-film resistors' resistance using the four-point probe method and derived the resistivity from the resistance using the measured thin-film thicknesses shown in Figure 9. In addition, we examined the effect of the NiCrSi deposition time on the resistance and resistivity of the bi-layer thin-film resistors as a function of the NiCrSi thin films' thickness, as Figure 10 shows.
The results in Figure 10 indicate two important points: (i) regardless of whether the NiCrSi thin films were used as the upper or the lower layer, the resistance of the bi-layer structure decreased as the NiCrSi thin films' thickness (or deposition time) increased; and (ii) the resistivity of the bi-layer structure remained stable even as the NiCrSi thin films' thickness increased. These results suggest that thin-film resistors with stable resistances can easily be achieved by using a bi-layer structure.
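For context, the resistivity values discussed above follow from the four-point-probe measurements by multiplying the sheet resistance by the measured film thickness. The sketch below is a minimal illustration of that conversion (our own code, assuming the standard π/ln 2 correction factor for a collinear probe on a thin film that is laterally much larger than the probe spacing; the numbers are hypothetical):

```python
import math


def sheet_resistance(voltage_v, current_a):
    """Sheet resistance (ohm/square) for a collinear four-point probe on a
    laterally large, thin film: Rs = (pi / ln 2) * V / I."""
    return (math.pi / math.log(2)) * voltage_v / current_a


def resistivity_ohm_m(voltage_v, current_a, thickness_m):
    """Resistivity from sheet resistance and film thickness: rho = Rs * t."""
    return sheet_resistance(voltage_v, current_a) * thickness_m


# Hypothetical numbers: 1 mA forced, 2.5 mV measured, 100 nm thick film.
print(resistivity_ohm_m(2.5e-3, 1.0e-3, 100e-9))
```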
Figure 11 presents the TCR values of our bi-layer thin-film resistors as a function of the NiCrSi thin films' thickness. We found that the deposition time of the NiCrSi thin films in the two different structures had a large effect on the TCR values of the NiCr-NiCrSi bi-layer thin-film resistors. When the NiCrSi thin films were used as the upper layer and their thickness increased from 31.6 nm to 236 nm, the TCR value of the bi-layer thin-film resistors dropped from 118.1 to 35.1 ppm/°C, approaching zero as the NiCrSi thin films' thickness increased. When the upper layer was the NiCr thin film and the thickness of the NiCrSi thin films increased from 33.8 to 270 nm, the TCR value of the bi-layer thin-film resistors shifted from 110.8 to −72.4 ppm/°C; hence, the TCR passed through values close to 0 ppm/°C and became negative as the NiCrSi thin films' thickness increased. Compared with the thickness of the bi-layer structures shown in Figure 9, when the NiCrSi thin films were used as the lower (upper) layer, their thickness increased to 33.8 (31.6) nm, 125.9 (100.8) nm, and 270 (236) nm, respectively. Apart from the thickness of the NiCrSi thin films, the material in contact with the Ag electrode is another possible factor affecting the TCR value of these bi-layer thin-film resistors. Many scattering effects are believed to influence the resistivity of bi-layer thin-film resistors, including surface scattering, grain-boundary (or interface) scattering, scattering from uneven or rough surfaces, and impurity scattering [16]. In a thin-film material with smooth, even surfaces, surface scattering is believed to be the main factor affecting the electrical properties.
Even though no clear splitting is observed in the bi-layer thin films shown in Figure 8, the variation of the TCR values is apparently influenced by the stacking order and the thickness of the NiCrSi thin films. If the lower layer is in contact with the Ag electrode, ohmic conduction will dominate at the contact between the lower-layer material (NiCr or NiCrSi thin film) and the Ag electrode. We believe that an interface layer exists between the upper-layer and lower-layer thin-film materials, so interface scattering and rough-surface scattering will occur at the contact boundaries, causing the variations in the TCR values of the bi-layer thin-film resistors. These results suggest that the thickness of the NiCrSi thin films dominates the TCR values in such bi-layer thin-film resistors. They also suggest that the bi-layer structure is an important technology for developing thin-film resistors with TCR values close to 0 ppm/°C.
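One simple way to rationalize how a negative-TCR NiCrSi layer can pull the composite TCR toward zero is to treat the two layers as resistors in parallel, so that the effective TCR is approximately the conductance-weighted average of the layers' TCRs. The sketch below is an idealized model of this kind (our own illustration; it neglects the interface and scattering effects discussed above, and the resistance and TCR values are purely illustrative):

```python
def parallel_tcr(r1, alpha1, r2, alpha2):
    """Effective TCR of two resistive layers connected in parallel,
    approximated by the conductance-weighted average of the layer TCRs."""
    g1, g2 = 1.0 / r1, 1.0 / r2
    return (g1 * alpha1 + g2 * alpha2) / (g1 + g2)


# Illustrative values only: a NiCr-like layer with +200 ppm/degC in parallel
# with a NiCrSi-like layer with -130 ppm/degC.  Lowering the resistance of the
# negative-TCR layer (e.g., by making it thicker) pulls the composite TCR
# toward zero and eventually makes it negative.
for r_nicrsi in (500.0, 300.0, 200.0):
    print(r_nicrsi, round(parallel_tcr(300.0, 200.0, r_nicrsi, -130.0), 1))
```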
Conclusions
In our bi-layer structures, regardless of whether the NiCr thin films were used as the lower layer or the upper layer, their thickness was around 65 nm. When the deposition time was 10 min, the thickness of the NiCrSi thin films in these bi-layer structures was similar to the thickness in a single-layer structure. The resistances of the bi-layer resistors decreased as the deposition time of the NiCrSi thin films increased. The TCR values of our as-deposited NiCr and NiCrSi thin-film resistors were in the range of 197.2 to 230.1 ppm/°C and −106.4 to −153.3 ppm/°C, respectively. For the bi-layer thin-film resistors, as the deposition time of the NiCrSi thin films increased from 10 to 60 min, their thickness increased from 31.6 to 236 nm when we used them as the upper layer, and from 33.8 to 270 nm when we used the NiCr thin films as the upper layer. Over the same range of deposition times, the TCR value changed from 118.1 to 35.1 ppm/°C when we used the NiCrSi thin films as the upper layer, and from 110.8 to −72.4 ppm/°C when we used the NiCr thin films as the upper layer.
"Engineering",
"Materials Science",
"Physics"
] |
The effect of spatial randomness on the average fixation time of mutants
The mean conditional fixation time of a mutant is an important measure of stochastic population dynamics, widely studied in ecology and evolution. Here, we investigate the effect of spatial randomness on the mean conditional fixation time of mutants in a constant population of cells, N. Specifically, we assume that fitness values of wild type cells and mutants at different locations come from given probability distributions and do not change in time. We study spatial arrangements of cells on regular graphs with different degrees, from the circle to the complete graph, and vary assumptions on the fitness probability distributions. Some examples include: identical probability distributions for wild types and mutants; cases when only one of the cell types has random fitness values while the other has deterministic fitness; and cases where the mutants are advantaged or disadvantaged. Using analytical calculations and stochastic numerical simulations, we find that randomness has a strong impact on fixation time. In the case of complete graphs, randomness accelerates mutant fixation for all population sizes, and in the case of circular graphs, randomness delays mutant fixation for N larger than a threshold value (for small values of N, different behaviors are observed depending on the fitness distribution functions). These results emphasize fundamental differences in population dynamics under different assumptions on cell connectedness. They are explained by the existence of randomly occurring “dead zones” that can significantly delay fixation on networks with low connectivity; and by the existence of randomly occurring “lucky zones” that can facilitate fixation on networks of high connectivity. Results for death-birth and birth-death formulations of the Moran process, as well as for the (haploid) Wright Fisher model are presented.
Introduction Fixation is the replacement of an initially heterogeneous population with the offspring of just one individual. The probability of fixation and the average time that is required for a mutant to take over the population are two fundamental quantities in ecology and evolution. Both fixation probability and average fixation time have been widely studied by physicists and mathematicians for almost a century, starting with the early works by Haldane [1], Fisher [2], Wright [3], and the series of seminal papers by Kimura [4][5][6].
A number of stochastic models have been used to study evolution in finite populations, of which the Moran process and the Wright Fisher process are perhaps the best known. The Moran process [7] assumes the existence of N individuals, and dynamics are modeled as a sequence of updates such that, at each step, one individual is chosen to be removed and another is chosen for reproduction (thus keeping the total population size constant). N such elementary updates correspond to one generational update. In the Wright Fisher process (see e.g. [8]), the next generation is populated by randomly drawing (with replacement) copies of individuals from the current population.
One of the central questions that has attracted the attention of researchers in the last several decades is the role of population structure in evolutionary dynamics. This research was pioneered by Kimura and Weiss, who were the first to include spatial structure in population models [6]. Maruyama analyzed the fixation behavior of a Moran process on regular spatial structures and discovered that the fixation probability is independent of the spatial structure of the population (for example, the fixation probability on regular graphs is the same as that in an unstructured population) [9,10]. Lieberman et al. extended the analysis to arbitrary graphs (networks) [11]. They showed that some networks may act as amplifiers, and others as suppressors, of selection. Namely, amplifier graphs increase (decrease) the fixation probability of advantageous (disadvantageous) mutants; suppressors, on the contrary, decrease (increase) the fixation probability of advantageous (disadvantageous) mutants [12][13][14]. In [15], the role of the order of the update events (birth and death) for evolutionary dynamics in the Moran process was studied. It was discovered that for 1D and 2D spatial lattices, the fixation probabilities for the birth-death and the death-birth formulations are significantly different.
Apart from the fixation probability, the average fixation time is an important characteristic of birth and death processes. Much research has been devoted to studying the mathematical properties of this quantity in various contexts. Frean and Baxter analyzed the mean fixation time of a mutant for homogeneous and heterogeneous graphs [16]. They considered four different update rules for the birth-death (BD) and death-birth (DB) processes on star and complete graphs: B-FD (birth depends on fitness and death is uniform), B-DF (birth is uniform and death depends on the unfitness), D-BF (uniform death and fitness-dependent birth), and DF-B (fitness-dependent death and uniform birth). They showed that the star is a suppressor in both DB processes and an amplifier in both BD cases. For further developments in the studies of evolutionary dynamics on graphs, one can refer to the review by Shakarian et al. [17], where the authors describe the original models of evolutionary graph theory and its extensions, as well as the calculation of the fixation probability and the time to fixation. Broom et al. [18] have studied the evolutionary game theory of finite structured populations with invasion-process updating rules. Exact solutions are presented for the fixation probability and time, both for the case where mutants have fixed fitness and for the case where the fitness of individuals depends on games played among the individuals, on the star, circle, and complete graphs. The authors of [19] studied the importance of fixation time for the rate of evolution and showed that in star-structured populations, evolution can slow down even while selection is amplified. Hindersin and Traulsen used analytical calculations to find the fixation time of a single mutant for small graphs [20]. They showed that, interestingly, there is no obvious relation between fixation probability and time. More recently, Askari and Aghababaei-Samani introduced an exact analytical approach to calculate the mean fixation time of a mutant for circle and star graphs [21].
In a number of previous studies, the evolutionary properties of mutants have been investigated under the assumption that the fitness values of the different types are constant. It has recently been recognized, however, that fluctuating fitness values can have important effects on the fixation probability and time [22,23]. In [24], the authors considered two different types of heterogeneity: a heterogeneous voter model, where each voter has an intrinsic rate to change state, and a partisan voter model, where each voter has an innate and fixed preference for one opinion state (0 or 1). Using a mean-field approximation, they compared the time to fixation for each of these two models and studied the population-size dependence of the time to fixation (i.e., the time to ultimately reach consensus). Rivoire et al. [25] used mathematical modeling and stochastic control theory to quantify phenotypic variation schemes, which are inherited, randomly produced, or environmentally induced, and analyzed adaptation towards such variations. Moreover, Melbinger and Vergassola [26] considered the effect of environmental alterations on the fitness of species. They showed that variability in the growth rates plays an important role in neutral evolution and that the fixation time is reduced in the presence of time-dependent environmental fluctuations. In addition, Cvijović et al. [29] found that temporal fluctuations in environmental conditions can influence the fate of a mutation and subsequently the efficiency of natural selection; they showed that temporal fluctuations can reduce the efficiency of natural selection and increase the fixation probability of mutants, even if the mutants are strongly deleterious on average. More generally, studies of evolutionary dynamics related to heterogeneity have given rise to a number of interesting papers. In [27], a different type of population heterogeneity is studied, in which individuals differ by the number of connections they have with others. Paper [28] studies the fixation times in both death-birth and birth-death processes under different assumptions on the underlying network structure, the game, and the fitness definition.
In [22], we used a number of models (several versions of the Moran model and the haploid Wright-Fisher model) to examine fixation probabilities for a constant-size population, where the fitness was a random function of both allelic state and spatial position. Namely, it was assumed that the fitness values of wild type cells and mutant cells were drawn from probability distributions and were fixed for each location. Different scenarios were examined, including correlated and uncorrelated fitness values of wild type cells and mutants, and different underlying population structures (circles and complete graphs). In the case where the probability distributions of mutant and wild type cells were identical, our model of spatial heterogeneity redefined the notion of neutrality for a newly arising mutation, as such mutations fixed at a higher rate than that predicted under neutrality. In particular, it was found that the probability of mutant fixation (in the case when the mutants were initially a minority) was significantly larger than their initial fraction, and this effect increased with N. In other words, mutants behaved as if they were selected for, although on average their fitness values were the same as the fitness values of wild type cells.
In the current paper we investigate the question of the timing of mutant fixation in a similar setting. Does spatial randomness of this sort influence the rate of mutant fixation, and if so, in what way? We use both analytical and numerical methods to answer this question under a variety of different assumptions on the probability distributions of wild type and mutant fitness values, and examine different biologically relevant scenarios. In particular, we investigate if spatial arrangement of cells plays a role in the timing of mutant fixation.
Constant population models with random fitness
Suppose a population consists of two types of individuals (or cells), A (the wild type) and B (the mutant). Consider a death-birth (DB) formulation of the Moran process. At each update, a death event is followed by a division event, in which the removed individual is replaced by the offspring of one of its neighbors. While individuals are chosen for death randomly with equal probabilities, the probability that a given neighbor reproduces is proportional to its division rate, or fitness. Traditionally, the fitness of individuals is defined by their type, such that the wild type and mutant cells have fitness values r_A and r_B, respectively. The notion of neighborhood is defined by a graph connecting the individuals. For example, one may consider the complete graph, where any cell can be chosen for division to replace any eliminated cell. Another example of a graph is a circle, where each cell has only two neighbors.
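As a concrete illustration, a minimal Python sketch of one such death-birth update with location-dependent fitness values might look as follows (this is our own illustrative code, not from the original study; all names and parameter values are hypothetical):

```python
import random


def db_update(types, wt_fitness, mut_fitness, neighbors):
    """One death-birth Moran update with location-dependent fitness.

    types[i] is 1 for a mutant and 0 for a wild type at location i;
    wt_fitness[i] and mut_fitness[i] are the fixed, location-dependent
    fitness values; neighbors[i] lists the graph neighbors of location i.
    """
    dead = random.randrange(len(types))                       # uniform death
    nbrs = neighbors[dead]
    weights = [mut_fitness[j] if types[j] else wt_fitness[j] for j in nbrs]
    parent = random.choices(nbrs, weights=weights, k=1)[0]    # fitness-prop. birth
    types[dead] = types[parent]


# Example: a circle of N = 5 cells, bimodal fitness values 1 +/- sigma,
# and a single initial mutant placed at a random location.
N, sigma = 5, 0.3
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
wt_fitness = [random.choice([1 - sigma, 1 + sigma]) for _ in range(N)]
mut_fitness = [random.choice([1 - sigma, 1 + sigma]) for _ in range(N)]
types = [0] * N
types[random.randrange(N)] = 1
db_update(types, wt_fitness, mut_fitness, neighbors)
```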
In constant population processes with two sub-populations, in the absence of mutations, there are only two absorbing states: the state where all cells are wild type, and the state where all the cells are mutants. If, starting from a nonzero number of mutants, the system reaches the all-mutant state, we say that the mutants have reached fixation (and otherwise we say that they have gone extinct). The expected time of fixation conditioned on the event of mutant fixation, t_i, has been calculated for several graphs. Antal and Scheuring [30] have used an evolutionary game model to calculate fixation of strategies in finite populations. Recently, Hindersin and Traulsen [20] have obtained the fixation time for all possible connected networks with four nodes (6 graphs).
Unlike the traditional Moran process, here we consider the case where the fitness of individuals depends on their environment, in the following sense. In a system of N individuals, the fitness depends not only on whether an individual is wild type or mutant, but also on the location of each individual on the graph. We assume that each vertex of the graph is associated with a (fixed) wild type fitness parameter, which is generated randomly according to a given distribution and does not change in time. Similarly, each spot is also characterized by a mutant fitness parameter. These parameters are also chosen randomly from a distribution; they characterize the fitness of mutants at different locations and remain constant in time. The wild type and mutant fitness probability distributions may in general be the same or different, and the choices of mutant and wild type fitness values could be uncorrelated or correlated to different degrees [22]. Here, for illustration purposes, we will consider a discrete symmetric bimodal fitness distribution, such that fitness parameters are randomly selected to be 1 + σ or 1 − σ, with 0 < σ < 1.
In addition to the DB formulation of the Moran process described above, we also studied the birth-death (BD) formulation of the Moran process, and the haploid Wright Fisher model. In the BD Moran process, for each update, first a cell is selected for reproduction (based on the cells' fitness values), and then a neighboring cell (excluding the reproducing cell itself) is chosen to be removed, such that all candidate cells have the same probability to be chosen. The offspring of the reproducing cell replaces the removed cell. In the haploid Wright Fisher model, each new generation of cells is created by randomly sampling (with replacement) the cell types from the current population. The probability to be picked is proportional to the cells' fitness. Most of the ideas of this paper are illustrated by using the DB Moran model. The results obtained from the other models are similar and are presented in S1 Text.
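For comparison, one generation of the haploid Wright Fisher model with location-dependent fitness can be sketched as follows (again our own illustration; the sampling weights follow the verbal description above, and the example values are hypothetical):

```python
import random


def wright_fisher_generation(types, wt_fitness, mut_fitness):
    """One generation of the haploid Wright Fisher model: every location is
    refilled with a type sampled (with replacement) from the whole current
    population, with probability proportional to each cell's fitness at its
    current location."""
    weights = [mut_fitness[i] if t else wt_fitness[i] for i, t in enumerate(types)]
    parents = random.choices(range(len(types)), weights=weights, k=len(types))
    return [types[p] for p in parents]


# Example: N = 4 cells, one mutant, fitness values 1 +/- 0.3 at each location.
print(wright_fisher_generation([1, 0, 0, 0],
                               [0.7, 1.3, 1.3, 0.7],
                               [1.3, 0.7, 1.3, 0.7]))
```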
Calculating the mean conditional fixation time
To find the mean conditional fixation time, we use Chapman-Kolmogorov equations to find the fixation properties of a mutant. In a population of N individuals, there are M = 2^N distinct states. This can be shown by observing that in the presence of m mutants, 0 ≤ m ≤ N, there are N!/(m!(N − m)!) distinct configurations; summing these over m gives 2^N. For each fixed fitness realization γ, let us denote by T^γ_{i→j} the probability of transitioning from state i to state j, 1 ≤ i, j ≤ M. Let us denote the absorbing state (or the set of absorbing states) of interest as E. In our particular case, E is the state where each location contains a mutant (mutant fixation). The other absorbing states comprise the set E_1 (in our case this is the state of all wild type cells, that is, mutant extinction). Following [30], denote by ρ^γ_i(t) the probability of being absorbed in state E starting from state i after exactly t steps, under fitness configuration γ. The total probability of absorption in E is then given by ρ^γ_i = Σ_{t≥0} ρ^γ_i(t). For this quantity we have

ρ^γ_i = Σ_j T^γ_{i→j} ρ^γ_j,    (1)

where the summation in j goes over all the states, and

ρ^γ_E = 1,  ρ^γ_{E_1} = 0.    (2)

This is a linear system of M − 2 equations for ρ^γ_i (the Chapman-Kolmogorov equation [31]). Next, let us denote by t^γ_i the mean conditional time it takes to get absorbed in state E starting from state i under configuration γ, given that absorption happens:

t^γ_i = (1/ρ^γ_i) Σ_{t≥0} t ρ^γ_i(t),

and further denote τ^γ_i = t^γ_i ρ^γ_i. We have, for t ≥ 1,

ρ^γ_i(t) = Σ_j T^γ_{i→j} ρ^γ_j(t − 1).    (3)

Multiplying eq (3) by t, summing up from t = 1 to infinity, and changing the summation index, we obtain

τ^γ_i = Σ_j T^γ_{i→j} (τ^γ_j + ρ^γ_j),    (4)

where the summation in j goes over all the states, and

τ^γ_E = τ^γ_{E_1} = 0.    (5)

Eqs (1) and (4) comprise a closed system that can be solved for ρ^γ_i and τ^γ_i for all i ≠ E, E_1. Then, the conditional mean time of absorption under configuration γ is given by t^γ_i = τ^γ_i / ρ^γ_i. We are interested in the expectation of this quantity over all realizations of the fitness values γ:

t̄_i = E_γ [ τ^γ_i / ρ^γ_i ].    (6)

As an example, we apply this theory to the circle graph in the context of the death-birth Moran process. For illustration purposes, we use the population size N = 3 (note that in this case, the circle and the complete graph coincide). Denote the states of the Markov chain by the vector (n_1, n_2, n_3), where n_i = 1 if site i is occupied by a mutant and n_i = 0 otherwise. There are two absorbing states: (000), the state occupied entirely by wild-type cells, and (111), the state filled entirely with mutants. The states (100), (010), and (001) are the states of the Markov chain with one mutant, and (101), (110), and (011) are the states with two mutants (Fig 1). We use the notation (a, b, c) to denote the wild type fitness values at locations (1, 2, 3); the mutant fitness values at these locations are denoted by (ã, b̃, c̃). For each fitness configuration, the probability of reaching the state E (the state of all mutant cells) starting from state (n_1 n_2 n_3) is denoted by ρ_{n_1 n_2 n_3}. Using formulas (1) and (2), one can write the Chapman-Kolmogorov equations for the fixation probabilities ρ_{100}, ρ_{010}, ρ_{001}, ρ_{110}, ρ_{101}, and ρ_{011} under a fixed set of fitness values. Denote by ρ̄_m the fixation probability starting with m mutants, averaged over all realizations of the fitness value sets. For N = 3, if the mutant and wild type fitness values are generated from the same distribution, the average fixation probability is a constant independent of the probability distribution [22], ρ̄_m = m/N, which coincides with the result ρ_m = m/N for neutral mutants in the absence of randomness. Note that for larger values of N (namely, all N > 3), the mutant fixation probability is no longer m/N, and it depends on the distribution.
In particular, for minority mutants (that is, for m < N/2), the probability of fixation increases with the variance of the underlying distribution [22]. Denoting by t_{n_1 n_2 n_3} = τ_{n_1 n_2 n_3} / ρ_{n_1 n_2 n_3} the mean conditional time needed to go from state (n_1 n_2 n_3) to state (111) under a fixed fitness realization, eqs (4) and (5) yield six linear Chapman-Kolmogorov equations for the quantities τ_{n_1 n_2 n_3}. For N = 3, we can solve these equations analytically, and the mean conditional fixation time (averaged over all realizations of the fitness values) is then obtained from eq (6). We have written a Mathematica code that generates the set of equations for any N (the equations for N = 4 are presented in S1 Text). However, this approach is only practical for small values of N, due to the large number of equations and configurations. For larger networks, we use the canonical matrix method [32] (see S1 Text) or stochastic simulations to find the fixation probability and the mean conditional fixation time.
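For small networks, the linear systems above can also be assembled and solved numerically. The sketch below (our own illustrative code, written for the DB Moran process on a complete graph) enumerates all 2^N states for one fixed fitness configuration and solves eqs (1)-(2) and (4)-(5) directly:

```python
import itertools
import random

import numpy as np


def db_transition_matrix(wt_fit, mut_fit):
    """Transition matrix of the DB Moran process on a complete graph for one
    fixed fitness configuration.  States are all 2**N binary tuples."""
    N = len(wt_fit)
    states = list(itertools.product((0, 1), repeat=N))
    index = {s: k for k, s in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for s in states:
        fit = [mut_fit[i] if s[i] else wt_fit[i] for i in range(N)]
        for dead in range(N):                          # uniform death
            others = [j for j in range(N) if j != dead]
            total = sum(fit[j] for j in others)
            for parent in others:                      # fitness-proportional birth
                new = list(s)
                new[dead] = s[parent]
                T[index[s], index[tuple(new)]] += fit[parent] / (N * total)
    return states, T


def conditional_fixation_time(wt_fit, mut_fit, start):
    """Fixation probability and mean conditional fixation time from `start`."""
    states, T = db_transition_matrix(wt_fit, mut_fit)
    index = {s: k for k, s in enumerate(states)}
    N, M = len(wt_fit), len(states)
    fix, ext = index[(1,) * N], index[(0,) * N]
    A = np.eye(M) - T
    A[fix, :] = 0.0
    A[ext, :] = 0.0
    A[fix, fix] = 1.0
    A[ext, ext] = 1.0
    b = np.zeros(M)
    b[fix] = 1.0
    rho = np.linalg.solve(A, b)          # eq (1) with boundary conditions (2)
    c = T @ rho
    c[fix] = 0.0
    c[ext] = 0.0
    tau = np.linalg.solve(A, c)          # eq (4) with boundary conditions (5)
    k = index[start]
    return rho[k], tau[k] / rho[k]


# Hypothetical example: N = 4, one mutant at site 0, bimodal fitness values.
random.seed(1)
sigma = 0.3
wt = [random.choice([1 - sigma, 1 + sigma]) for _ in range(4)]
mut = [random.choice([1 - sigma, 1 + sigma]) for _ in range(4)]
print(conditional_fixation_time(wt, mut, (1, 0, 0, 0)))
```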
Stochastic simulations
The equations described above only have practical applicability for relatively small networks.
As N increases, the number of equations grows as 2^N − 2 for the complete graph and as N(N − 1) for the circle. Instead of solving the algebraic equations, stochastic numerical simulations have to be implemented.
In the numerical simulations, we consider the population on a graph, where each vertex is an individual (wild type or mutant). For each simulation, we generate wild type fitness values and mutant fitness values from their respective probability distributions. These values are associated with their vertices on the graph and are kept constant until the end of the simulation. Since a fitness value is randomly selected from the bimodal distribution (1 + σ or 1 − σ) for each cell type at every node of the graph, there are 2^{2N} possible fitness configurations.
We start with an initial condition of one mutant (type B) and N − 1 wild types (type A). At each time step, as long as the mutant population has not yet become fixated or gone extinct, one individual is randomly removed and one of its neighbors is chosen for division with a probability proportional to its fitness. The simulation is stopped when the mutants become extinct or reach fixation. If mutant fixation is reached, we record the number of updates until fixation; this gives the time to fixation for a particular (successful) run. This process is repeated a number of times for each configuration. The mean conditional time for each configuration is the sum of all individual fixation times divided by the number of successful samples (that is, the number of runs where mutant fixation was observed). The overall mean conditional fixation time is the average of the mean conditional fixation times over all possible configurations.
A computational difficulty with this approach arises from the fact that for some fitness configurations, the probability of mutant fixation is very low. For such configurations, after running the simulation a fixed number of times, it may happen that fixation never occurs, in which case the configuration will not contribute to the calculated mean conditional fixation time. The configurations with low fixation probabilities are less likely to fix, which may skew the numerical results. In order to avoid this problem, we executed over 10^6 independent realizations for each configuration. As a result, the simulation is very costly.
For larger networks, instead of the exhaustive calculation described above, we used a sampling method similar to the method that was implemented in [22]. Since listing all the configurations becomes computationally impossible, we only looked at a subset of possible realizations of random fitness values. For each such realization, we ran simulations starting from one mutant cell, until the mutants reached fixation. At this point, the time it took to fixate was recorded, and we moved on to the next randomly chosen configuration. The mean conditional fixation time was then approximated as an average over fixation times obtained for these realizations.
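A condensed Python sketch of this sampling procedure for the DB Moran process on a circle is given below (our own illustration; runs that end in mutant extinction are simply discarded here, which is one possible simplification of the procedure described above, and all parameter values are arbitrary):

```python
import random


def mean_conditional_fixation_time(N, sigma, n_fixations=200, seed=0):
    """Estimate the mean conditional fixation time of a single mutant on a
    circle of N cells under the DB Moran process, sampling random fitness
    configurations.  Runs ending in extinction are discarded."""
    rng = random.Random(seed)
    times = []
    while len(times) < n_fixations:
        wt = [rng.choice([1 - sigma, 1 + sigma]) for _ in range(N)]
        mu = [rng.choice([1 - sigma, 1 + sigma]) for _ in range(N)]
        types = [0] * N
        types[rng.randrange(N)] = 1                  # a single initial mutant
        steps = 0
        while 0 < sum(types) < N:
            dead = rng.randrange(N)                  # uniform death
            nbrs = [(dead - 1) % N, (dead + 1) % N]  # circle neighbors
            w = [mu[j] if types[j] else wt[j] for j in nbrs]
            types[dead] = types[rng.choices(nbrs, weights=w, k=1)[0]]
            steps += 1
        if sum(types) == N:                          # mutant fixation reached
            times.append(steps)
    return sum(times) / len(times)


print(mean_conditional_fixation_time(N=8, sigma=0.3))
```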
Results
The main results are presented for the DB Moran process. The BD Moran and the Wright Fisher model show similar trends and are presented in S1 Text.
Small circles: Complex parameter dependencies
We use the Chapman-Kolmogorov equations presented above to calculate the mean conditional fixation time. We start by examining the case of circular networks.
Dependence on the standard deviation. Suppose that fitness values for mutants and wild types are drawn from the same distribution, and use the example of a two-valued distribution function where the values 1 − σ and 1 + σ are equally likely. In Fig 2 we perform calculations for N = 3 through N = 6 (panels (a)-(d)) and plot the dependence of the mean conditional fixation time on the standard deviation σ, which can be interpreted as the amount of randomness in the system. We start the discussion with the case where the fitness values of wild types and mutants are chosen independently from the same distribution (blue lines in Fig 2). We notice immediately that the mean conditional time to fixation depends on the amount of randomness, even for N = 3. This is in contrast with the mean probability of fixation for N = 3 (the result ρ̄_m = m/N above), which is independent of σ for the DB Moran process. Interestingly, the type of dependence changes with system size, N. The following patterns are observed. At σ = 0, the non-random values 4, 10, 20, and 35 are obtained for N = 3, 4, 5, 6, respectively. For nonzero values of σ, the probability distributions of fixation times are presented in S1 Text. As expected, for larger values of N fixation takes longer, and the probability distribution of the fixation times becomes wider. Finally, we notice in Fig 2 that as the value of σ increases, the mean conditional time to fixation increases for N = 3, decreases for N = 4, is nonmonotonic for N = 5, and increases rapidly for N = 6. As will be shown below, for larger values of N, the mean conditional fixation time is an increasing function of σ.
The effect of correlations. In Fig 2, the effect of correlations between mutant and wild type fitness values is also studied. As mentioned above, the blue lines show the case where the wild type and mutant fitness values are assigned independently. The green lines represent the case of anti-correlated fitness values (such that the mutant fitness at a given location is 1 − σ if the wild type fitness is 1 + σ, and vice versa). Finally, the orange lines correspond to fully correlated fitness values, where mutants have the same fitness values as the wild types at each location. We observe that the mean conditional fixation time is always the largest for the fully correlated case and the smallest for anti-correlated case.
Interestingly, the effect of randomness does not disappear in the fully correlated case, and in fact for N = 3 and N = 6 it is the largest in that case. It was shown in [22] that the probability of mutant fixation in the fully correlated case equals 1/N for all N ≥ 3. For the time to fixation, however, we can see that randomness plays a role even if the mutants behave exactly like the wild types.
The role of skewness. Next we investigate the effect of the skewness of the fitness probability distributions on fixation times. Again, we illustrate the results by using a two-valued fitness probability distribution with values x_1 and x_2. We assume that p(x_1) = p and p(x_2) = 1 − p, in which case the skewness is simply

S = (2p − 1) / sqrt(p(1 − p)).

The quantities p, x_1, x_2 can be expressed in terms of the mean μ, variance σ^2, and skewness S as

p = (1/2)(1 + S / sqrt(S^2 + 4)),  x_1 = μ − σ sqrt((1 − p)/p),  x_2 = μ + σ sqrt(p/(1 − p)).

For zero skewness, x_1 and x_2 are equidistant from the mean (Fig 3(a)), while for positive skewness, x_1 is near the mean and x_2 is large (and unlikely), and for negative skewness x_2 is near the mean and x_1 is small (and unlikely). We assume that both wild types and mutants have identical fitness distribution functions, and calculate the mean conditional fixation times for such distributions. Fig 3(b) presents results for N = 3. It turns out that in this system, the mean conditional fixation time is a decreasing function of skewness. It follows that the effect of randomness (delay in fixation) is the largest for negative skewness values. Panels 3(c) and 3(d) of Fig 3 show the dependence of fixation time on skewness for N = 4 and N = 6. Superficially, the result for N = 4 is in some sense the opposite of the N = 3 and N = 6 cases: the fixation time is an increasing function of skewness for N = 4. One trend remains unchanged for the uncorrelated and anti-correlated cases: as in the N = 3 case, the effect of randomness (whether it is an acceleration of fixation, as observed for N = 4, or a deceleration of fixation, as observed for N = 3 and N = 6 in the uncorrelated and fully correlated cases) is felt the most for negative skewness values. A more detailed study of the effect of skewness is presented in S1 Text.
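For reference, the two-point parametrization used here can be evaluated with a few lines of code (our own sketch, based on the formulas for p, x_1, and x_2 given above):

```python
import math


def two_point_distribution(mu, sigma, S):
    """Values and probability of a two-valued distribution with mean mu,
    standard deviation sigma, and skewness S: x1 occurs with probability p
    and x2 with probability 1 - p."""
    p = 0.5 * (1.0 + S / math.sqrt(S * S + 4.0))
    x1 = mu - sigma * math.sqrt((1.0 - p) / p)
    x2 = mu + sigma * math.sqrt(p / (1.0 - p))
    return p, x1, x2


# Zero skewness recovers the symmetric bimodal case x = mu +/- sigma.
print(two_point_distribution(1.0, 0.3, 0.0))   # (0.5, 0.7, 1.3)
# Positive skewness: x1 close to the mean, x2 large and unlikely.
print(two_point_distribution(1.0, 0.3, 2.0))
```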
Small complete graphs: Mean conditional time decreases with randomness
So far, all the calculations were performed for small circular networks. Next, we turn to complete graphs. In Fig 4 we perform calculations on a complete graph for N = 4 through N = 6 (panels (a)-(c)) and show the dependence of the mean conditional time on σ. In contrast with the results for the circular graphs, the type of dependence does not change with system size, N. As expected, for σ = 0 the non-random values 9, 16, 25 are obtained. As the value of σ increases, the mean time to fixation decreases. As will be shown below, for larger values of N, the mean conditional time is also a decreasing function of σ. This result is quite general. In S1 Text, we extended our calculations for the mean conditional fixation time on complete graphs to the BD formulation of the Moran process and to the (haploid) Wright Fisher model [22]. In these cases, the mean conditional fixation time is also a decreasing function of σ. The effect of correlations between mutant and wild-type fitness values is studied in Fig 4(a)-4(c). As for circular graphs, the mean conditional fixation time is always the largest for the fully correlated case and the smallest for the anti-correlated case. Fig 4(d) shows the dependence of fixation time on skewness in the case of N = 6 (other values of N show similar trends); the behavior is again qualitatively similar to that of the circular graphs. Finally, in S1 Text we studied the probability distribution of the time to fixation for nonzero σ. As in the case of small circles, the distribution of fixation times becomes wider with N.
Fitness probability distributions of mutants and wild types differ
So far we have considered the scenario where the probability distributions of the wild type and mutant fitness values were the same. Here we allow them to be different, and study two interesting cases (Fig 5): (a,b) only one of the two types has random fitness values, while the other is deterministic with the same mean fitness; (c,d) both types are random, but one of the types is advantageous. What we find is that all four situations exhibit similar dependencies, and the main factor determining the behavior is the type of network.
In Fig 6, we increase the population size to N = 6 and N = 7 and study cases where only one of the species (mutants or wild types) has random fitness, whereas the other species has a fixed fitness value, with both having the same mean. Panels (c) and (d) correspond to complete graphs, and it is clear that randomness accelerates fixation. Panels (a) and (b) show the results for circular graphs. While some non-monotonicity is present for the case of random wild types and deterministic mutants, it disappears for larger N, and the general result for circles is that randomness delays fixation. Fig 7 studies the case where either the wild types or the mutants are advantageous, while both cell types have random fitness. Again, randomness delays fixation for circular graphs (a) and accelerates it for complete graphs (b).
These results are consistent with the rest of the findings for circular networks and complete graphs. We delay an intuitive explanation of this phenomenon until the next section.
Larger networks
To investigate the effect of a random environment on the mean fixation time for larger networks, we turn to stochastic simulations. Again, we consider the complete graph and the circle arrangement with different population sizes. The simulated results are given for the DB Moran process in the case where the fitness values of both kinds of individuals are selected from the random (bimodal) distribution with mean one, i.e., 1 + σ or 1 − σ.
First, we investigate the impact of random fitness on the mean conditional fixation time of a mutant for the circle and the complete graph with different population sizes (Fig 8). We observe that, as expected, the larger the population size N, the larger the fixation time of the mutants. Further, for equal values of N, fixation happens faster on a complete graph than on a circle. This is also expected, as there are more pathways for mutants to spread on a complete graph, compared to a circle where a (one-dimensional) mutant patch can only grow through its two boundaries. Next, we explore the dependence of the mean fixation time on randomness. Recall that in the case of circles (see Fig 2), the mean conditional fixation time increased with σ for N = 3, decreased with σ for N = 4, was non-monotonic for N = 5, and increased again for N = 6. It turns out that the trend observed for N = 6 persists for larger values of N (see Fig 8(a)), where the mean conditional fixation time increases with the standard deviation.
Interestingly, the result for the complete graph is very different (Fig 8(b)). There, the mean conditional fixation time is a decreasing function of the standard deviation for all population sizes. The explanation for this phenomenon is quite intuitive. As mentioned above, on a circle, the mutant population spreads out from a single mutant as a connected patch. This patch must expand to the whole circle to reach fixation, and the presence of a fitness "dead zone" (a sequence of several consecutive low fitness values for the mutants on the random fitness landscape) serves as a hurdle that can significantly increase the fixation time, as there is no way around such dead zones. On the other hand, a complete graph allows many "paths" to fixation, because every spot is everyone's neighbor, and the presence of several low fitness spots does not preclude the mutants from spreading in the way it does in a 1D geometry. Moreover, for a fully connected graph, the presence of randomness actually creates opportunities, increasing the likelihood of "lucky" paths to fixation, where several "neighboring" spots have an elevated mutant fitness. This explains the decrease in the expected fixation time as randomness on a complete graph increases (Fig 8(b)).
We note that for small circles, the dependence of the mean conditional fixation time on randomness is less straightforward, because for very small networks the difference between the number of pathways to fixation on a circle and on a complete graph is not as drastic as it is for larger N.
This explanation of the effect of randomness on fixation time holds also for the scenarios where the fitness probability distributions are different for mutants and wild type cells. The following scenarios of interest were studied in the previous section. (i) The expected fitness value is the same for the wild types and the mutants, but either the wild type or mutant fitness values are constant (non-random and equal to their expectation), while the other type's fitness values have a nonzero variance. (ii) Both types have random fitness values, but the mean fitness of mutants is larger or smaller than that of the wild types. In all these cases, it was observed that for sufficiently large values of N, the mean conditional mutant fixation time is an increasing function of randomness for circles and a decreasing function of randomness for complete graphs.
To understand the drastic changes in the behavior for small networks (Fig 5) and larger networks (Figs 6 and 7), let us first turn to Figs 6(b) and 7(a), which describe larger circles and contain all 4 cases (a,b,c,d) listed in Fig 5. They all show an increase in the fixation time as randomness increases, which coincides with our prediction for circles. Next, consider Figs 6(a) and 7(b), both of which describe larger complete graph networks. Again, these two figures contain all 4 cases (a,b,c,d), and they all show a decrease in the fixation time as randomness increases. This is the result that we predict for complete graphs. The way one could intuitively understand the behavior of the N = 3 network of Fig 5 is to remember that this network is simultaneously a circle and a complete graph. It turns out that when it comes to deterministic or advantageous mutants, the N = 3 network behaves as if it were a circle. In the case of deterministic or advantageous wild types, it behaves as if it were a complete graph. For larger networks this "confusion" disappears and the results are consistent with our intuition.
The ideas presented here are illustrated further when we compare the mean fixation times for regular graphs with different degrees (see Fig 9). Define the variable z as half the number of neighbors of each node. For a regular graph of size N = 9, z = 1 corresponds to the circle with nearest-neighbor connections, z = 2 to the circle that also includes second-nearest neighbors, and so on, up to z = 4, which corresponds to the complete graph. As z increases, the mean fixation time decreases, and its dependence on σ switches from increasing to decreasing.
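The neighborhoods used for such regular graphs can be constructed as in the short sketch below (our own helper function, not code from the paper): each node is connected to its z nearest neighbors on each side, so z = 1 gives the circle and, for N = 9, z = 4 gives the complete graph.

```python
def ring_neighbors(N, z):
    """Neighbor lists of a regular ring graph in which every node is linked
    to its z nearest neighbors on each side (degree 2z)."""
    return [
        sorted({(i + d) % N for d in range(-z, z + 1) if d != 0})
        for i in range(N)
    ]


# N = 9: z = 1 is the circle, z = 4 is the complete graph (degree 8).
print(ring_neighbors(9, 1)[0])        # [1, 8]
print(len(ring_neighbors(9, 4)[0]))   # 8
```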
We have also studied mutant fixation as a function of the initial number of mutants. We have calculated the unconditional absorption time, which is the expected time to reach either of the two absorbing states (all mutants or all wild-type cells). The results are presented in S1 Text and are consistent with the rest of the findings: the unconditional mean absorption time grows with randomness for circles and decreases with randomness for complete graphs.
Discussion
In summary, we have studied several constant population models (two formulations of the Moran model and the Wright Fisher model), where the fitness values of cells depend not only on their types (mutant or wild type) but also on their spatial locations, representing environmental factors. Fitness values of mutants and wild type cells at different locations are drawn from fixed probability distributions and remain constant in time. We ask how mutant (conditional) fixation times are influenced by this type of environmental randomness.
Before we summarize the results, we want to emphasize the applications that motivated this study and to show that our approach is rooted in real biological problems. Let us think of a spatially distributed population, say, a number of plants in an expanse of soil. It would be quite natural to assume that some spots are more favorable and others less favorable. Examples of factors that contribute to fitness are sunlight, proximity of water, soil quality, the presence of rocks, etc. Now, let us suppose that a mutant reacts differently to the same variations in the environment. Assume that a plant species tolerates the presence of rocks in the soil poorly, and that a mutant plant is more tolerant of rocks but very sensitive to sunlight. Then the fitness values of the two subspecies on a spatial grid will be different, defined by the location and by the mutation status, as assumed in our model. In the extreme case, wild type fitness is only defined by the absence of rocks, and mutant fitness is only defined by sunshine. In this case, the two fitness value sets are uncorrelated. If both factors play a role, but to different degrees, in the plants' fitness values, then the fitness values will exhibit a degree of correlation, as described in this study.
Similar considerations apply to a large variety of biological settings. The effects of spatial structure and heterogeneity are important in biological models such as bacterial growth, where fitness can be a function of the spatial distribution of nutrients and of the microenvironment. In [33] it was demonstrated clearly that in biofilms there are significant spatial microscale heterogeneities, both in the chemical and in the physical parameters of the biofilm interstitial fluid. For example, complex patterns of water flow with different velocities and directions were observed throughout the biofilms. Further, heterogeneity in solute chemistry present within a biofilm was reported, including concentration gradients of metabolic substrates and products. It was also reported that microorganisms within the biofilms can and do respond to these local environmental conditions in a variety of ways, such as altering gene-expression patterns or physiological activities. Mutations arising and spreading in bacterial populations lead to high levels of genotypic and phenotypic heterogeneity in biofilms. It has been proposed that such diversification of bacterial populations may be considered an adaptation to the microscopic scale heterogeneity of the environment [34,35], as different phenotypes respond differently to changes in the environment. Diverse populations have been described as more robust; the "insurance hypothesis" states that the presence of diverse subpopulations increases the range of conditions in which the community as a whole can survive. From the theoretical perspective it is therefore essential to understand the evolutionary dynamics of mutations in an environment characterized by microscale heterogeneities.
Another important application of our theory is the dynamics of cancer cell populations. It is well known that solid cancers are characterized by a highly complex and heterogeneous microenvironment [36], which includes stroma, necrotic cells, blood vessels, etc. The distribution of oxygen and hypoxic regions is highly non-homogeneous [37], the nutrients are distributed in a complex fashion, and in general, no two tumors are the same [38]. Tumors have been compared to unhealed wounds [39], in that they produce large amounts of inflammatory mediators (cytokines, chemokines, and growth factors). These molecules attract the so-called tumor infiltrating cells that include macrophages, myeloid-derived suppressor cells, mesenchymal stromal cells, and TIE2-expressing monocytes. Together, these populations of non-malignant cells contribute to the formation of a rich and heterogeneous tumor microenvironment [40]. In order to understand selection and mutation dynamics of cancer cell populations in such an environment, it is not enough to restrict the modeling efforts to the classic problems, where all the wild type cells are exactly (phenotypically) the same and all the mutant cells have the same constant fitness value. In this study, we make a step towards a more realistic view of cancer dynamics, where fitness values of different genotypes are subject to microenvironmental variations.
In our study, we consider several different scenarios, where we vary assumptions on the probability distributions underlying the mutant and wild type fitness values. In particular, we investigate the cases where
• mutant and wild type fitness probability distributions are identical (focusing on the scenarios where they are symmetric or skewed);
• mutant and wild type fitness realizations come from the same distribution and can be correlated, uncorrelated, or anti-correlated;
• the fitness distributions are different, such that one of the types is deterministic and the other random, but the mean fitness values are the same;
• the fitness distributions are different, such that one of the types is advantageous (has a larger mean fitness).
All scenarios are investigated in the context of two types of networks: circles and complete graphs. We find that the results are very different for these two choices of the underlying network.
It turns out that environmental randomness has a significant effect on the conditional fixation time of mutants. A clear trend was observed when studying the behavior of mutants on the two different networks: randomness delayed fixation of mutants on circles (at least for values of N larger than a threshold), and it accelerated fixation on complete graphs. The reason is that for 1D-type structures (circles), "dead zones" that form randomly in the presence of environmental influences can significantly delay fixation by blocking the paths to fixation. For fully connected graphs, "lucky paths" form at random that facilitate fixation. These trends have been observed for all the scenarios above, except that for very small circular networks (N ≤ 5) some additional complexity was present. Otherwise, this pattern was universal and included scenarios with identical and different fitness probability distributions for mutants and wild type cells, in the presence of a deterministic type, and in the presence of advantageous/disadvantageous types.
Next, when studying the effects of correlations, we observed that in the case of both circles and complete graphs, the mean conditional fixation time was the largest in the fully correlated case and the smallest in the anti-correlated case.
Finally, if the probability distribution underlying the fitness realizations is skewed, large negative skewness values increase the effect of randomness on the mean conditional fixation time. This is observed in all the scenarios, whether the effect was accelerating or decelerating. Large negative skewness implies the existence of rare but very disadvantageous spots. These spots can serve as a serious impediment in the constrained circular geometry. In complete graphs, they do not present a problem because of the existence of multiple paths to fixation; at the same time having most spots with a slightly elevated fitness facilitates fixation.
The difference between fixation probabilities in circular and complete graphs exemplifies the general phenomenon of well-mixedness and its role in evolutionary mutant dynamics; it is further related to the role of dimensionality in system dynamics. Circular graphs are onedimensional systems, where the geometric or spatial constraints are the most rigid. The opposite scenario is presented by the complete graphs, which correspond to the mass-action or complete mixing assumption. The spatially arranged two-and three-dimensional grids are somewhere between these two extreme scenarios, with the three-dimensional arrangement being closer to mass-action. We have studied evolutionary dynamics in these different settings in several contexts. For example, it has been shown that inactivation of a tumor-suppressor gene (a two-hit evolutionary process in which the cells must first become less fit before becoming more fit) happens faster in 1D (a row of cells) [41,42], than in 2D (a layer), and this is in turn faster than in a fully mixed system with no spatial constraints [41,[43][44][45]. By contrast, in two-step processes in which the intermediate mutant confers a slight selective advantage, the relationship is the opposite, and a non-spatial, fully mixed environment promotes the fastest pace of evolution [45]. As was commented in [46], these phenomena seem less surprising if one notes how reminiscent they are of other fundamental laws of nature in which space dimensionality changes how things work, such as the different fundamental solutions of Poisson's equations in 1D and 2D.
We have presented results for both small N and large populations. We anticipate that there are interesting applications even for small N. For instance, an important biological application of the ring geometry is the model of a human colonic crypt, where the relatively small (of the order of ten cells or less) population of the stem cells is situated along circular bands [47], which can be viewed as cross-sections of three-dimensional crypts. In this context, fixation is referred to as monoclonal conversion. Although the details of the exact composition of the colonic crypt is still being debated, many researchers believe that the active stem cells occupy a narrow layer, and divide mostly symmetrically. The two division types, proliferation and differentiation, are mathematically equivalent to divisions and deaths in our models. The origins of colon cancer can be studied by examining selection dynamics of mutants in such a system. Very interesting and non-trivial is the connection between fixation time and fixation probability. These two measures of mutant success may be positively or negatively correlated for different graph structures. In particular, it appears that for a circle, randomness makes fixation longer, but it also makes it more likely; for the complete graph, randomness makes fixation both faster and more likely [22]. Counterintuitive properties of the fixation time in network structured populations were already noticed in [20]. This paper suggested that there was no obvious relation between the fixation probability and fixation time for a mutant on a network. Based on small networks, the authors analytically showed that: (i) Although the fixation probability was the same for all regular graphs (for example, circle and diamond), it would take different times for a single mutant to get fixated on these kinds of networks. (ii) The graphs that were amplifiers of selections (for example, star or line), increased the fixation probability of the mutant, but at the same time they could slow down the fixation process. | 11,146.8 | 2017-11-01T00:00:00.000 | [
"Mathematics"
] |
Quantum contextuality, causality and freedom of choice
Introduction
The theme of this issue is 'Quantum contextuality, causality and freedom of choice.'Contextuality, including non-locality as its special case, has long since become one of the central topics in foundations of quantum mechanics, also making inroads in computer science, psychology and linguistics.Contextuality (or lack thereof, non-contextuality) is a property of a system of measurements, broadly understood.The measurements have generally random outcomes, and, as a preliminary intuition, a system of measurements is contextual if one cannot find a joint distribution for their outcomes subject to certain constraints.A precise definition depends on how a system of measurements is represented, which can be done in several different ways.
The second notion in the theme, causality, considers measurements as events with explicit causal structure. This structure includes the dependence of measurement events on their causal past, as well as the dependence of measuring a property on other properties being measured in the same experimental run. These dependences mean that the No-Signalling or No-Disturbance principle usually adopted in applications of the notion of contextuality in physics must be modified, and the notion of contextuality itself extended correspondingly.
Figure 1. A schematic representation of the logic of the CHSH experiment (after Clauser et al. [1]). Two spin-1/2 particles are generated by a single source and move away from each other. The experiment involves two dichotomous measurements (with outcomes usually denoted {0, 1} or {−1, 1}): the left particle's spin is measured by Alice along one of two axes, 1 or 3, and the right particle's spin is measured by Bob along the axis 2 or 4. Thus the possible pairs of axes used in one experimental run are {1, 2}, {2, 3}, {3, 4}, {4, 1}.
The third notion of the theme, freedom of choice, may look like a philosophical concept outside the purview of rigorous science.However, freedom of choice and its possible violations play an important and surprisingly tangible role in the treatments of contextuality and causality.The intuitive meaning of the freedom of choice is that one's choice of experimental settings to study a system of measurements and the properties of this system to be measured are independent of each other.
The papers included in this issue deal with these three theme notions in different ways, using different terminology.This terminological multiplicity is often lamented by readers of contextuality literature.For instance, non-signalling is also known under a variety of other names: non-disturbance, parameter independence, simple locality, local causality, marginal selectivity, consistent connectedness, etc.The situation is further complicated by the fact that the authors preferring a particular term usually mention conceptual reasons or semantic nuances that make other terms wanting.To partially remedy this situation, we thought it might be useful to present different theoretical accounts of the same two well-known experimental paradigms that have played an important role in contextuality research.Their descriptions are given in the captions for figures 1 and 2.
(a) CHSH and KCBS in the sheaf-theoretic approach
A basic tenet of the sheaf-theoretic approach is to describe contextuality in a theory-neutral fashion, without presupposing Quantum Mechanics or any other theory.One can then consider which contextual systems can be realized according to Quantum Mechanics, or can arise in some other way.
There are two levels of description: the scenario and the empirical model. The scenario gives the 'shape' of the system, or in logical terms the type. It is specified as S = (X, O, C), where
- X is a set of measurements;
- O = {O_x}_{x∈X} gives the set of possible outcomes O_x for each measurement x;
- C is a family of subsets of X whose union is X, representing the contexts, or compatible sets of measurements (those which can be performed together).
In the case of the CHSH system under consideration, we have X = {1, 2, 3, 4}, O_x = {−1, +1} for every x, and C = {{1, 2}, {2, 3}, {3, 4}, {4, 1}}. Thus there is a very direct correspondence with the items given in the concrete description of the experiment.
The next level of description is that of an empirical model for a given scenario, which describes the actual behaviour of a specific system.This itself can be seen as having two levels.
Firstly, in a given context C ∈ C, there is the representation of the possible joint outcomes of performing the measurements in that context. Such a joint outcome is represented by a function which assigns, to each x ∈ C, an outcome in O_x. In the case of the CHSH example, we can list the set of such functions; e.g. for the context {1, 2} there are four of them: {1 → +1, 2 → +1}, {1 → +1, 2 → −1}, {1 → −1, 2 → +1}, and {1 → −1, 2 → −1}. We can similarly list the joint outcomes for the other contexts. We write E(C) := ∏_{x∈C} O_x for the set of such functions for a context C.
An important operation is that of restriction from a larger context to a smaller one. For example, if we have the joint outcome {2 → +1, 3 → −1} for the context {2, 3}, then we can restrict this to {2 → +1} for the sub-context {2}. This is important because it allows us to compare the behaviour of the system in different contexts at the points where they overlap. For example, we can also restrict the joint outcome {1 → +1, 2 → +1} to the sub-context {2}, and we see that this also yields the same outcome {2 → +1}. We shall use the notation s|_C, where s ∈ E(C′) and C ⊆ C′, for the restriction of s to C. The E construction only allows us to speak of deterministic outcomes of measurements. To represent more general behaviours, we consider distributions over events. Given a context C, we consider DE(C), the set of distributions over joint outcomes for the measurements in the context. In the CHSH example, let us consider the context C = {1, 2}. We can exhibit a distribution in DE(C) as a table (not reproduced here). The context is shown on the left-hand side of the table. By the usual convention, measurement 1 is associated with Alice, and 2 with Bob. The outcomes are used to label the columns; for brevity we write +− instead of +1 −1, etc. Thus the second entry in the main part of the table says that the joint outcome {1 → +1, 2 → −1} is assigned probability 3/8 by the distribution. Note that the probabilities sum to 1, as they should for a normalized distribution.
We can now say what an empirical model is.It specifies, for each context C ∈ C, a distribution e C ∈ DE(C).We can display an empirical model for the CHSH experiment as follows: Each row of the table gives a context, and the distribution on joint outcomes for the context specified by the model.The reason for the strange-looking arrangement of the contexts will be explained in the final paragraph.This particular empirical model can be realized in quantum mechanics, using a suitable entangled state, and appropriate choices of angles for the spin measurements.Moreover, such realizations have been experimentally verified.
To ensure compatibility with relativity, or for other reasons, a standard requirement made of empirical models is that they satisfy a condition variously known in different settings as nonsignalling, non-disturbance, etc.We prefer to view it as a local consistency condition.Each context provides a partial view or window on the system, and local consistency is the requirement that these views agree on their overlap.This is expressed in terms of the restriction maps we have already mentioned.For the deterministic case of single assignments of joint outcomes, where we have an assignment s C ∈ E(C) for each context C ∈ C, the local consistency condition is expressed as follows: for each pair of contexts C and C , s C | C∩C = s C | C∩C .Thus two functions defined on different sets of variables are locally consistent if they assign the same outcomes to each of the variables in the intersection of their domains.
The same idea can be carried over to probabilistic models. Given contexts C ⊆ C′ and a distribution e_{C′} ∈ DE(C′), we can define the restriction e_{C′}|_C ∈ DE(C). This can be understood as the marginal of the distribution e_{C′}, where we project onto the smaller context C. The local consistency condition for empirical models is then expressed in exactly the same fashion as in the deterministic case: an empirical model {e_C}_{C∈C} is locally consistent if, for each pair of contexts C and C′, e_C|_{C∩C′} = e_{C′}|_{C∩C′}. We can check that the CHSH empirical model displayed above satisfies this property. We now come to how contextuality, or in this case, non-locality, is defined in our approach. We have just defined the local consistency property: pieces of the model fit together locally. From a classical point of view, these views or snapshots afforded to us by contexts are views of a single underlying reality, which is independent of our observations of it. How can this 'single underlying reality' be expressed? By a single distribution d ∈ DE(X) on all the variables, without regard to the compatibility of the corresponding measurements. This distribution should allow us to recover all the observable phenomena from the empirical model {e_C}_{C∈C}. This is expressed by saying that for all contexts C, the marginal d|_C = e_C. We can think of the global distribution as 'having all the answers'. Whichever question (context) we ask it, it provides answers indistinguishable from those we would observe if we performed the measurements in that context. The existence of such a global distribution is equivalent to the more traditional formulation in terms of hidden variables.
Thus we say that an empirical model is non-contextual, or globally consistent, exactly if there exists such a global distribution. If no such distribution exists, then we say that the model is contextual. Thus contextuality arises exactly when we have a system which is locally consistent, but globally inconsistent. The CHSH system is famous because it gives an example of contextuality (more specifically, non-locality). It can be shown, using the CHSH inequality, that no global distribution satisfying the above property can exist, so the system is globally inconsistent.
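The existence of a global distribution can be checked directly as a linear feasibility problem: a global distribution is a probability vector over all 2^4 deterministic assignments of outcomes to the four measurements, constrained to reproduce every contextual marginal. The sketch below performs this check in Python with scipy; the empirical model used is an assumed quantum-realizable one at the Tsirelson point (unbiased marginals and correlators ±1/√2), not necessarily the exact model described in the text.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Measurements 1, 3 belong to Alice and 2, 4 to Bob; outcomes are +1/-1.
# Assumed correlators of a Tsirelson-point model (illustrative values):
contexts = [(1, 2), (2, 3), (3, 4), (4, 1)]
E = {(1, 2): 1 / np.sqrt(2), (2, 3): 1 / np.sqrt(2),
     (3, 4): 1 / np.sqrt(2), (4, 1): -1 / np.sqrt(2)}

def p(a, b, ctx):
    """Joint outcome probability with unbiased marginals and correlator E[ctx]."""
    return (1 + a * b * E[ctx]) / 4

# Global assignments: all functions {1,2,3,4} -> {+1,-1} (16 of them).
assignments = list(itertools.product([1, -1], repeat=4))

# Marginal constraints: for each context and joint outcome, the marginal of the
# global distribution must equal the empirical probability; plus normalization.
A_eq, b_eq = [], []
for (x, y) in contexts:
    for a, b in itertools.product([1, -1], repeat=2):
        A_eq.append([float(g[x - 1] == a and g[y - 1] == b) for g in assignments])
        b_eq.append(p(a, b, (x, y)))
A_eq.append([1.0] * len(assignments))
b_eq.append(1.0)

res = linprog(c=np.zeros(len(assignments)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(assignments), method="highs")
print("global distribution exists (model non-contextual)?", res.success)
```

For this model the linear program is infeasible, confirming contextuality (non-locality); replacing the correlators with ones satisfying the CHSH inequality would make it feasible.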
A very similar account can be given for the KCBS system. In this case, the scenario is S = (X, O, C) with five measurements X = {1, 2, 3, 4, 5}, outcomes O_x = {−1, +1}, and contexts C = {{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 1}}. To make the contextuality argument for this system, it turns out that only one outcome need be considered, say +−. There is a quantum realization of an empirical model, using a qutrit state and suitably chosen observables, which witnesses contextuality for this system. This empirical model {e_C}_{C∈C} assigns probabilities to the +− outcome for each context (the specific values are not reproduced here). These assignments violate the KCBS inequality, showing that there is no global distribution for this empirical model, so the system is globally inconsistent/contextual. One should note an important difference between the CHSH and KCBS systems, which is visible at the level of the scenarios. At first sight, they look similar, one being a 4-cycle and the other a 5-cycle. However, the CHSH system can be decomposed into two sets of measurements, {1, 3} and {2, 4}, such that {1, 3} can be assigned to Alice, and {2, 4} to Bob. The contexts are exactly those that arise by choosing one Alice measurement and one Bob measurement. This fits with the physical picture where Alice and Bob are spatially separated, and each performs one measurement. No such decomposition is possible for the KCBS system. The corresponding experiment is performed on a single system at one location.
(b) CHSH and KCBS in the graph-theoretic approach
In physics, a measurement scenario is characterized by a set of observables, the outcomes of each observable, and the relations of compatibility between the observables.Two observables x and y are compatible (or jointly measurable or co-measurable) if they have a common refinement z.That is, if there exists an observable z such that x and y can be jointly measured by measuring z.A context is a set of compatible observables.For a given scenario, a correlation (behaviour or empirical model) is a set of probability distributions, one for each context.
The graph-theoretic approach to correlations focuses on two types of scenarios: -Kochen-Specker (KS) scenarios, defined as those in which all the measurements are ideal (i.e.minimally disturbing-they only disturb incompatible observables-and repeatablethey yield the same outcome when they are measured again on the same physical system).-Bell scenarios, defined as those in which there are two or more spatially separated physical systems and all the observables are local, i.e. refer to only one of the physical systems.
Correlations in KS and Bell scenarios are non-disturbing since the marginal probabilities for the outcomes of any subset of compatible observables do not depend on the context.By appealing to classical physics, in KS and Bell scenarios, it is justified to assume that there exists a single joint probability distribution that, by marginalization, produces the probability distributions of the contexts.If this is the case, then the correlations are said to be KS non-contextual or Bell local, depending on the type of scenario.
The set of KS non-contextual (Bell local) correlations for a scenario defines a polytope called the non-contextual (local) polytope.Each of its facets can be expressed as a weighted sum of probabilities of some events equals a constant.An event is characterized by the results of the measurements of the observables of a context.Each of the facets defines a tight KS non-contextuality (Bell) inequality and the correlations that violate one of them are called KS contextual (Bell non-local) correlations.
The compatibility graph of a scenario represents observables by nodes and compatibility relations by edges.A node may be divided into d parts to indicate that the corresponding observable has d outcomes.Vorob'ev's theorem states that KS contextuality and Bell non-locality are impossible in any scenario whose compatibility graph does not contain induced cycles of size larger than three.As an illustration, figure 3a1,b1 shows the compatibility graphs of the CHSH and KCBS scenarios, respectively.
Two events are mutually exclusive if they contain an observable that has different outcomes in the two events.The exclusivity graph of a given KS or Bell scenario represents the events of that scenario by nodes and their exclusivity relationships by edges.As an illustration, figure 3a2,b2 shows the exclusivity graphs of the events of the CHSH and KCBS scenarios, respectively.
The interest in exclusivity graphs stems from the observation that the set of non-contextual (quantum) correlations for a KS or Bell scenario is a subset of the stable set polytope (theta body) of the exclusivity graph.Since any non-contextuality and Bell inequality can be expressed as a weighted sum of probabilities of a subset of events, the non-contextual (local) bound of a non-contextuality (Bell) inequality is the independence number of the vertex-weighted induced subgraph G of the exclusivity graph of these events.Similarly, the maximum quantum value is upper bounded by the Lovász theta number of G.As an illustration, figure 3a3,b3 shows the exclusivity graphs associated with the CHSH Bell inequality and the KCBS non-contextuality inequality, respectively, which are the only non-trivial tight non-contextuality inequalities in the CHSH and KCBS scenarios, respectively.
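As a concrete, simplified illustration of these graph quantities, the sketch below computes the independence number by brute force and the Lovász theta number via its standard semidefinite program for a plain 5-cycle (the pentagon), using cvxpy. Note that this pentagon is a smaller graph than the 10-event exclusivity subgraph H discussed above; for the pentagon the two numbers are 2 and √5 ≈ 2.236, which exhibits the same independence-number-versus-Lovász-theta gap that underlies the KCBS argument. The library choice and SDP formulation are assumptions of this sketch, not tools used by the authors.

```python
import itertools
import numpy as np
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]          # the 5-cycle (pentagon)

# Independence number by brute force: largest set of pairwise non-adjacent nodes.
def independent(subset):
    return all(not (i in subset and j in subset) for i, j in edges)

alpha = max(len(s) for r in range(n + 1)
            for s in itertools.combinations(range(n), r) if independent(set(s)))

# Lovasz theta via the standard SDP:
# maximize sum_{ij} X_ij  s.t.  X is PSD, trace(X) = 1, X_ij = 0 on edges.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
constraints += [X[i, j] == 0 for i, j in edges]
theta = cp.Problem(cp.Maximize(cp.sum(X)), constraints).solve()

print("independence number alpha(C5):", alpha)                     # expected: 2
print("Lovasz theta(C5): %.4f (sqrt(5) = %.4f)" % (theta, np.sqrt(5)))
```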
(c) CHSH and KCBS through negative probabilities
Quantum contextuality can be explained through the Hilbert space formalism.However, there is an alternative explanation using a single distribution approach.As mentioned above, contextuality implies lack of a single probability distribution over all measurements.However, it is still possible to construct a single joint quasi-probability distribution for contextual scenarios.Such quasi-probabilities admit negative values, hence it is customary to simply call them negative probabilities.Therefore, if a system exhibits contextuality, then the corresponding joint distribution is negative.What is important, these negative probabilities are unobservable since in experiments one is only able to detect marginal distributions, which are proper non-negative probability distributions.
For example, in the CHSH scenario discussed above a joint distribution is of the form p(a, b, c, d|A, B, C, D) (p(a, b, c, d) to simplify notation), where A, B, C and D correspond to the axes 1, 2, 3 and 4 from figure 1, respectively, and a, b, c, d denote the corresponding measurement outcomes.
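A quasi-probability representation of the kind described here can also be computed numerically: one looks for a signed joint distribution over all 16 deterministic assignments that reproduces the observed contextual marginals, typically minimizing the total negativity. The sketch below does this as a linear program for an assumed Tsirelson-point CHSH model (the same illustrative correlators as in the earlier sketch); the optimization criterion and numerical values are assumptions of this sketch, not taken from the text.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

contexts = [(1, 2), (2, 3), (3, 4), (4, 1)]
E = {(1, 2): 1 / np.sqrt(2), (2, 3): 1 / np.sqrt(2),
     (3, 4): 1 / np.sqrt(2), (4, 1): -1 / np.sqrt(2)}       # assumed correlators
assignments = list(itertools.product([1, -1], repeat=4))
m = len(assignments)                                         # 16 global assignments

# Equality constraints on the quasi-distribution q: reproduce all marginals, sum to 1.
A_q, rhs = [], []
for (x, y) in contexts:
    for a, b in itertools.product([1, -1], repeat=2):
        A_q.append([float(g[x - 1] == a and g[y - 1] == b) for g in assignments])
        rhs.append((1 + a * b * E[(x, y)]) / 4)
A_q.append([1.0] * m)
rhs.append(1.0)

# Variables: [q (signed), t (bounds on |q|)]; minimize sum(t) = L1 norm of q.
A_eq = np.hstack([np.array(A_q), np.zeros((len(A_q), m))])
A_ub = np.vstack([np.hstack([np.eye(m), -np.eye(m)]),        #  q - t <= 0
                  np.hstack([-np.eye(m), -np.eye(m)])])      # -q - t <= 0
res = linprog(c=[0.0] * m + [1.0] * m,
              A_ub=A_ub, b_ub=np.zeros(2 * m),
              A_eq=A_eq, b_eq=rhs,
              bounds=[(None, None)] * m + [(0, None)] * m, method="highs")
q = res.x[:m]
print("minimal total negativity:", round((res.fun - 1) / 2, 4))
print("most negative quasi-probability:", round(q.min(), 4))
```

For a contextual model the minimal negativity is strictly positive; for a non-contextual one it is zero, and the solver returns an ordinary probability distribution.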
(d) CHSH and KCBS in the Contextuality-by-Default approach
In the theory called Contextuality-by-Default (CbD), the scenarios CHSH and KCBS can be represented, respectively, by the following systems of double-indexed random variables: In each variable R c q , the value of q is referred to as the content of the variable (in this case, the measurement defined by the axis chosen), and the value of c indicates the context of the variable.In our systems (1.6), all the variables are binary, with values we will label −1 and 1.For any context c, the variables (R c q , R c q ) form a jointly distributed bunch of variables.By contrast, variables belonging to different contexts are stochastically unrelated, i.e. they possess no joint distribution (e.g.there is no joint probability of R 1 1 = 1 and R 2 3 = 1).In particular, this holds for any distinct variables with the same content, {R c q , R c q }.In CbD, the set of all variables sharing a content q is said to form a connection (the intuition being that they connect stochastically unrelated 'islands' of different bunches).
CbD uses several other terms derived from the term connection. Thus, if the random variables within each connection are identically distributed, the system is called consistently connected. Another use of connectedness relates to couplings of systems of variables. A coupling of a system in (1.6) is a system of identically double-indexed random variables S^c_q such that its bunches have the same distribution as the corresponding bunches in the system; unlike in the latter, however, all variables in the coupling, even across contexts, are jointly distributed. A coupling is called (multi)maximally connected if, in each of its connections, any two random variables coincide with the maximal possible probability. A system is considered non-contextual if it has such a (multi)maximally connected coupling. Otherwise the system is contextual.
If the system is consistently connected (in CbD, it does not have to be), the maximal probability in the definition of non-contextuality is one.In this case, we can simply say that the systems in (1.6) are non-contextual if they have couplings of the following structure: The existence or non-existence of a multimaximal coupling for any system with the finite number of variables is effectively established by means of linear programing.In the simplest cases, such as the CHSH and KCBS experiments, one can also derive a set of inequalities (referred to as Bell-type inequalities) that are satisfied if and only if the couplings in question exist.The Bell-type inequalities for these systems are as follows: max where • • • denotes expected value, ⊕1 and 1 are clockwise and counterclockwise shifts on the dial with 1, 2, . . ., N (with N = 4 in CHSH and N = 5 in KCBS), and ι c = ±1.The systems in (1.6) are not the only way to represent the CHSH and KCBS experiments.CbD allows for other ways of defining contents and contexts, some of which, however, are less interesting for contextuality analysis.There is, however, a representation that is contextually equivalent to (1.6).We show it here for the CHSH experiment only q ‡ = 11 q = 21 q = 22 q = 32 q = 33 q = 43 q = 44 q = 14 R ‡ CHSH (1.9) Here, the new contents q ‡ and new contexts c ‡ are defined by relating them to the contents q and contexts c in R CHSH .Namely, q ‡ = ij corresponds to the content q = i and context c = j; the bunch in context c ‡ = •j is a distributional copy of the bunch in the context c = j; and the two variables R i• ij and R i• ij in a context c ‡ = i• are distributional copies of, respectively, R j i and R j i in R CHSH .In the latter type of context, c ‡ = i•, the joint distribution is created by making the two variables coincide with the maximal possible probability.The system R ‡ CHSH is called the consistification of the system R CHSH , because R ‡ CHSH is (strongly) consistently connected even if R CHSH is not, although they are otherwise interchangeable in all considerations regarding contextuality.
Figure 3. (a1) Compatibility graph of the CHSH scenario.Nodes represent observables and edges connect compatible observables.Nodes are divided into two parts to indicate that each observable has two outcomes.(a2) Exclusivity graph of the 16 events of the CHSH scenario.Here, nodes represent events.ab|xy is the event in which outcomes a and b are obtained when (compatible) observables x and y are jointly measured.Here, edges connect mutually exclusive events.The colours of the edges indicate the reason of the exclusivity.For example, a red edge indicates that the exclusivity is due to the fact that the observable represented in red (observable 1) has different outcomes in the two events.(a3) Exclusivity subgraph G of the eight events whose sum of probabilities S appears in the CHSH inequality.The CHSH inequality states that, for every KS non-contextual (Bell local) theory, S ≤ 3, where 3 is the independence number of G.The maximum value of S in quantum theory is 2 + √ 2 ≈ 3.414, which is the Lovász theta number of G. (b1) Compatibility graph of the KCBS scenario.(b2) Exclusivity graph of the 20 events of the KCBS scenario.(b3) Exclusivity subgraph H of the 10 events whose sum of probabilities K appears in the KCBS inequality.The KCBS inequality states that, for every KS non-contextual theory, K ≤ 4, where 4 is the independence number of H.The maximum value of K in quantum theory is 2 √ 5 ≈ 4.472, which is the Lovász theta number of H. | 5,261.2 | 2024-01-29T00:00:00.000 | [
"Philosophy",
"Physics"
] |
High-efficiency method for recycling lithium from spent LiFePO4 cathode
The extraction of Li from the spent LiFePO4 cathode is enhanced by selective removal: HCl and NaClO act together to dissolve the Li+ ion while Fe and P are retained in the structure. Several parameters, including the dosage and drop acceleration of HCl and NaClO, reaction time, reaction temperature, and solid-liquid ratio, were tested for their effects on lithium leaching. The total yield of lithium reaches 97% after an extraction step in which lithium is recovered from the precipitated mother liquor using an appropriate extraction agent, a mixture of P507, TBP, and NF. The method also significantly reduces the use of acid and alkali, improving the economics of recycling. Changes in the composition, morphology, and structure of the material during the dissolution process are characterized by inductively coupled plasma optical emission spectrometry, scanning electron microscopy, X-ray diffraction, particle size distribution analysis, and moisture analysis.
Introduction
LiFePO4 is one of the main cathode materials in rechargeable lithium-ion batteries in China [1]. However, one drawback of LiFePO4 from a recycling standpoint, compared with other cathode materials such as LiCoO2 [2], is that it does not contain elements of high recovery value such as Mn, Ni, and Co [3]. It is estimated that China will have 9,400 tons of scrap LiFePO4 batteries by 2021 [4]. The lithium content accounts for about 200 tons, and projections of possible lithium scarcity point to clear environmental and economic benefits from lithium recycling in the future [5].
Currently, there are several methods for treating waste cathode materials [6], including the fire method, the wet method, and a combined method [7]. Many companies, such as Toxco Inc. (USA), SONY Corp. (Japan), and Umicore (Belgium), have used high-temperature methods to recycle lithium battery cathode material. These methods are rather inefficient for recovery, as most of the lithium is lost in the metallurgical slag during refinement and is not recycled [8,9]. These high-temperature processes have high energy consumption and produce waste gas and waste residue pollution, which has pushed the industry and academic research toward wet extraction of lithium [10].
Through a direct regeneration process, Li et al. [11] obtained a high-purity cathode material mixture (LiFePO4 + acetylene black), an anode material mixture (graphite + acetylene black), and other by-products (shell, Al foil, Cu foil, electrolyte solvent, etc.) from scrapped LiFePO4 batteries. The recycled cathode material mixture, without acid leaching, was then regenerated directly with Li2CO3. Batteries using such recycled LiFePO4 face many problems, including safety and cycling stability. Based on the concept of zero waste, Dutta et al. [12] proposed a complete process for the recycling of LIBs, in which the metals and materials were recovered as value-added products. Although the leaching rate of Li+ ions was 99.9%, the overall recovery of lithium was not high, the consumption of the oxidant H2O2 was relatively large, and battery-grade Li2CO3 could not be obtained. Zhang et al. [13] studied the selective recovery of lithium from spent LiFePO4 batteries. With the help of sodium persulfate (Na2S2O8), LiFePO4 was oxidized to FePO4, and more than 99% of the Li in the material could be selectively leached in 20 min at room temperature. However, sodium persulfate (Na2S2O8) is strongly oxidizing, toxic, and expensive, so this method is difficult to scale up to industrial production.
Yang et al. [14] also proposed a method to extract lithium from spent LiFePO 4 cathode; the wet extraction of Li from LiFePO 4 cathode typically involves H 2 SO 4 and H 2 O 2 to dissolve the cathode material, adjusting pH to precipitate Fe 3+ , filtering the precipitate, and precipitating Li with Na 2 CO 3 solution. This approach is marred by relatively low lithium recovery, excessive use of acid and alkali, and high energy consumption with relatively poor Li 2 CO 3 purity. Economically, the market value is limited without a more efficient method for the recovery of lithium [15].
The method shown herein is based on the wet process [16]. The refined approach studies the structural change of LiFePO 4 by hydrometallurgy to achieve preferred dissolution of lithium, thereby minimizing the reaction volumes and retaining the solid structure of FePO 4 [17].
Careful control of HCl and NaClO additions establishes a chemical equilibrium, and lithium is efficiently extracted from the structure with a higher yield [18]. This dramatically reduces the consumption of acid and alkali [19,20]. Recycling lithium of the mother liquor by extraction technology reduces energy consumption [21]. The unique reverse sedimentation process prepares batterygrade Li 2 CO 3 for the increased economic benefit [22,23]. Keeping in view of stringent environmental regulations, limited natural resources, and energy crisis, adopting recycling will not only protect the environment and pacify the gap between demand and supply but also conserve the natural resources [24,25].
Materials
The waste batteries used in this experiment were provided by the Lithium Battery Power Research Institute of Jiangxi University of Science and Technology. All batteries were discharged in saturated Na2SO4 solution and disassembled by hand [26]. The removed positive electrode was washed with DMC (dimethyl carbonate), crushed mechanically with a grinder, and screened to obtain LiFePO4 material, with a mass fraction of lithium in the LiFePO4 positive electrode material of 3.96% [27]. All the chemicals used in the experiment were of analytical purity, and the solutions were prepared with ultrapure water.
Lithium preferential extraction
LiFePO 4 was mixed with water, adding HCl and NaClO slowly over time with heated stirring and careful pH monitoring. Controlled drip rates for HCl and NaClO prevent the production of Cl 2 gas [28,29].
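As a rough guide to the reagent dosing implied by this procedure, the short calculation below estimates the moles of NaClO and HCl needed for a given mass of cathode powder, assuming the 3.96% Li mass fraction quoted above and the optimal molar ratios reported later in the paper (NaClO:Li ≈ 1.0, HCl:Li ≈ 0.8). The example powder mass and stock concentrations are illustrative assumptions, not values from the paper.

```python
# Hypothetical reagent-dosing estimate for the preferential Li extraction step.
M_LI = 6.94            # g/mol, molar mass of lithium
powder_mass = 100.0    # g of LiFePO4 cathode powder (example amount)
li_fraction = 0.0396   # Li mass fraction quoted for the cathode material

n_li = powder_mass * li_fraction / M_LI   # mol of Li in the feed
n_naclo = 1.0 * n_li                      # oxidant, ~1:1 with Li
n_hcl = 0.8 * n_li                        # acid kept below 1:1 so Fe(III) stays in the solid

c_naclo, c_hcl = 1.0, 2.0                 # assumed stock concentrations, mol/L
print(f"Li in feed:  {n_li:.3f} mol")
print(f"NaClO: {n_naclo:.3f} mol -> {1000 * n_naclo / c_naclo:.0f} mL of {c_naclo} M stock")
print(f"HCl:   {n_hcl:.3f} mol -> {1000 * n_hcl / c_hcl:.0f} mL of {c_hcl} M stock")
```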
High-efficiency lithium recovery 2.3.1 Purification
The pH of lithium-containing liquid for a certain volume was adjusted to a range of 9-10 with alkali, stirred at 85℃ for 1 h to remove impurities of Fe, Cu, Al, and others in the solution [30]. After filtration, the lithium-containing liquid was purified further by the addition of a sodium salt at a mass ratio of 0.3% and EDTA at a mass ratio of 0.1% as a complexing agent to adjust Ca 2+ to 60 μg mL −1 [31].
Precipitation of lithium
The lithium solution was heated to 70-80℃, and Na 2 CO 3 solution was added to the reaction dropwise to produce sediments of Li 2 CO 3 [32]. This precipitate was vacuum filtered, washed with high purity water several times, and dried under vacuum to obtain high purity battery grade Li 2 CO 3 [33].
Extraction
The solubility of lithium is 1.2-2.0 g L −1 in the precipitated mother liquor and accounts for 20-25% of the total lithium content [34]. Lithium extraction was performed using a mixed extraction agent (P507:TBP:NF = 10:2:1) in this experiment. The residual lithium was less than 50 μg L −1 in the final remaining liquid, giving a recovery rate of 98% [35,36].
Analytical equipment and characterization of materials
The solid samples were digested in concentrated hydrochloric acid for analysis using inductively coupled plasma optical emission spectrometry (ICP-OES, Thermo Fisher ICAP 7400). X-ray diffraction (XRD, D8 Advance, Bruker, Germany) was used to characterize the relative crystal phases during the experimental process and identify any structural changes in the samples in the reaction [37]. Scanning electron microscope (SEM) (Flex SEM 1000, Hitachi, Japan) was used to image the morphology of the products [38,39]. Particle size and the water content of the lithium products were measured by a particle size analyzer and moisture meter to determine if the Li 2 CO 3 product meets the standards for cathode material synthesis [40,41]. The addition of NaClO oxidizes Fe 2+ → Fe 3+ and facilitates the extraction of lithium from its structure. From simple stoichiometry, the moles of NaClO should equal the moles of lithium extracted, and the moles of HCl should be less than the moles of lithium if Fe 3+ remains were intact. The rate of addition for HCl and NaClO should maintain a pH of 1.5-2; otherwise, Fe 3+ will leach into the solution. Figure 1(a) shows the influence that the amount of NaClO and HCl has on the lithium dissolution rate, and "oxidant/Li and acid/Li molar ratio" expresses the ratio of NaClO and HCl to the molar amount of lithium, respectively. The concentration of Li + ion in solution was measured by ICP. The calculated recovery rate of lithium was 99% using excess NaClO. The recovery rate was 100% with excess HCl; however, unwanted Fe 3+ also dissolved. Figure 1(b) clearly shows that when the molar ratio of HCl and Li is greater than 0.8 (pH ∼ 0.5), the solution rate of Fe 3+ increases linearly and further introduces impurities during the extraction of lithium. "Fe leaching efficiency" refers to the mass of the dissolved Fe ion to total iron in LiFePO 4 , again measured in solution by ICP. Figure 1(c) shows the influence of temperature and time on extraction. Exceeding 60 minutes, the temperature has no effect and the leaching effect of lithium is the best. Figure 1(d) shows that with optimal conditions, the size of the solid-liquid ratio has little influence on extraction above a 1:1 ratio. Figure 1(e) shows the influence of pH during the experiment by HCl addition and the effect on lithium yield. It can be seen from Figure 1(f) that when pH ≤ 2.0 (the value measured with a pH strip and related to the amount of HCl added), marking a small amount of HCl, the lithium yield is greater than 99% [40]. Figure 2 shows the diffraction patterns phase after the extraction of lithium. LFP refers to LiFePO 4 , LFP-1 refers to the slag that NaClO is used only 70% in the process of extracting lithium, LFP-2 and LFP-3 refers to the slag that the addition amount of NaClO and HCl is optimal. The two reflections (121) and (131) are intense peaks seen in LFP, whereas the leached samples show a significant reduction or absence. The outlined sections A and B of complete patterns correspond to Figure 2(a and b). There are new peaks near the (131) peaks in the XRD pattern of LFP-1 and still retains intensity at (131). However, there are no (131) peaks in the XRD pattern of LFP-2/3.
Results and discussion
These patterns indicate that a small amount of LiFePO 4 remains in the LFP-1 but is completely nonexistent in the LFP-2/3 samples. It can be seen that the structure of LiFePO 4 changed in the process of preferential extraction of lithium in the XRD pattern. The main peak (011) and (111) of LiFePO 4 did not change with the dissolution of lithium, indicating that only part of the raw material structure changed in the optimal solution process to realize the preferential extraction of lithium. Figure 3(a-f) shows the morphology of the LiFePO 4 cathode material and FePO 4 residues. It comprises many primary and secondary particles (Figure 3a), and magnified images in Figure 3(b and c) show spherical agglomerates and much smaller spherical primary particles. Figure 3d shows the particle morphology of post-extraction FePO 4 residues. There is an absence of the previously seen secondary agglomerated particles. In Figure 3(e and f), the primary particles are much smaller irregular shapes with some agglomeration. The extraction caused the secondary particles of LiFePO 4 to break apart, possibly explained by the structural stress as Li + ion is extracted and the crystal structure has volumetric changes from LiFePO 4 → FePO 4. Figure 4 shows the XRD diagrams of battery-grade Li 2 CO 3 products; compared with the standard card of Li 2 CO 3 , each peak of the two products is perfectly consistent with the standard peak, indicating that Li 2 CO 3 -1 and Li 2 CO 3 -2 are both Li 2 CO 3 products. Figure 5 shows the morphology of battery-grade Li 2 CO 3 -1 and Li 2 CO 3 -2 of Figure 4. The lithium-extracted solution was reacted with NaCO 3 after impurity removal. The dropwise addition of the Na 2 CO 3 solution permits particle ripening over nucleation to produce larger robust particles. This can be seen from the two batches of batterygrade Li 2 CO 3 products in Figure 5 (Li 2 CO 3 -1 and Li 2 CO 3 -2). Figure 6 plots the impurity fractions in the Li 2 CO 3 products measured by the ICP analysis. Line A represents the impurity standard for battery-grade Li 2 CO 3 , and Line B and Line C represent the impurity content of two batches of Li 2 CO 3 products respectively. Table 1 presents the purity, moisture content, and particle size ranges for battery-grade Li 2 CO 3 compared with the products from our Li extraction and precipitation methods. The products meet or exceed the standards for Li 2 CO 3 in battery material synthesis.
The lithium concentration ranged from 1.2 to 2.0 g/L in the mother liquid, and this concentration accounts for 20-25% of the total lithium. The extraction of lithium from the mother liquid was performed using a mixed extraction agent (P507:TBP:NF = 10:2:1). Several mother solutions (numbers 1-6) with different lithium content were selected [41], and the residual lithium after extraction was measured with ICP, as presented in Table 2. The remaining lithium content was less than 50 µg/mL in the mother liquor, yielding a 98% recovery [42]. Combining these results with a leaching efficiency near 99% for lithium from LiFePO 4 , the total yields for lithium recycling averaged near 97% [43].
Conclusion
A refined method has been tested for lithium extraction from the olivine structure of LiFePO4 by controlling the addition of NaClO and HCl. Molar ratios of NaClO and HCl to lithium of 1 and 0.8, respectively, produced the best leaching with minimal impurities. Compared with the traditional total-dissolution process, the total acid and alkali needed are reduced, improving the economic and environmental factors of the recycling process. A pH of 1.5-2 gave an optimal dissolution recovery rate (99%) and limited the dissolution of Fe3+ to below 2%.
High-quality battery-grade Li 2 CO 3 was prepared by extraction using a mixed extraction agent (P507:TBP:NF = 10:2:1) with a recovery that gave a total yield of lithium above 97%. The economic benefits such as the efficiency of energy savings and consumption reduction also increased. This modified method showed an obvious advantage compared to traditional lithium extraction processes. It was helpful to establish recycled lithium as a viable pipeline for future resource and energy demands. | 3,431.4 | 2020-01-01T00:00:00.000 | [
"Materials Science"
] |
Neural Network Quantum States Analysis of the Shastry-Sutherland Model
Introduction
The neural network quantum states (NQSs) [1][2][3][4][5][6][7][8][9][10] have recently emerged as a promising alternative to common trial states in variational Monte Carlo (VMC) studies of quantum many-body problems, especially lattice spin models.This research is driven by the fact that neural networks (NNs) are universal function approximators [11] as well as by the astonishing progress in the field of machine learning (ML) in general.These advancements already led to a number of effective ML applications suitable for the basic research of quantum systems and technologies [12][13][14][15].For example, even simple NQSs, such as the restricted Boltzmann machine (RBM), allow us to investigate the ground-state properties of various quantum spin models.It was already shown that RBM can outperform standard trial states in the variational search of the ground-state energies of the antiferromagnetic Heisenberg model [1].Very promising results have also been obtained for frustrated spin systems, such as the J 1 − J 2 model [16][17][18][19].
Here NQSs can be trained to capture the nontrivial sign structure of the ground state and in some cases have even achieved state-of-the-art accuracy [20] that delivers cutting edge results.Nevertheless, two-dimensional frustrated quantum spin models continue to be a challenge for NQSs as well as for other methods [21].For example, it is not clear yet how to choose an optimal neural network architecture for a particular frustrated system, how important is the role of the trial state symmetries in the learning process, or if an NQS with favorable variational energy also encodes a physically correct state.Not all of these issues are specific to NQSs.Results of any VMC calculations are dictated to a large extent by the properties and limitations of the trial states used.An inappropriately chosen variational state, that is, one with a small overlap with the ground state, can still give a good estimate of the ground-state energy [22].If some additional information is known about the ground state, e.g., its symmetries, one can pick a more restrictive variational state function.However, this is often not an optimal strategy if the goal is to find new phases or to locate a phase boundary.In principle, NQSs could be a remedy for such problems.It is reasonable to expect that a single, but expressive enough, NQS can be used to approximate distinct phases.This assumption is supported by the results of Sharir et al. [23] who showed that NQSs can have even higher expressive power than matrix product states [24] and projected entangled pair states [25] as these can be efficiently mapped to a subset of NQSs.In other words, NQSs can be effectively utilized to a larger class of quantum states than these powerful formalisms which are known primarily from their usage in Density Matrix Renormalization Group (DMRG) but are also utilized as variational states in VMC [1,22,26].
In practice, it is not yet clear how to achieve this in a general case.Despite tremendous progress, the research of frustrated quantum spin magnets is still in the stage of testing and developing NQS architectures for simple models, often focusing primarily on reaching the best variational energy in particular regimes [6,16,17,19].In the present work, we aim for a different target.We want to demonstrate that even shallow NQSs can be sufficient for the investigation of qualitatively different ground-state orderings including states forming only in a finite magnetic field.To this goal, we focus on the ground state of antiferromagnetic Heisenberg Hamiltonian on Shastry-Sutherland lattice known as the Shastry-Sutherland model (SSM) which we introduce in more detail in Sect. 2. To our knowledge, this model of frustrated quantum spin system has not been previously addressed within the NQS context, yet it seems to be an ideal testbed for our purposes.
SSM was already investigated by a number of methods, including exact diagonalization (ED) techniques [27][28][29][30][31], quantum Monte Carlo [32], various versions of DMRG [33][34][35][36][37][38], perturbation theory [39][40][41][42] and even quantum annealing [43].These studies have shown that SSM has a rich ground-state phase diagram.In a zero magnetic field these include regions such as singlet spin dimer phase, antiferromagnetic Néel state, spin plaquette singlet phase and probably other phases.The introduction of a finite magnetic field further complicates the picture.Consequently, it is challenging to find a single variational function that can correctly approximate the whole ground-state phase diagram.
In addition, there are still open questions related to the ground-state phase diagram in zero as well as in the finite magnetic field, even in some experimentally relevant regimes of the model.This is important because several magnetic materials have a structure topologically equivalent to SSM.The most notable examples are SrCu 2 (BO 3 ) 2 , BaNd 2 ZnO 5 and rare earth tetraborides RB 4 (R − − Dy, Er, Tm, Tb, Ho) [44][45][46][47][48].All exhibit an intriguing step-like dependence of the overall magnetization on the external magnetic field, which has been found to be inherent to SSM [49,50].Here, each plateau reflects a stable nontrivial spin ordering.The magnetic behavior of these materials is not yet fully understood.This together with other open problems, e.g., the prospect of a narrow spin liquid phase in a zero magnetic field, further motivates the investigation of SSM and its generalizations [42,[51][52][53].
Therefore, SSM presents a model system that has the right combination of properties that are well understood and can be used to benchmark various NQSs, and of open problems that can be potentially illuminated by these variational techniques.This includes the possibility to address the rather complex behavior of a system in relation to a changing magnetic field.
The present work consists of two main parts.In the first one we explore SSM by employing a number of NQS architectures and we test them against ED results for small lattices in zero magnetic field.Here the primarily goal is to find one or few networks that are able to capture the main well-understood ground-state orderings of SSM.Simultaneously, we require these NQSs to have a high chance to describe the magnetization plateaus as well.This means that the ideal network has to give a solid approximation of the ground-state orderings even when no conditions on the total magnetization are imposed.Consequently, we do not focus on getting the best possible variational energy for a particular set of parameters.Rather, we require a good approximation of the energy in distinct regimes of the model, a correct description of the particular orderings, and reasonable computational complexity that allows the usage of the NQS on larger lattices.We argue that when precision, generality, and computational costs are taken into account, a shallow RBM with complex parameters is still a good choice.
In the second part, we introduce a refined learning protocol for the RBM NQS and test it for a wide range of model parameters and different network sizes. We then utilize this protocol in the study of larger systems. We first investigate the zero magnetic field scenario and demonstrate that the RBM is expressive enough to capture all the main phases of the system. We then move to the model in a finite magnetic field and show that, with the right learning strategy, the RBM is able to capture the magnetization plateaus crucial for the description of real materials. This opens the possibility that NQSs could be used to investigate several open problems, such as the existence of the still elusive spin-liquid phase and other orderings predicted but not yet confirmed in SSM.
Shastry-Sutherland model
SSM is described by the Hamiltonian

Ĥ = J Σ_⟨i,j⟩ Ŝ_i · Ŝ_j + J′ Σ_⟨⟨i,j⟩⟩ Ŝ_i · Ŝ_j − h Σ_i Ŝ_i^z ,    (1)

where Ŝ_i = (1/2) σ_i is the spin-1/2 operator at the i-th site, with σ_i being the vector of Pauli matrices. The first term represents the exchange coupling between nearest neighbors on a square lattice (solid lines in Fig. 1). The second term is a sum over specific diagonal bonds arranged in a checkerboard pattern (dashed lines in Fig. 1). Note that these sums are interpreted in terms of nodes, i.e., there is no double counting. Both coupling constants are antiferromagnetic (J, J′ > 0) and we set J′ as the unit of energy throughout the paper. The last term describes the influence of the external magnetic field h pointing in the z-direction.
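For readers who want to experiment with the model numerically, the following is a minimal sketch of how a Hamiltonian of this structure can be assembled with the NetKet toolbox used later in this work (assuming the NetKet 3 API). The bond lists nn_bonds and diag_bonds are placeholders that the user must fill so that they match the solid and dashed bonds of Fig. 1; the snippet only illustrates the structure of Eq. (1) and is not the authors' production code.

```python
import netket as nk
from netket.operator.spin import sigmax, sigmay, sigmaz

def ssm_hamiltonian(hi, nn_bonds, diag_bonds, J, Jp, h):
    """Assemble Eq. (1): S_i.S_j = (1/4)(sx sx + sy sy + sz sz) and S_i^z = sz/2."""
    H = 0
    for (i, j) in nn_bonds:        # nearest-neighbour bonds of the square lattice (coupling J)
        H += 0.25 * J * (sigmax(hi, i) * sigmax(hi, j)
                         + sigmay(hi, i) * sigmay(hi, j)
                         + sigmaz(hi, i) * sigmaz(hi, j))
    for (i, j) in diag_bonds:      # checkerboard diagonal (dimer) bonds (coupling J')
        H += 0.25 * Jp * (sigmax(hi, i) * sigmax(hi, j)
                          + sigmay(hi, i) * sigmay(hi, j)
                          + sigmaz(hi, i) * sigmaz(hi, j))
    for i in range(hi.size):       # Zeeman term, field h along z
        H += -0.5 * h * sigmaz(hi, i)
    return H

# Example usage on a toy 4-site cluster (the bond lists here are purely illustrative):
hi = nk.hilbert.Spin(s=0.5, N=4)
H = ssm_hamiltonian(hi, nn_bonds=[(0, 1), (1, 2), (2, 3), (3, 0)],
                    diag_bonds=[(0, 2)], J=0.63, Jp=1.0, h=0.0)
```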
Basic properties of the ground state
The basic structure of the SSM ground-state phase diagram is well understood. As illustrated in Fig. 2, the SSM at h = 0 has at least three distinct ground-state orderings. These are the dimer singlet (DS) state for J′ ≫ J, the Néel antiferromagnetic (AF) ordering for J′ ≪ J, and the plaquette singlet (PS) state in between. The phase transition from the DS to the PS state is of the first order [29], but the nature of the transition from PS to AF is still under debate. The ED study of Nakano and Sakai [30] suggests that the supposed PS phase actually consists of at least two distinct phases. In addition, some recent studies argue that there is a so-called deconfined quantum critical point (DQCP), which separates a line of first-order transitions, or, potentially, a narrow gapless spin liquid (SL) phase [37,38,54].

Figure 2: Ground-state phase diagram of SSM at h = 0 (cf. Ref. [38]). There is a first-order transition at J/J′ ≈ 0.675 between the DS and PS phases. The gray squares in PS depict the plaquette singlets. The nature of the transition between the PS and AF phases remains unresolved: it is not clear whether there is a narrow spin liquid phase, a DQCP or just a second-order transition in the region labeled with a question mark.
Nevertheless, even without focusing on the possible DQCP and SL phase, the three main orderings, namely DS, PS, and AF, already pose a sufficient challenge for a single variational state because of their distinctive character and symmetries.
The DS phase is formed by an exactly (analytically) accessible state [55]. Numerous analytical and numerical methods have verified that it remains the ground state up to J/J′ ≈ 0.675 [29,30,37]. In the limiting case of J ≪ J′, the system is equivalent to an ensemble of independent spin dimers, each of which forms a singlet ground state. The DS ground state is thus a direct product of dimer singlet states,

|ψ_DS⟩ = ⊗_dimers (|↑↓⟩ − |↓↑⟩)/√2 .    (2)

As such, it is antisymmetric with respect to the exchange of two intradimer spins and symmetric with respect to transformations rearranging only the spin pairs without swapping the intradimer spins. The energy of this ground state is

E_DS = −(3/4) J′ N_D ,    (3)

where N_D is the number of dimers and N_D = N/2 for a lattice with periodic boundary conditions.
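The −3/4 energy per dimer used here follows from textbook spin algebra; as a quick standard check (not specific to this paper):

```latex
\hat{S}_1\cdot\hat{S}_2=\tfrac12\left[(\hat{S}_1+\hat{S}_2)^2-\hat{S}_1^2-\hat{S}_2^2\right]
\quad\Rightarrow\quad
\langle\hat{S}_1\cdot\hat{S}_2\rangle_{\mathrm{singlet}}
=\tfrac12\left[0-\tfrac34-\tfrac34\right]=-\tfrac34 ,
\qquad
E_{\mathrm{DS}}=-\tfrac34\,J'\,N_D .
```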
The PS phase can be understood as weakly coupled plaquette singlet states illustrated in Fig. 2. Plaquette singlet is a ground state of an isolated 4-spin Heisenberg cluster with four bonds arranged in a cycle [29].The pattern of the plaquette singlets in Fig. 2 indicates that the PS state is two-fold degenerate.
It is important to stress again that the physics in the relevant range of J/J′ discussed here (0.675 ≲ J/J′ ≲ 0.82) could be much more complex. As mentioned above, it has been argued that at J/J′ ≈ 0.70 the PS phase splits into two distinct regions with quantitatively different behaviors [30,37,38,54]. For the sake of simplicity, we omit this possibility in most of our discussion. Nevertheless, it might be important for more detailed future studies.
Figure 3: A simplified illustration of the magnetization as a function of the external magnetic field h and coupling constant J, inspired by Ref. [39]. A more detailed illustration would contain additional steps (e.g., a supersolid phase); however, their actual positions and widths are not yet clear. Singlet and triplet arrangements are displayed for some of the plateaus (namely m_z = 1, 1/2, 1/3 and 1/4).
The AF phase stabilizes when J/J′ ≳ 0.82. When J′ becomes negligible, the ground state of SSM approaches the ground state of the antiferromagnetic Heisenberg model with only nearest-neighbor bonds on a square lattice. Although this state is not analytically accessible, it has previously been explored by Monte Carlo (MC) simulations [27]. Using the first-order correction to these quantum MC results, the energy of the SSM in the AF phase was estimated [27]; the resulting expression, Eq. (4), assumes that N is large.
A more detailed discussion of the symmetries of these three states is postponed to Appendix C. Note that the three main phases DS, PS, and AF are reasonably well understood and, simultaneously, they differ qualitatively. This is one of several qualities that make the SSM a suitable testbed for NQSs.
So far we have discussed the h = 0 case. When we introduce a finite magnetic field to the DS phase of Eq. (2), some dimers can morph into triplet states. These triplets are formed in repeating patterns, e.g., checkerboards, stripes, or more complex configurations (for illustration, see Fig. 3), giving rise to stable plateaus of constant magnetization in an increasing magnetic field.
Because each plateau signals a distinct stable ordering, it also presents a challenge for the NQSs, particularly so because a finite magnetic field does not allow for a simple restriction of the Hilbert space to its zero-magnetization part. This restriction was heavily utilized in previous NQS investigations of quantum spin models. Note that it is mostly these plateaus that make SSM interesting experimentally. Good examples are SrCu2(BO3)2, BaNd2ZnO5, CaCo2Al8 and the rare-earth tetraborides RB4 (R = Dy, Er, Tm, Tb, Ho) [44-48], which all exhibit the intriguing step-like dependence of the overall magnetization on the external magnetic field or show magnetic frustration, and can be modeled by SSM or its generalizations.
The ground state is approximated by minimizing the variational energy

E_θ = ⟨ψ_θ|Ĥ|ψ_θ⟩ / ⟨ψ_θ|ψ_θ⟩ ,    (5)

which is estimated by Monte Carlo sampling of the trial wave function ψ_θ(σ^z) in the σ^z basis, as is typical in NQS studies [56]. The variational energy in Eq. (5) is, in the jargon of ML, a loss function. Using this loss function, the parameters θ are optimized to obtain the lowest-energy state that the chosen variational function can represent. In our calculations, we use the VMC implementation from the NetKet NQS toolbox [9,56].
In general, the form of the trial wave function ψ_θ(σ^z) restricts the optimization process to a subset of the Hilbert space. An improper choice of the ansatz can bias the approximation towards a wrong phase or can even make the approach to the correct state impossible. Clearly, this is where one can expect NQSs to outperform standard variational states due to their high expressiveness.
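A minimal sketch of such a VMC optimization with NetKet 3 is shown below. The hyperparameters (optimizer, stochastic-reconfiguration preconditioner, sample numbers) are illustrative placeholders and not necessarily those used for the results reported here; hi and H are assumed to come from the Hamiltonian sketch above.

```python
import netket as nk

# hi, H: Hilbert space and Hamiltonian, e.g. from the ssm_hamiltonian() sketch above
model = nk.models.RBM(alpha=2, param_dtype=complex)      # complex-valued shallow RBM NQS
sampler = nk.sampler.MetropolisLocal(hi)                 # single-spin-flip Metropolis updates
vstate = nk.vqs.MCState(sampler, model, n_samples=2000)  # Monte Carlo variational state

optimizer = nk.optimizer.Sgd(learning_rate=0.05)
sr = nk.optimizer.SR(diag_shift=0.01)                    # stochastic reconfiguration (an assumption here)
driver = nk.driver.VMC(H, optimizer, variational_state=vstate, preconditioner=sr)

driver.run(n_iter=1000)                                  # minimize the loss function of Eq. (5)
print(vstate.expect(H))                                  # variational energy of the trained state
```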
Neural network quantum states
Here, we explore several NQS architectures [6,9]. We chose these particular networks due to their successful application in previous studies of other Heisenberg models.
Restricted Boltzmann machine (RBM) is a generative artificial NN constituted of a visible layer with N nodes (one for each lattice site) fully connected to a single hidden layer with M = αN nodes (hidden degrees of freedom), where α is the hidden-layer density [1]. It can be used to define an NQS of the standard form

ψ_θ(σ^z) = exp(Σ_i a_i σ_i^z) Π_{j=1..M} 2 cosh(b_j + Σ_i W_ji σ_i^z) ,

where the vector θ contains the variational network parameters θ = {a, b, W}. This NQS can be interpreted as a one-layer fully connected neural network with a log cosh activation function, followed by a summation of the outputs and an additional summation of the visible biases [1]. Note that complex-valued parameters are necessary in order to represent generally complex-valued wave-function outputs. The size of the visible layer N is fixed by the size of the investigated spin system. However, the expressive power of RBM can be modified by changing α. The number of variational parameters of RBM is O(αN²).
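For concreteness, a minimal NumPy sketch of this log-amplitude is given below (the standard Carleo-Troyer parametrization; the random parameter values are placeholders, not trained weights).

```python
import numpy as np

def rbm_log_psi(sigma, a, b, W):
    """log psi_theta(sigma) = sum_i a_i sigma_i + sum_j log(2 cosh(b_j + sum_i W_ji sigma_i))."""
    return np.dot(a, sigma) + np.sum(np.log(2.0 * np.cosh(b + W @ sigma)))

N, M = 16, 32                          # hidden-layer density alpha = M / N = 2
rng = np.random.default_rng(0)
a = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))            # visible biases
b = 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))            # hidden biases
W = 0.01 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))  # complex weights
sigma = rng.choice([-1, 1], size=N)    # a basis configuration sigma^z
print(rbm_log_psi(sigma, a, b, W))
```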
Modulus-phase split real-valued RBM (rRBM):
Complex parameters, which generally make the learning process harder, can be avoided by introducing two independent real-valued NNs [18,57] to represent the modulus A(σ^z) and the phase Φ(σ^z) of the wave function separately,

log ψ_θ(σ^z) = A(σ^z) + iΦ(σ^z) .    (8)

Unlike in Ref. [57], where the rRBM architecture proved to be advantageous in the investigation of the transverse-field Ising model, we have found that for SSM the rRBM gives worse results than the complex-valued RBM. This is in accord with a recent study of other frustrated systems, namely the J1-J2 model [19]. Consequently, we discuss the results of this network only briefly in Chapter 4.1 and focus predominantly on complex-valued architectures.
Symmetric variant of RBM (sRBM):
Carleo and Troyer [1] used translational symmetries to reduce the number of variational parameters in RBM. They replaced the fully connected layer with a convolutional layer and set the visible biases to a constant value a_f across each convolutional filter f. The resulting expression for the output is given in Eq. (9), where T_g denotes a symmetry transformation of a spin configuration according to an element g from the symmetry group G of order |G|. The index f denotes the different feature filters. The number of these filters F determines the size of the network, M = F|G|. The resulting sRBM has fewer variational parameters than the RBM by a factor of |G|. We can view this approach as binding the values of some of the O(αN²) parameters, making the total asymptotic number of parameters O(αN). Carleo and Troyer [1] also showed that this approach significantly improves the convergence and accuracy of the ground states of the antiferromagnetic Heisenberg model on a square lattice. However, it suffers from two crucial disadvantages in more general circumstances. The first drawback is that the visible biases are inherently constant for each filter f, which significantly lowers the expressiveness of the network, as discussed later in this section. As we show in Appendix B, the sRBM architecture cannot be modified to ease this condition while preserving the symmetries. The second drawback is that sRBM is not applicable if the ground state does not transform under the trivial irreducible representation (irrep) of a given symmetry group.
To illustrate the problem, let us consider a single spin dimer (i.e., a single bond of SSM with J = 0, J′ = 1 and h = 0). Its ground state is a singlet |ψ0⟩ = (|↑↓⟩ − |↓↑⟩)/√2. The symmetry group of the single-dimer Hamiltonian contains just two operations, an identity and a swap of both spins, G = {g12, g21}. If we apply the swap operation to the ground state, we obtain T_g21 |ψ0⟩ = (|↓↑⟩ − |↑↓⟩)/√2 = −|ψ0⟩. Although this state is a multiple of the ground state, we see that it does not transform under the trivial irrep because one of its characters is χ_g21 = −1. Since sRBM represents only states with T_g|ψ⟩ = |ψ⟩ for all g ∈ G, this symmetry should not be used in sRBM. Note that we do not strictly follow this rule and sometimes use all available lattice symmetries. The reason is that this leads to an NQS with a small number of parameters that is easy to optimize. The resulting variational energy can then be compared with the energy obtained with an RBM with the same α to check how well the full network is optimized, i.e., whether it leads to a lower energy than sRBM. If not, this signals that the variational energy of RBM can be lowered by better learning.
Projected RBM (pRBM): Recently, Nomura [58] introduced an alternative way to symmetrize RBM (or any other NN) using a quantum-number projection (also called an incomplete symmetrization operator),

ψ_θ^sym(σ^z) = Σ_{g∈G} χ_g ψ_θ(T_g σ^z) ,    (10)

where g is an element of the given symmetry group G and χ_g is its character from the irrep in question. The wave function on the right-hand side may be arbitrary, and it can be shown that the function on the left-hand side satisfies the desired transformation property under the group. Unfortunately, pRBM makes the learning process of the NN much more expensive than sRBM. The computational time increases by a factor of |G|, producing a computational cost O(αN²|G|). On the other hand, the pRBM implementation does not suffer from the problems mentioned for sRBM, and it can be generalized by setting mutually independent visible biases (see Appendix B).
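A minimal sketch of this projection, applicable to any log-amplitude function, could look as follows (up to the overall normalization and the convention of using χ_g or its conjugate; log_psi, the permutations implementing T_g, and the characters are user-supplied placeholders).

```python
import numpy as np

def projected_log_psi(sigma, log_psi, permutations, characters):
    """Quantum-number projection of Eq. (10).

    sigma        : configuration array of +/-1 of length N
    log_psi      : callable returning log psi_theta(sigma) of the unsymmetrized network
    permutations : list of index arrays, one per group element g, implementing T_g
    characters   : list of characters chi_g of the chosen irrep
    """
    amps = np.array([np.exp(log_psi(sigma[p])) for p in permutations])
    weights = np.array(characters, dtype=complex)
    # In practice one would evaluate this sum in log space for numerical stability.
    return np.log(np.sum(weights * amps))
```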
Group-convolutional NN (GCNN):
Group equivariant convolutional NNs represent a promising class of NNs built inherently on symmetries. They were proposed by Cohen and Ni [59] as a natural extension of the well-known convolutional neural networks. While convolutional networks preserve invariance under translations, GCNNs are equivariant under the action of an arbitrary group G (which may contain a subgroup of translations). Roth and MacDonald [60] further improved GCNNs so that they can transform under an arbitrary irreducible representation of G, which is more suitable for NQSs for SSM. A GCNN can be composed of any number of hidden layers: the first layer maps the spin configuration onto a feature vector f¹_g indexed by the group elements g, and the subsequent layers mix these features through a nonlinear activation function f (the output is typically a vector, since a GCNN can have multiple parallel feature filters). The result of the last layer, where (j) denotes its individual features, is then projected in a fashion similar to that of pRBM. The main advantage over symmetrizing an arbitrary deep network by the formula from Eq. (10) is that we do not need to evaluate the forward pass of the nonsymmetric wave function |G| times. This is achieved because each layer of the GCNN fulfills equivariance. A GCNN with K layers and a typical number of feature filters F in each layer has O(FN + KF²|G|) parameters.
Jastrow network:
As a baseline, we also use a Jastrow network based on the standard Jastrow ansatz [61,62],

ψ_θ(σ^z) = exp(Σ_{i,j} σ_i^z W_{i,j} σ_j^z) ,

where the variational parameters θ = W_{i,j} form a matrix of size N × N. The Jastrow ansatz is physically motivated by two-body interactions and assigns trainable parameters W_{i,j} to pairwise spin correlations. The number of its parameters scales as O(N²).
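The corresponding log-amplitude is a simple quadratic form; a minimal NumPy sketch (with placeholder random parameters, and the double-counting convention left implicit in W):

```python
import numpy as np

def jastrow_log_psi(sigma, W):
    # log psi_theta(sigma) = sum_{i,j} sigma_i W_ij sigma_j
    return sigma @ W @ sigma

N = 16
rng = np.random.default_rng(1)
W = 0.01 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
sigma = rng.choice([-1, 1], size=N)
print(jastrow_log_psi(sigma, W))
```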
The complicated sign structure of the complex phases of the basis coefficients that form the ground-state wave function presents a major challenge in optimizing the parameters of a variational function for a frustrated spin system. In the case of the Heisenberg model on a bipartite lattice consisting of sublattices A and B (i.e., SSM with J′ = 0), this can be solved using the Marshall sign rule (MSR) [63]. The MSR states that the sign of ψ(σ^z) is given by (−1)^{N↑_A(σ^z)}, where N↑_A(σ^z) is the total number of up-spins on sublattice A. Because this sign alternates with a single spin flip, it can be difficult for a NN to learn the correct signs. However, it is possible to circumvent this problem in two analogous ways.
If the sign structure is dictated by MSR, the Hamiltonian can be gauge transformed by changing the signs of some terms to make all wave-function coefficients positive in the transformed basis. In particular, we change σ^x → −σ^x and σ^y → −σ^y for all sites in sublattice A. The same result can also be obtained by setting the visible biases to a_i = iπ/2 for i ∈ A and a_i = 0 for i ∈ B, as this exactly reconstructs the Marshall sign factor (up to an overall constant factor). In other words, the biases can be set to play the role of a Marshall basis. What is important here is that in the general case the simple Marshall sign rule is not always applicable; especially problematic are systems with strong frustration [18,19,64]. The advantage of using the visible biases instead is that their setting does not have to be known in advance, as it can, despite possible technical difficulties, be learned. Therefore, it is beneficial to include visible biases whenever allowed by the architecture. An additional bonus is that free visible biases also allow one to overcome an improper initialization of the weights.
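The two equivalent choices can be made concrete in a few lines; the sketch below (with an assumed, purely illustrative sublattice mask) computes the Marshall sign factor directly and shows the bias choice that reproduces it up to a global phase.

```python
import numpy as np

def marshall_sign(sigma, sublattice_A):
    """(-1)**N_up_A for a configuration sigma of +/-1; sublattice_A is a boolean mask."""
    n_up_A = int(np.sum(sigma[sublattice_A] == 1))
    return (-1) ** n_up_A

def marshall_biases(sublattice_A):
    """Visible biases a_i = i*pi/2 on A and 0 on B; exp(sum_i a_i sigma_i) then equals
    (-1)**N_up_A up to the constant global phase i**(-N_A)."""
    return np.where(sublattice_A, 0.5j * np.pi, 0.0)

# quick consistency check on a random configuration
rng = np.random.default_rng(2)
mask = np.arange(16) % 2 == 0          # placeholder sublattice mask (illustrative only)
sigma = rng.choice([-1, 1], size=16)
a = marshall_biases(mask)
ratio = np.exp(np.sum(a * sigma)) / marshall_sign(sigma, mask)
print(abs(abs(ratio) - 1.0) < 1e-12)   # the two differ only by a global phase of unit modulus
```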
Comparison of different NQS architectures
It is too expensive to apply all the NQSs introduced above to the investigation of the ground-state phase diagram of SSM on large lattices. Therefore, in the first part of our investigation, we benchmark these NQSs against exact results for smaller lattices obtained by the Lanczos ED method. The aim is to identify a network that is both expressive enough to cover the various phases and computationally tractable even for large lattices. We focus on a regular lattice with N = 4 × 4 = 16 points and an irregular lattice with N = 20 (see Appendix A). Throughout this paper, we apply periodic boundary conditions for all lattices used, unless explicitly stated otherwise. The irregular N = 20 lattice is considered because the N = 16 lattice has some undesirable properties, e.g., some extra symmetries with trivial irreps that favor symmetric networks; it also suffers from stronger finite-size effects and does not exhibit the PS phase. On the other hand, the N = 16 lattice is regular and easy to calculate.
We initially focus on the cases represented by J/J′ = 0.2 (DS phase), J/J′ = 0.9 (AF phase), and J/J′ = 0.63. The case J/J′ = 0.63 was chosen because it represents a realistic situation, namely, it is the exchange-parameter ratio of SrCu2(BO3)2 at ambient pressure [34]. However, because its results are qualitatively in agreement with the case J/J′ = 0.2, we discuss them together as the DS results. Note that we investigate the model with and without MSR. Since the goal here is to compare different networks, we estimate the accuracy of each architecture by comparing the average energy of the last 50 learning iterations, E_50, with the exact result E_ex. Note that this means that we are not using just the lowest obtained energies but also testing the stability of the learning method. Consequently, the resulting error is typically greater than zero even when the network is able to represent the state exactly. The same computational protocol is used for each architecture. In particular, we used 2000 MC samples and 1000 training iterations for three values of fixed learning rates (0.2, 0.05, 0.01). Each particular combination of architecture and basis (MSR or direct) was computed four times for each learning rate (yielding 12 independent runs for each case of interest). This is to eliminate occasional events when the NN gets stuck in a local energy minimum too far from the ground state. Zero magnetization was not implicitly assumed (i.e., we used local single-spin-flip Metropolis updates in VMC). We summarize our results in Table 1, where E^i_50 is the average energy of the last 50 iterations of the i-th run. The number of variational parameters is also shown for each architecture. The difference between GCNN and GCNNt is that for GCNN we used all the symmetries and the correct characters for the expected ground state, whereas GCNNt utilized only the translation symmetry. The error 0.0 here means a relative error smaller than 10⁻⁷, which we consider as "numerical precision" due to the standard MC errors, which are typically larger even for N = 16.
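As a small sketch of this accuracy measure (how the 12 runs are aggregated in Table 1 is not spelled out here, so the function below only evaluates a single run):

```python
import numpy as np

def relative_error_last50(energy_log, e_exact):
    """Average energy over the last 50 training iterations, compared with the ED energy."""
    e50 = np.mean(np.asarray(energy_log)[-50:])
    return abs(e50 - e_exact) / abs(e_exact)
```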
Starting with RBM, one can see that networks with α = 2 (560 parameters for N = 16 and 860 for N = 20), 8 (3380 parameters for N = 20) and 16 (4398 parameters for N = 16) show similar precision; the significantly larger networks are notably better (roughly by a factor of three) only in the AF phase. For the general case, considering the computational costs, this favors the computationally less demanding network with α = 2. Also interesting is the comparison with the Jastrow network. Both architectures have comparable precision in the DS phase for N = 16; however, in the AF phase, and in the DS phase for N = 20 with MSR, RBM is one or even two orders of magnitude more precise than the Jastrow ansatz.
For N = 16, the sRBM architecture demonstrates superior performance.The full automorphism group of the finite lattice has been used in its implementation.Despite the resulting small number of variational parameters, it shows excellent precision.In fact, a significant increase of α is not that advantageous (compare the cases α = 4 and α = 128).In the DS phase, the use of symmetries allowed sRBM to find the ground-state energies within the numerical precision (hence the zero error).Since sRBM can be thought of as RBM with additional constraints on the values of the weights, this already suggests that the learning protocol for RBM can be improved, which we demonstrate in the next section.However, it is important to stress that the excellent results are a consequence of the special symmetries of the N = 16 lattice.Both the DS and AF states transform under the trivial irreducible representation, and the automorphism group is therefore applicable without special treatment.This is not true for the DS ground state in different tiles, including regular ones such as N = 6 × 6 (for a more detailed discussion of the symmetries, see Appendix C).This is illustrated in the second part of Table 1 where sRBM with α = 4 gives very poor results in the DS phase of N = 20 due to the improper treatment of symmetries.In short, using symmetries in sRBM for states that do not transform under a trivial irrep can make the variational energy significantly worse than for simple RBM.For N = 20, sRBM also fails in the AF phase, but only when adopting a direct basis.This implies that sRBM has trouble learning the correct sign structure of the state for larger lattices, which can be attributed to the fixed visible biases.
The remaining architectures, namely pRBM and GCNN, show excellent accuracy for N = 16. They clearly outperform all other networks in the AF phase. However, the results at N = 20 are less convincing, especially when one takes into account that these networks are more computationally demanding than RBM even in cases when RBM contains more parameters. Furthermore, the precision reached required the usage of the correct symmetries of the expected state, i.e., the proper line from Table 2 in Appendix C. If one uses an improper one, i.e., if a different state is expected, as illustrated by the last two lines in Table 1, the precision can drop by several orders of magnitude. Similarly, the precision decreases significantly for both GCNN and pRBM when we use only the group of translations instead of the full symmetry group, as illustrated by GCNNt in Table 1. Note that for this case, the precision in the AF phase drops to the level of a simple RBM with α = 2. The network is much better in the DS phase, but in the following chapter we will demonstrate that even RBM with α = 2 and a modified learning protocol can reach numerical precision in this phase. Although we cannot exclude that much better results could be obtained for the symmetrized pRBM and GCNN networks with a different learning protocol, considering their much higher computational demands and the necessity to identify a priori the correct irrep symmetries for each lattice type to make the learning efficient, the presented results favor RBM for the study of larger clusters.
The last question to be addressed here is whether using MSR would be beneficial.Table 1 shows several cases where MSR is favorable in the AF phase (e.g., for sRBM and N = 16), but this is not a general rule.In addition, its usage comes with a price as well.We have noticed that the MSR basis seems to strongly favor the AF ordering even for J/J ′ where PS is already the ground state in exact results.We will discuss this briefly when addressing larger lattices.
To wrap it up, in general, the usage of MSR basis does not lead to significantly better results.With some exceptions, the networks presented here are able to approximate the ground-state energy quite well even without MSR.Therefore, we will mostly omit the MSR from further discussion.Furthermore, if the symmetry of the ground state is known, it is worth using this information in building the NN.If not, then the usage of just translations does not lead to a significant improvement of the precision.Fortunately, the complex-valued RBM with visible biases can give a very good approximation of the ground-state energy without any restrictions.Its clear advantage is that no preliminary information about the ground-state properties is needed.As such, it is suitable for problems where the character of the ground state or position of the phase boundary is unknown.In addition, the precision of RBM for SSM can be significantly improved using a different learning strategy discussed in the following section.
Investigation of the ground-state phase diagrams
Focusing solely on RBM allowed us to test several learning strategies and employ more precise MC calculations. What follows is a description of the best learning protocol we have found, which we used to produce all the results discussed below. It proved beneficial to use more precise MC calculations already during training; we typically generate 4000-12000 MC samples at every sampling step. It was also more advantageous to run 10-30 independent learnings (with random initial variational parameters) with shorter learning times than to use a few runs with many learning iterations. We used approximately 2000 training iterations in each run. During learning, we lowered the learning rate η in several discrete steps. Typically, we started with η = 0.08 (≈200 iterations), then changed it to η = 0.04 (≈1600 iterations), followed by η = 0.02 (≈100 iterations), η = 0.01 (≈100 iterations) and η = 0.003 (≈50 iterations). The trained RBM was then used to calculate the expectation values of the energy and the order parameters introduced in the next section, for which we used 12-60 thousand evaluation steps. Consequently, the Monte Carlo error bars in all the figures presented are negligible for small lattices; the relevant absolute error comes from the learning process or the limitations of the NQS used. The state with the lowest energy (evaluated more precisely after training) of all independent runs was kept as the final result in the following discussion. Due to the stochastic fluctuations in the learned parameters, it was in some cases advantageous to refine the results by fine-tuning the final state multiple times with a high number of MC samples but a small number (5-10) of iterations and a small learning rate (η ≤ 0.001), keeping the result with the lowest energy. Moreover, transfer learning was employed in some problematic regimes, as described below.
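A minimal sketch of this staged schedule is shown below (assuming the NetKet 3 API; for brevity a small Heisenberg chain stands in for the SSM Hamiltonian of the earlier sketch, and the SR preconditioner is again an assumption rather than a documented detail of the original runs).

```python
import netket as nk

graph = nk.graph.Chain(length=16, pbc=True)                   # stand-in lattice for this sketch
hi = nk.hilbert.Spin(s=0.5, N=graph.n_nodes, total_sz=0)      # restrict to the M = 0 sector
H = nk.operator.Heisenberg(hilbert=hi, graph=graph)           # replace with the SSM Hamiltonian
sampler = nk.sampler.MetropolisExchange(hi, graph=graph)      # two-spin exchange updates
vstate = nk.vqs.MCState(sampler, nk.models.RBM(alpha=2, param_dtype=complex), n_samples=8000)

# learning-rate stages quoted in the text: (eta, number of iterations)
schedule = [(0.08, 200), (0.04, 1600), (0.02, 100), (0.01, 100), (0.003, 50)]
for eta, n_iter in schedule:
    driver = nk.driver.VMC(H, nk.optimizer.Sgd(learning_rate=eta),
                           variational_state=vstate,
                           preconditioner=nk.optimizer.SR(diag_shift=0.01))
    driver.run(n_iter=n_iter)

print(vstate.expect(H))    # in practice, evaluate the trained state with many more samples
```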
Ground-state orderings
As already discussed, good agreement of the variational energy with the exact one does not guarantee that the variational state correctly captures the character of the exact ground state, i.e., that it reflects the correct phase. To examine this, and to see whether the RBM NQS can correctly describe the transitions between the phases, we calculate order parameters for the three main expected orderings. They are constructed to be large (close to one) whenever the state is in the respective phase and small in the other domains.
In particular, we define the order parameter for the DS phase as

P_DS = −(4/3) (1/N_D) Σ_{(i,j)∈dimers} ⟨Ŝ_i · Ŝ_j⟩ ,

which reflects the fact that the operator Ŝ_1 · Ŝ_2 has, for an isolated dimer, the expectation value −3/4 (singlet state). Therefore, P_DS is one in the DS phase and strictly lower in the other phases.
For the PS order parameter, we use a definition based on the order parameter of Ref. [38], in which the order parameter is given by the difference of the expectation values of Q_r = (1/2)(P_r + P_r⁻¹) summed over the two subsets of squares, with P_r being the permutation operator. This operator performs a cyclic permutation of the four spins on a plaquette (a square of the lattice without the diagonal J′ bond) at position r. Here, the first sum in Eq. (15) runs over the subset of squares A (see Fig. 1) and the second sum runs over the subset B. The meaning of this construction can be understood by looking at Fig. 2. Note that in the investigation of the plaquette ordering we utilized, in addition to periodic boundary conditions (torus geometry), also a lattice with mixed ones. For periodic boundary conditions all squares are used, so each sum runs over N/4 squares. For mixed boundary conditions, we followed Ref. [38] and use regular lattices with open boundary conditions in the x-direction with L_x = 2L and periodic ones in the y-direction with L_y = L, so that N = 2L². However, the order parameter is calculated only in the central L × L square to mitigate the boundary effects; hence, each sum runs over L²/4 squares. The operator Q_r gives a large mean value on a plaquette singlet (gray square) and a value close to zero on the empty square between four plaquette singlets. For periodic lattices, we do not know which set of squares will become singlets, as the state is degenerate; therefore, we use the absolute value.
For the AF phase we employ the standard spin structure factor,

P_AF ∝ S(q) = (1/N) Σ_{i,j} e^{i q · r_ij} ⟨Ŝ_i · Ŝ_j⟩ ,

where r_ij denotes the difference of the discrete coordinates of spins i and j, and we take q = (π, π), which measures the antiferromagnetic checkerboard ordering. Finally, in the case of a finite magnetic field we use the normalized magnetization in the z-direction,

M = (2/N) Σ_i ⟨Ŝ_i^z⟩ ,

to identify the expected plateaus in the magnetization. These expectation values are calculated using VMC for the trained RBM NQS.
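As a minimal sketch (assuming the NetKet 3 API and the hi and diag_bonds objects from the Hamiltonian sketch above), the two simplest of these observables can be assembled as operators and evaluated on the trained state; the PS order parameter, which involves four-spin permutation operators, is omitted here for brevity.

```python
from netket.operator.spin import sigmax, sigmay, sigmaz

def ds_order_parameter(hi, diag_bonds):
    """P_DS = -(4/3) * average over dimers of <S_i.S_j>; equals 1 for a product of singlets."""
    op = 0
    for (i, j) in diag_bonds:
        op += 0.25 * (sigmax(hi, i) * sigmax(hi, j)
                      + sigmay(hi, i) * sigmay(hi, j)
                      + sigmaz(hi, i) * sigmaz(hi, j))
    return (-4.0 / 3.0) * (1.0 / len(diag_bonds)) * op

def magnetization(hi):
    """M = (2/N) sum_i <S_i^z> = (1/N) sum_i <sigma_i^z>; equals 1 at full polarization."""
    op = 0
    for i in range(hi.size):
        op += sigmaz(hi, i)
    return (1.0 / hi.size) * op

# e.g. p_ds = vstate.expect(ds_order_parameter(hi, diag_bonds))
```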
Zero magnetic field
We first investigate the phases of SSM in a zero magnetic field.Unlike the procedure used to compare different network architectures, here we restrict the Hilbert space by the condition M = 0. Before moving to larger lattices, we test the RBM for N = 20 in a wide range of J/J ′ .We use the irregular lattice N = 20 because it shows an onset of the PS ordering (see the black dashed line in Fig. 4(a)) not present for smaller regular lattices.We also readdress the role of the parameter α within the new learning protocol, but start our discussion with the case α = 2.
As is clear from the comparison of the ground-state energies in panels (b) and (c) of Fig. 4, the RBM variational energy agrees very well with the ED. The updated learning protocol ensures that the relative error in the J/J′ < 0.68 region, i.e., for the DS phase, is of the order of the numerical precision already for α = 2, despite not using any symmetries except for the condition M = 0. The largest error is in the vicinity of the expected first-order phase transition from the DS to the PS phase, but only on the side of the expected PS phase. Nevertheless, even here, the largest observed relative error in energy was approximately 1% for α = 2.
Given the focus of our study, even more important than the energy error is the nature of the optimized variational states. Panel (a) in Fig. 4 shows that a shallow network, i.e., an RBM with complex parameters and α = 2, is expressive enough to correctly capture the formation of the distinct DS (blue diamonds) and AF orderings (red crosses), as well as the onset of the PS phase (black circles). The agreement is far from perfect, though. Consistent with the results for the energy, the largest differences in the order-parameter values between RBM and ED lie just above the expected phase transition. Here an error of 1% or less in the estimation of the ground-state energy translates into an error of tens of percent in the order parameters. Still, even here the RBM gives a correct qualitative picture. The position of the abrupt change of phase matches the exact result and there is a clear onset of the PS ordering. With increasing J/J′, the RBM results align again with the exact ones.
This benchmark shows that RBM with α = 2 can easily capture the correct state in the DS phase, but gives worse results above the critical J/J′ ≈ 0.68. What is not clear is whether the relative errors in panel (c) represent some inherent limitation of the RBM with small α, e.g., a difficulty in setting the correct sign structure of the frustrated state, or are related to the learning process. Gradually increasing α from 2 (blue circles) to 4 (red pluses), 8 (green crosses) and 16 (black diamonds with yellow cores) in the problematic region lowers the relative error in energy. However, this significant improvement in energy leads only to a small improvement of the order parameters near the critical point. This is shown in panel (a), where the results calculated with RBM with α = 16 are marked with the same symbols as for α = 2 but highlighted via differently colored edges.

Figure 4: (a) Order parameters for the N = 20 lattice. The blue solid line (ED), pure blue diamonds (RBM with α = 2) and blue diamonds with red edge (RBM with α = 16) show the DS order parameter; the black dashed line (ED), black circles (RBM with α = 2) and black circles with yellow edge (RBM with α = 16) show the PS order parameter; and the red dot-dashed line (ED), red crosses (RBM with α = 2) and red crosses with blue edge (RBM with α = 16) show the AF order parameter. The results of the symmetric variants of RBM are not shown, as they were comparable to the results presented for J/J′ > 0.68 and well off the exact results for J/J′ ≤ 0.68. (b) The exact (red line) and RBM α = 2, 16 ground-state energies. (c) Relative error in the ground-state energy for the RBM with α = 2 (blue circles), 4 (red pluses), 8 (green crosses) and 16 (black-yellow diamonds). Note that the relative error in the DS phase for RBM α = 2 is at the level of numerical precision.
Employing the lattice symmetries in the NQS did not significantly improve the results. We have tested the sRBM architecture with α = 4 in the direct as well as in the MSR basis, using the same protocol as for RBM. The sRBM results have been comparable to RBM for J/J′ > 0.68 and much worse than the RBM results below this critical value. This suggests that the issue is not entirely due to insufficient learning. On the other hand, the learning was most difficult in the vicinity of the observed discontinuity. A significant fraction (often more than half) of the independent runs for 0.69 ≤ J/J′ ≤ 0.72 ended either in the wrong phase (DS) or even in a state with an energy much higher than the real ground state. This was not true for the rest of the J/J′ interval, where most of the independent runs with the same α showed very similar variational energies. Furthermore, the relative errors for all investigated RBM variants (including those not presented here) follow the same pattern: they are maximal just above the critical point and then, neglecting some noise, monotonically decrease with increasing J/J′. Yet, increasing α significantly lowers the variational energy even for J/J′ > 0.74. This again suggests that the problem is indeed the small α. Ultimately, both statements seem to be correct: a significantly larger α than 16 is needed to capture the critical region, together with high-precision learning, that is, many independent runs.

Figure 5: Order parameters (panels (a)-(c), with the AF one in (c)) and the variational energy (d) as a function of J/J′ for h = 0 and various lattice sizes. All results in panels (a)-(d) have been obtained using RBM NQS with α = 2 and VMC with exchange updates (simultaneous flips of two opposite spins in the basis state) for the Hilbert subspace restricted to M = 0. The black dashed lines in panels (d) and (e) show the asymptotic energies for the DS (horizontal) and AF phase (tilted). The black crosses represent the results for N = 64 for which we have utilized transfer learning. The inset (e) shows the details of the variational energy for N = 64 in the vicinity of the phase transition calculated using RBM (green diamonds), sRBM in the direct basis (blue stars), sRBM with MSR (red empty diamonds) and three points calculated with RBM utilizing transfer learning (black crosses). The empty purple squares show the RBM results for N = 100, and the orange triangles are infinite DMRG results taken graphically from Ref. [37].
After testing the RBM on small lattices and understanding its strength and limitations, we can now approach larger ones.We focus on α = 2 as the increase in the precision of the variational energy obtained with larger α's does not significantly improve the estimates of the order parameters.Although we can not easily compare the VMC results with the exact diagonalization for larger lattices, we can use the exact asymptotic results for the energy in DS Eq. ( 3) and AF phase Eq. ( 4) to guide us.Fig. 5 shows the evolution of the order parameters and energy for N = 20, 36, 64 and N = 100.The results agree very well with the exact result in the assumed DS phase and are between the exact energy of N = 20 and the asymptotic energy for large N in the supposed AF phase up to several points in a very narrow region near the discontinuous phase transition discussed later.Fig. 5 illustrates the usability of RBM for larger clusters.The presented results support the overall picture of the DS and AF phase separated by a narrow PS or at least its indication.Nevertheless, a much more thorough finite-size analysis would be necessary to assess the phase boundaries.For example, P AF decreases with increasing system size in the whole relevant range of J ′ which is in agreement with previous studies, e.g.[38].Consequently, a careful and precise extrapolation of P AF to the thermodynamic limit is needed to identify J above which the AF ordering prevails.However, even in this respect, there is an issue.The point of the discontinuous phase transition from the DS phase to the PS phase should be J/J ′ ≃ 0.675, but our results at larger lattices push it to J/J ′ ≃ 0.7.Besides finite-size effects, this could also be related to two technical problems.The first is the difficulty of training the NQS in the vicinity of the discontinuous phase transition.The second is the tendency of the direct base to prefer DS over AF ordering.Both these issues can be seen in panel (e) (inset of panel (d)) with details of the N = 64 (and N = 100) results.Here, green diamonds show the RBM data, red empty diamonds are sRBM data with MSR basis, and blue stars are sRBM data for direct basis, all with α = 2 for N = 64.Clearly, all these networks show (different) problems around the expected point of the phase transition.For J/J ′ = 0.7 and 0.72 sRBM with MSR gives energy lower than RBM and even lower than the energy of DS ordering.Therefore, the sharp transition must be placed below J/J ′ = 0.7.However, sRBM with MSR cannot correctly capture the onset of DS ordering.The sRBM network with direct basis illustrates the opposite problem.It overestimates the stability of the DS ordering.
Investigation of sRBM showed that the RBM results at J/J ′ = 0.7 are not yet fully converged.Because we have not been able to solve this problem using the direct approach, we utilized transfer learning.We used the RBM parameters trained for J/J ′ = 0.74 as a starting point to train the network at J/J ′ = 0.72, then used these results as a starting point for J/J ′ = 0.70, and finally these results for 0.69.That way we obtained lower variational energies for J/J ′ = 0.72 and 0.70 than in the direct approach or in the sRBM results, and the J/J ′ = 0.72 result dropped even below the DS energy.Interestingly, this also leads to an observable change in the order parameters (black crosses in all panels).In contrast to the N = 20 case, the PS ordering is especially sensitive to this change, as seen from the comparison of black crosses and green diamonds in panel (d).Even if it is suggested by the order parameters, the transfer learning technique has not reached the point of the expected phase transition below J/J ′ = 0.69.The reason is that the energy obtained at this point exceeds the DS energy already reproduced by the direct approach.This shows that, although useful, transfer learning has to be used with care.What is confusing is that the variational energies appear to be stable.They follow almost a straight line, with only small differences between various versions of the RBM and even lattice size, as illustrated by the N = 100 data.Yet, these energies are approximately 2% higher than the energy of infinite DMRG (iDMRG) results in the expected PS phase, which were taken graphically from Ref. [37] and are marked by the orange triangles.However, the iDMRG results were obtained using a different type of lattice.Namely, an infinite cylinder with a circumference of 10 lattice points.Therefore, they are not directly comparable due to the finite-size effects.Nevertheless, the predicted position of the DS-PS transition point just below J/J ′ = 0.69 is too high and presents a conundrum.
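As an illustration of the transfer-learning sequence just described, a minimal NetKet-style sketch could look as follows (assumptions: NetKet 3 API; build_hamiltonian(j) is a hypothetical helper returning the SSM Hamiltonian for a given J/J′, e.g. via the earlier Hamiltonian sketch; vstate is an MCState already trained at J/J′ = 0.74; learning rates and iteration counts are placeholders).

```python
import netket as nk

def transfer_learn(vstate, build_hamiltonian, ratios=(0.72, 0.70, 0.69), n_iter=300):
    """Re-optimize the same variational state while stepping J/J' towards the transition,
    reusing the parameters learned at the previous ratio as the starting point."""
    energies = {}
    for j in ratios:
        H = build_hamiltonian(j)
        driver = nk.driver.VMC(H, nk.optimizer.Sgd(learning_rate=0.01),
                               variational_state=vstate,
                               preconditioner=nk.optimizer.SR(diag_shift=0.01))
        driver.run(n_iter=n_iter)
        energies[j] = vstate.expect(H)
    return energies
```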
Another, related issue is the plaquette order parameter. Our results for the lattices with periodic boundary conditions suggest that there might be some fundamental problem with accessing the PS phase using RBM, because not all lattices show a significant PS order parameter where expected. However, this might be related to the degeneracy of the plaquette ordering. To shed more light on this problem, we tested other variants of the SSM lattices, in particular, a version where a perfect PS is expected, and SSM with mixed boundary conditions that break the degeneracy.

Figure 6: Finite-size scaling of the PS order parameter (panels (a) and (b)) and of the variational energy (c) for J/J′ = 0.74 and 0.8, compared between different methods and lattice boundary conditions. The empty green diamonds show the results for the complex-valued RBM with α = 2 for periodic boundary conditions (L = √N). The black squares and the red circles show the mixed boundary conditions (L = √(N/2)), where the former have been randomly initialized and the latter initialized in the ideal PS ordering. Blue stars are DMRG results taken graphically from Ref. [38]. In panel (c), the upper half shows the J/J′ = 0.74 results compared with the iDMRG result for an infinite cylinder with a circumference of L = 10, taken graphically from Ref. [37]; the bottom half compares our results for J/J′ = 0.8 with the DMRG results from Ref. [38]. Panel (d) shows the evolution of the RBM variational energy as a function of J/J′ when initialized in an ideal PS and with transfer learning utilized in the learning process. The horizontal blue line shows the exact DS energy for N = 20 × 10. The diagonal dashed gray line shows the asymptotic (large-lattice) energy of the AF ordering. Inset (e) shows the respective order parameter P_PS.
PS and mixed boundary conditions:
We performed a simple numerical experiment.We took the SSM lattice from Fig. 1 but set all interactions to zero except around the squares of type A. That is, we constructed a lattice of interacting spins on otherwise independent squares A. Starting with random initial conditions, MC with complex RBM and α = 2 was able to converge and correctly capture the expected plaquette states on all accessible lattices.This means that there is no fundamental problem with PS ordering, and even a small RBM is expressive enough to describe this state.We then used these ideal plaquette states as an initial state for the MC calculations of the full SSM model at the respective lattices.Interestingly, for periodic boundary conditions, this did not lead to an improvement.PS ordering was strongly suppressed in the learning process, and we did not reach more favorable variational energies compared to those already obtained when starting from random initialization.
In accordance with, e.g., the recent work of Yang et al. [38], we decided to break the two-fold degeneracy of the expected PS ordering by changing the periodic boundary conditions to mixed ones. Following Ref. [38], we investigated cylinders with open boundary conditions in the x-direction with L_x = 2L and periodic ones in the y-direction with L_y = L, so that N = 2L². In this geometry, the SSM has a preferred singlet plaquette pattern, and a significant PS ordering is expected in the PS phase. We show in Appendix D that this ordering can be learned by a complex RBM with α = 2 even for the lattice N = 20 × 10. In addition, using this geometry also allowed us to compare our results directly with the DMRG results of Ref. [38]. Therefore, we first focus here on the parameters investigated there, although they are far away from the DS-PS boundary and, therefore, show a smaller P_PS. In particular, we study J/J′ = 0.8, for which we used random initial conditions, and J/J′ = 0.74, where both random and ideal plaquette states were used as initial states in the variational MC.
A comparison of the finite-size scaling of the PS order parameter and the variational energy obtained for periodic and mixed boundary conditions and different RBM strategies with the results of DMRG [38] (or iDMRG [37]) is shown in panels (a), (b), and (c) of Fig. 6. In general, periodic boundary conditions lead to lower variational energies, as demonstrated in Fig. 6(c), where the green diamonds closely follow the finite-size scaling predicted by the DMRG results for J/J′ = 0.74. RBM for lattices with mixed boundary conditions proved to be more difficult to train. On the other hand, these lattices show a significant PS ordering. Although the RBM variational energy is generally larger than that of the DMRG and iDMRG studies, their PS order parameters are in reasonable agreement. However, here we draw attention to two observations. For J/J′ = 0.8, where P_PS is low, a strategy with random initial conditions was sufficient to reproduce the DMRG results, as illustrated in Fig. 6(b). However, for N = 20 × 10 three independent learnings led to almost identical variational energies, with differences smaller than 0.2% and therefore imperceptible in Fig. 6(b). Yet these states showed a noticeable difference in P_PS, visible in Fig. 6(b) (three black squares below each other). We attribute this problem to the combination of the overall low value of P_PS and its sensitivity to fluctuations of the plaquette ordering between the squares of the lattice. The situation worsened for weaker coupling J. For J/J′ = 0.74, learning with random initial states worked only for small lattices. For larger ones, the strategy where we initialized the RBM in an ideal PS gave much better results: the same number of iterations led to lower energies and the expected P_PS. This suggests that although the RBM with α = 2 is capable of describing plaquette orderings, this state is difficult to learn without some help. Nevertheless, we utilized the strategy where an ideal plaquette state is used as an initial state to address another problem opened in the previous section.
We tested the position of the DS-PS phase-transition point by focusing on N = 20 × 10 with mixed boundary conditions. We started from the ideal plaquette ordering (see Fig. 10 in Appendix D) at J/J′ = 0.66, i.e., still in the expected DS phase, and then used transfer learning by sequentially increasing J/J′ for all points plotted in Fig. 6(d). Here, the blue horizontal line signals the exact energy of the dimer state. Although still slightly higher than the iDMRG result J/J′ = 0.675, this lattice significantly reduced the estimate of the coupling up to which DS survives, to J/J′ ≈ 0.68, compared to the above result J/J′ ≈ 0.69 for the periodic lattice. Fig. 6(d) also shows that, in contrast to the periodic lattices, the PS ordering is robust here (see inset (e)). Actually, when artificially initialized, it can survive the learning process even for J/J′ < 0.68, where the DS is the true ground state, although the learning rate plays an important role in this process. We used η = 0.02 (≈300 iterations) followed by η = 0.003 (≈1000 iterations) at each step.
During our analysis, we have avoided the discussion of the possible SL phase and the related DQCP, which are compelling scenarios in part of the region here assigned to the PS phase. The reason is that, due to several difficulties discussed above, e.g., the fact that our RBM results underestimate the PS order parameter even for N = 20 and large α, a reliable analysis of SL and DQCP is currently beyond our reach. Nevertheless, the expressiveness of a simple RBM with α = 2 demonstrated here suggests that the problem can indeed be attacked by larger, more expressive, or specialized networks. A good candidate might be a composed GCNN that would combine networks for different characters of the symmetry group for a particular lattice size and boundary conditions.
Magnetization plateaus
Historically, the most intriguing property of the SSM is its ability to describe fractional plateaus in the magnetization as a function of an external magnetic field, which are also observed in real materials. To address this problem through VMC, one has to drop the restriction of fixed M = 0. In addition to significantly enlarging the Hilbert space, this also makes the optimization (learning) process a harder task. Moreover, each plateau represents a different ordering and, therefore, a challenge for NQS. However, as already demonstrated here, a simple RBM NQS with α = 2 is sufficiently expressive to capture the main plateaus.

Figure 7: Results for N = 20 and J/J′ = 0.45. Panels (a) and (b) show the magnetization and the dimer-state order parameter as functions of the magnetic field. Panel (c) presents the relative error of the variational energy with respect to the ED result, where the blue dotted lines are just a guide to the eye. Panel (d) shows the evolution of the normalized energy with the external magnetic field. Blue filled diamonds represent the direct approach, empty red diamonds were obtained by utilizing the transfer learning discussed in the main text, and the empty red squares by fixing MN to integer values from the vicinity of the direct approach.
We assume only periodic boundary conditions and focus on the case J/J ′ = 0.45, which is inside the DS phase (at h = 0), where several broad plateaus are expected to form.The most stable ones, if allowed by the lattice size, should be the M = 1/2 and 1/3 plateaus [28,42].We start the discussion by benchmarking the RBM NQS results (blue filled diamonds in all panels of Fig. 7) against the ED results for the N = 20 lattice (blue solid lines).Clearly, the variational energy in panel (d) is in very good agreement with the exact one.The relative error plotted in panel (c) is much lower than 1% in the whole range of h.In addition, it shows a structure which can be understood by comparing the profile of the relative error dependence on h with the normalized magnetization plotted in panel (a) and the DS order parameter in panel (b).Panel (a) shows that RBM NQS with α = 2 is able to capture all main steps of the magnetization observed in the ED curve.The most stable are M = 0, 1/2 and 1, followed by plateaus 1/5 and 3/10 that form in the range 0.7 ≲ h/J ′ ≲ 1.2.
The stability of these plateaus is also reflected in the relative error.Although we do not use any restriction on M, the relative error for h/J ′ < 0.7, where M = 0, is negligible.In this region, the system stays in the DS ordering as revealed by panel (b).A similar situation exists for h/J ′ ≥ 2.1.Here, the state is fully polarized (M = 1) and, therefore, easy to reproduce with variational techniques.Other regions with very small errors in the variational energy are the central parts of the stable plateaus discussed above, as best illustrated by the 1/2 one.Here RBM NQS gives a relative error below 0.1%.Consequently, the regions with the highest errors are related to the transitions between the stable plateaus.Here we also observe the largest deviations of the NQS magnetization (and P DS ) from the ED results.These problematic regions can be divided into two types.The first one includes the step edges, i.e., the abrupt changes of the magnetization for M ≤ 1/2.The related convergence problems are similar to the difficulties of correctly capturing the precise position of the discontinuous phase transition discussed for h = 0 and J/J ′ ≈ 0.69.As such, they can be also treated by the transfer learning.The red hollow diamonds in Fig. 7 were obtained by approaching the step edges from left and right using the RBM parameters learned in the centers of the neighboring plateaus as the initial input.Transfer learning clearly suppresses errors and gives the correct value of M even very close to discontinuities.
The second problematic region is at large magnetic fields, where the M = 1/2 plateau gives way to the saturated M = 1 state. It was shown only recently that this region can host exotic quantum states, including several spin-supersolid phases [65]. Only one additional step-like rise of M from 1/2 is expected here in the thermodynamic limit, followed by a continuous increase of the magnetization to M = 1 as h grows larger. Nevertheless, the finite N = 20 lattice shows a number of very narrow transient steps in this region. This makes the region unsuitable for transfer learning, unless a much more refined grid of h values is applied. On the other hand, the small lattice allowed us to test the actual expressiveness of RBM by fixing MN to integer values taken from the vicinity of the direct RBM results for MN. The results with the lowest energies are depicted by the empty red squares, and they reproduce both M and the DS order parameter of the exact study. This proves that, with the correct learning strategy, RBM with α = 2 is sufficient for the description of this rather complex evolution of the SSM ground state in an increasing magnetic field.
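Fixing MN in practice amounts to restricting the Hilbert space to a fixed total-S^z sector; a minimal sketch (assuming the NetKet 3 API, with the mapping between the normalized M and NetKet's total_sz stated in the comments):

```python
import netket as nk

# NetKet's total_sz is sum_i <S_i^z> = (1/2) sum_i sigma_i^z, so a normalized magnetization M
# corresponds to total_sz = M * N / 2 (which must be an attainable integer or half-integer).
N = 20
for M_target in (0.4, 0.5, 0.6):                 # candidate sectors near the direct RBM result
    hi = nk.hilbert.Spin(s=0.5, N=N, total_sz=M_target * N / 2)
    # ...train an RBM in this sector with exchange updates (as in the zero-field protocol)
    # and keep the state with the lowest variational energy among the candidate sectors.
```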
The stability of the magnetization plateaus must be confirmed on larger lattices, because the magnetization is always discrete on finite clusters, yet it may become continuous in the thermodynamic limit. Moreover, the lattice N = 20 is not divisible by three, so it cannot host the important 1/3 plateau. To show that RBM NQS can really capture these features, we address larger clusters. Fig. 8 presents, in addition to the exact (solid red line) and RBM (red diamonds) results for N = 20, the RBM results for N = 36 (blue squares) and N = 64 (yellow triangles). We stress here that these results were obtained with the direct approach; we have not used transfer learning or fixed M, to avoid the possibility of introducing a bias towards seemingly stable plateaus. Still, the results for N = 36 show stable flat steps in the magnetization, including both the 1/2 and the 1/3 plateaus. Although the results for N = 64 are less stable, they confirm the 1/2 plateau and clearly signal the formation of two additional plateaus for h/J′ < 1.2. These are very encouraging results, as they again show that a simple RBM with a small number of parameters is expressive enough to correctly capture the complicated magnetization dependence reflecting the underlying complex ordering of the quantum spins.
Conclusion
We have investigated the ground-state properties of the Shastry-Sutherland model via variational Monte Carlo with NQS variational functions. Our main goal was to show that a single and relatively simple NQS architecture can be used to approximate a wide range of regimes of this model. We first tested and benchmarked several NQS architectures that are known from the literature to be suitable for different variants of the Heisenberg model. We discussed the role, advantages, and drawbacks of NQSs that incorporate lattice symmetries and biases on the visible layer. We conclude that when precision, generality and computational costs are taken into account, a good choice for addressing larger SSM lattices, both without and with an external magnetic field, is a restricted Boltzmann machine NQS with complex parameters.
Focusing on RBM NQS allowed us to refine the learning strategy. We discovered that if more precise MC sampling is used, then it is advantageous to run several (tens of) short independent optimizations instead of a few long learnings. Using this strategy for the lattice N = 20 with periodic boundary conditions, we have demonstrated that already an RBM NQS with α = 2 can accurately approximate the DS and AF phases and shows the onset of the PS ordering. It also gives the correct point of the discontinuous change from the DS regime to PS/AF. However, in its vicinity there are the largest deviations from the exact results. Here, the variational energy can be significantly reduced by increasing α, but this leads only to a small improvement in the estimation of the order parameters. Consequently, we used RBM NQS with α = 2 to address larger lattices. Although reliable in the DS and AF phases for all lattices up to N = 100, the PS phase proved to be more difficult to reach. However, this is partially a consequence of the degeneracy of the PS order. When this degeneracy was broken by the use of mixed boundary conditions, we were able to reproduce the DMRG result of Ref. [38] for the PS order parameter. Furthermore, we showed that the RBM NQS with α = 2 is expressive enough to hold the PS order, although it might be difficult to train from a random initial state. To overcome this limitation, we introduced a strategy in which the RBM NQS is first trained on a lattice that enforces PS ordering, and this state is then used as an initial state for the network in the relevant regime of SSM. This strategy allowed us to estimate the position of the DS-PS phase transition to be J/J′ ≈ 0.68 for N = 20 × 10, which is, taking into account the finite-size effects, in good accordance with the iDMRG result J/J′ = 0.675. However, even when this strategy was used together with transfer learning, the training of RBM NQS for lattices with mixed boundary conditions proved to be more challenging than for periodic ones. For example, the finite-size scaling of the variational energy at J/J′ = 0.74 closely follows the DMRG result; however, for mixed boundary conditions and N = 20 × 10 the variational energy is still approximately 4% above the DMRG result [38].
A gradual increase of the magnetic field in SSM leads to formation of stable plateaus in the magnetization, each reflecting a different ground-state ordering.We have shown that RBM with α = 2 can capture the relevant plateaus that form for the lattice sizes studied here.Transfer learning can then be utilized to refine the results.
In summary, we have demonstrated that the SSM is a good system for benchmarking NQSs and that a simple RBM NQS can be used to address its ground state in a broad range of regimes. This opens the possibility of using NQSs to address some unresolved questions related to the SSM, e.g., the existence of the spin-liquid phase, the DQCP and other exotic quantum phases expected in a finite magnetic field, or to precisely capture the size and character of additional steps in the magnetization for larger lattices. We leave this, however, for future, more focused studies.
B Visible biases in sRBM and pRBM
Here we show that allowing uneven visible biases in the sRBM is equivalent to constant biases when enough symmetries are enforced. Let us suppose that the visible biases are kept non-constant, a^f → a^f_i, in Eq. (9). We further assume the condition that ∀i, j ∃g : gσ_i = σ_j. This condition holds for every SSM tile.
It follows that
, where C is the number of unique g that fulfill the condition above. The first term in Eq. (9), after the generalization a^f → a^f_i, can then be rewritten with a constant bias; thus, non-constant biases can be replaced by a constant value without loss of generality. Therefore, visible biases cannot be built into the sRBM as independent variational parameters. On the other hand, the pRBM is not limited in this way. This can be clearly seen after rewriting both ansätze into similar forms: the sRBM log-amplitude is a sum over the symmetry group inside the hidden-unit activations, whereas the pRBM log-amplitude is the logarithm of a sum of exponentials over the group elements. The sum (rather than the product) of exponentials makes it impossible to use an analogous reduction of visible biases as in Eq. (B.1). Note that the usage of visible biases does not typically lead to a significant increase in the number of parameters (+N). Yet, they usually improve the convergence of the learning process for frustrated systems because they help to set the correct sign structure of the approximated state. Therefore, it is beneficial to include visible biases in the NQS parameters whenever possible.
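As a minimal numerical check of this reduction (not part of the original derivation), the sketch below symmetrizes a site-dependent visible-bias term over a transitive permutation group, with cyclic translations standing in for the SSM tile symmetries, and verifies that it coincides with the fully symmetrized constant bias obtained from the site average.

```python
import numpy as np
from itertools import product

N = 6
rng = np.random.default_rng(0)
a = rng.normal(size=N)                     # site-dependent visible biases a_i
a_const = a.mean()                         # candidate constant replacement

# Cyclic translations act transitively: for every pair (i, j) some g maps site i to site j.
group = [np.roll(np.arange(N), shift) for shift in range(N)]

def symmetrized_bias(sigma, biases):
    """Sum of the visible-bias term over all group elements."""
    return sum(np.dot(biases, sigma[g]) for g in group)

# Check on every spin configuration that site-dependent biases give the same
# symmetrized value as the constant bias a_const (the Appendix B reduction).
for sigma in product([-1, 1], repeat=N):
    sigma = np.array(sigma)
    lhs = symmetrized_bias(sigma, a)
    rhs = symmetrized_bias(sigma, np.full(N, a_const))
    assert np.isclose(lhs, rhs)
```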
C Symmetries
An infinite Shastry-Sutherland lattice has the p4g wallpaper-group symmetry, whose point group is C_4v [67]. The character table of C_4v is shown in Table 2. Each eigenstate of the SSM on the infinite lattice must transform according to one of the rows of the character table, which, however, does not include the translations or glide reflections.
For the finite lattices investigated in this paper, the character table and the number of additional translations depend on the system size and shape (note that we also use irregular lattices). Different small clusters can have different character tables with varying numbers of irreducible representations (irreps) [68,69]. A detailed analysis of each lattice goes beyond the scope of our paper. In practical implementations, we used the automorphisms of the lattice graph, obtained with routines implemented in NetKet [9,56], together with a particular line of its character table. For illustrative purposes, it is still useful to discuss the irreps of the individual phases of the SSM on the infinite lattice.
The DS state, described by Eq. (2), changes sign when we swap the spins in a dimer; more generally, the parity of the permutation determines the sign change. Consider an L × L square lattice, where L is even, and a reflection symmetry along its diagonal axis (σ_v) passing through the squares containing the J′-bonds. The number of J′-bonds intersected by the axis is L/2 (considering the toroidal periodicity). For each of these bonds, a sign change occurs during the reflection, while the sign of the other dimers does not change. A similar argument can be constructed for the C_4 rotation. This has an important implication even for finite lattices: for regular lattices, the ground state of the DS phase transforms under the trivial irrep A_1 if L is divisible by 4, and under the antisymmetric irrep (corresponding to B_2) otherwise. This has important consequences for the use of symmetries on some finite lattices, as discussed in the main text.
The PS phase is twofold degenerate. Leaving out the translations, this means that it transforms under the irrep E, which is the only irrep of dimension 2.
The analysis of the AF state for finite lattices is rather complicated [68,69]. Where needed, we have assumed that the AF state transforms under the trivial irrep A_1 (both with and without the application of the MSR).
D DS and PS in the RBM
DS: In principle, the complex-valued RBM is capable of representing the DS state. For example, it can take advantage of the visible biases (the first term in Eq. (7)) and set them to reproduce the correct sign structure according to the MSR. Since all nonzero amplitudes have the same absolute value, the dense layer (the second term in Eq. (7)) then only needs to nullify the configurations that do not appear in the DS state and return a constant otherwise. An example of such a construction is b_j = 0 and W_ij = iπ/2 for spin i ∈ dimer j, and W_ij = 0 otherwise. (D.1) We number the dimers by the index j; cosh(Σ_i W_ij σ^z_i + b_j) is then one if the spins in dimer j are antiparallel and zero otherwise. In this construction, the size of the hidden layer corresponds to the number of dimers, i.e., N/2. By substituting Eq. (D.1) into Eq. (7), the DS state from Eq. (2) is reproduced. Notably, W_ij nullifies all basis states that are not present in the DS state, while a_i ensures the correct sign and b_j can be adjusted to give the correct normalization. This shows that the RBM is, in theory, able to represent the DS state exactly. Whether such a state can actually be learned is, in principle, a different question; nevertheless, the results in Fig. 8 clearly show that it can.
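A minimal numerical check of the construction in Eq. (D.1), assuming σ^z eigenvalues ±1/2 (with eigenvalues ±1 the same idea works with W_ij = iπ/4): it verifies that the product of cosh factors equals one exactly on dimer-antiparallel configurations and vanishes otherwise.

```python
import numpy as np
from itertools import product

# Two dimers on four sites: (0, 1) and (2, 3); sigma^z eigenvalues are +-1/2 here.
N = 4
dimers = [(0, 1), (2, 3)]

W = np.zeros((N, len(dimers)), dtype=complex)
for j, (i1, i2) in enumerate(dimers):
    W[i1, j] = W[i2, j] = 1j * np.pi / 2       # Eq. (D.1) with b_j = 0

def hidden_factor(sigma):
    """Product over dimers of cosh(sum_i W_ij sigma_i): 1 if every dimer is
    antiparallel, 0 as soon as one dimer is parallel."""
    return np.prod(np.cosh(sigma @ W))

for sigma in product([0.5, -0.5], repeat=N):
    sigma = np.array(sigma)
    all_antiparallel = all(sigma[i1] + sigma[i2] == 0 for i1, i2 in dimers)
    value = hidden_factor(sigma)
    assert np.isclose(value, 1.0 if all_antiparallel else 0.0)
```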
PS:
A complex RBM with α = 2 is expressive enough to encompass plaquette ordering even for large lattices. We demonstrate this for the N = 20 × 10 lattice with mixed boundary conditions (open in the x direction and periodic in y). We start with a toy model, namely an SSM lattice with J_A = 1 and J_B = J′ = 0, where J_A (J_B) is the coupling strength at the edges surrounding the squares of type A (B), see Fig. 1. Starting from a random initial state, the VMC converged to the plaquette ordering illustrated in the left panel of Fig. 10. We then use this state as the initial condition in the learning process for finite J = J_A = J_B and J′ = 1 in the range of values where plaquette ordering is expected. These results are shown in Fig. 6 and are discussed in the main text. In the central and right panels of Fig. 10 we show how the increase in J suppresses the plaquette ordering in the SSM.
Figure 1 :
Figure1: (a) The Shastry-Sutherland lattice.Bonds with coupling strength J are represented by solid lines, while bonds with J ′ by dashed ones.The letters A and B divide the "empty" squares into two subsets, which are used to define the plaquette order parameter.
Figure 2 :
Figure2: Illustration of the SSM phase diagram for small h based on the results from Ref.[38].There is a first-order transition at J/J ′ ≈ 0.675 between DS and PS phases.The gray squares in PS depict the plaquette singlets.The nature of the transition between the PS and AF phases remains unresolved.It is not clear whether there is a narrow spin liquid phase, a DQCP or just a second order transition in the region labeled with a question mark.
Figure 4 :
Figure 4: Comparison of exact (lines) and various RBM variational results (symbols) for the irregular lattice N = 20. (a) Evolution of the order parameters. Here, the blue solid line (ED), pure blue diamonds (RBM with α = 2), and blue diamonds with red edge (RBM with α = 16) show the DS order parameter; the black dashed line (ED), black circles (RBM with α = 2), and black circles with yellow edge (RBM with α = 16) show the PS order parameter; and the red dot-dashed line (ED), red crosses (RBM with α = 2), and red crosses with blue edge (RBM with α = 16) show the AF order parameter. The results of the symmetric variants of the RBM are not shown, as they were comparable to the results presented for J/J′ > 0.68 and well off the exact results for J/J′ ≤ 0.68. (b) The exact (red line) and RBM α = 2, 16 ground-state energies. (c) Relative error in the ground-state energy for the RBM with α = 2 (blue circles), 4 (red pluses), 8 (green crosses), and 16 (black-yellow diamonds). Note that the relative error in the DS phase for the RBM with α = 2 is at the level of numerical precision.
Figure 5 :
Figure5: Evolution of order parameters for DS (a), PS (b), AF (c) and variational energy (d) as a function of J/J ′ for h = 0 and various lattice sizes.All results in panels (a)-(d) have been obtained using RBM NQS with α = 2 and VMC with exchange updates (simultaneous flip of two opposite spins in the basis state) for the Hilbert subspace restricted to M = 0.The black dashed lines in panels (d) and (e) show the asymptotic energies for the DS (horizontal) and AF phase (tilted).The black crosses represent the results with N = 64 for which we have utilized transfer learning.The inset (e) shows the details of the variational energy for N = 64 in the vicinity of the phase transition calculated using RBM (green diamonds), sRBM in direct base (blue stars), sRBM with MSR (red empty diamonds) and three points calculated with RBM utilizing transfer learning (black crosses).The empty purple squares show the RBM results for N = 100, and the orange triangles are infinite DMRG results taken graphically from Ref.[37].
Figure 6 :
Figure6: Comparison of finite size scaling of the order parameters for PS (a),(b) and variational energy (c) for J/J ′ = 0.74 and 0.8 between different methods and lattice boundary conditions.The empty green diamonds show the results for complexvalued RBM with α = 2 for periodic boundary conditions (L = N ).The black squares and the red circles show the mixed boundary conditions (L = N /2), where the former have been randomly initialized and the latter in the ideal PS ordering.Blue stars are DMRG results taken graphically from Ref.[38].In panel (c), the upper half shows the J/J ′ = 0.74 results compared with the iDMRG result for an infinite cylinder with a circumference of L = 10 taken graphically from Ref.[37].The bottom half compares our results for J/J ′ = 0.8 with the DMRG results from Ref.[38].Panel (d) shows the evolution of the RBM variational energy as a function of J/J ′ when initialized in a ideal PS and with transfer learning utilized in the learning process.The horizontal blue line shows the exact DS energy for N = 20 × 10.The diagonal dashed gray line shows the asymptotic (large lattice) energy of AF ordering.Inset (e) shows the respective order parameter P PS .
Figure 7 :
Figure 7: Comparison of ED (blue solid lines) and RBM with α = 2 (symbols) results for N = 20 and J/J′ = 0.45. Panels (a) and (b) show the magnetization and the dimer-state order parameter as functions of the magnetic field. Panel (c) presents the relative error of the variational energy with respect to the ED result, where the blue dotted lines are just a guide to the eye. Panel (d) shows the dependence of the normalized energy on the external magnetic field. Blue filled diamonds represent the direct approach, empty red diamonds were obtained by utilizing the transfer learning discussed in the main text, and the empty red squares by fixing M to integer values in the vicinity of the direct-approach result.
Figure 8 :
Figure 8: Comparison of the exact (red solid line) magnetization (a) and variational energy (b) results with VMC calculations utilizing RBM NQS with α = 2 for lattices N = 20, 36 and N = 64 as functions of external magnetic field.
First, the sRBM log-amplitude reads log ψ_θ(σ^z) = Σ_{g∈G} Σ_f log cosh( Σ_{i=1}^{N} W^f_i T_g(σ^z)_i + b_f ), and then the pRBM log-amplitude reads log ψ^G_θ(σ^z) = log Σ_{g∈G} χ_{g^{-1}} exp( Σ_{i=1}^{N} a_i T_g(σ^z)_i + Σ_j log cosh( Σ_{i=1}^{N} W_{ij} T_g(σ^z)_i + b_j ) ).
Figure 10:
Figure 10: Plaquette ordering Q_r in the central part of the SSM lattice N = 20 × 10 with mixed boundary conditions. Here Q_r = ⟨Q̂_r⟩, where Q̂_r = (1/2)(P_r + P_r^{-1}), with P_r being the cyclic permutation operator on square r. The left panel shows a toy model with J_A = 1 and J_B = J′ = 0, where J_A (J_B) is the coupling strength at the edges surrounding the squares of type A (B). The center and right panels show Q_r for the SSM with J = 0.68 (just below the DS-PS transition) and J = 0.8. The values of Q_r for squares with diagonal bonds are not shown.
Table 1
where the values are min_i |E_i^{50} − E_ex| / E_ex, with i enumerating the twelve independent runs. There are several results in Table 1 which were important for our decision on which network should be used in the detailed study of the phase diagram on larger lattices. Starting
Table 1 :
Comparison of the precision (lower is better) of NQS variational results on lattices N = 16 and N = 20. The listed values were calculated as min_i |E_i^{50} − E_ex| / E_ex.
Table 2 :
Character table of the C4v point group describing symmetries of the Shastry-Sutherland lattice.
"Physics"
] |
Quantum entanglement maintained by virtual excitations in an ultrastrongly-coupled-oscillator system
We study the effect of quantum entanglement maintained by virtual excitations in an ultrastrongly-coupled harmonic-oscillator system. Here, the quantum entanglement is caused by the counterrotating interaction terms and hence it is maintained by the virtual excitations. We obtain the analytical expression for the ground state of the system and analyze the relationship between the average excitation numbers and the ground-state entanglement. We also study the entanglement dynamics between the two oscillators in both the closed- and open-system cases. In the latter case, the quantum master equation is microscopically derived in the normal-mode representation of the coupled-oscillator system. This work will open a route to the study of quantum information processing and quantum physics based on virtual excitations.
considered in the quantum Rabi model 8 . In addition, the role of the CR terms in the creation of entanglement between two atoms has been investigated in Ref. 49 . We consider an ultrastrongly-coupled two-harmonic-oscillator system. We study the ground-state entanglement of the two oscillators and analyze the average excitation numbers in the system. We also study the entanglement dynamics of the system when it is initially in the zero-excitation state, in which case all the excitations are created by the CR terms. The influence of the environment dissipations on the system is analyzed based on a microscopically derived quantum master equation in the normal-mode representation. The rest of this paper is organized as follows. First, we present the physical model of two coupled harmonic oscillators and its Hamiltonian, and we analyze the parity-chain property of this system. Second, we obtain the exact analytical eigensystem of the coupled two-oscillator system. Third, the average virtual excitation numbers are calculated analytically and the quantum entanglement of the ground state is analyzed by calculating the logarithmic negativity. Fourth, we study the dynamics of the average virtual excitation numbers and the quantum entanglement between the two oscillators in both the closed- and open-system cases. Finally, we present a brief conclusion.
Results
Model and hamiltonian. We consider an ultrastrong coupling system, in which two harmonic oscillators are ultrastrongly coupled to each other through the so-called "position-position" type interaction (Fig. 1). This system is described by the Hamiltonian where x 1 ( x 2 ) and p 1 ( p 2 ) are, respectively, the coordinate and momentum operators of the first (second) oscillator with the resonance frequency ω 1 ( ω 2 ) and mass µ , the parameter η is the coupling strength between the two oscillators. By expanding the interaction term, Hamiltonian (1) can be expressed as where we introduce the renormalized frequencies and coupling strength as By introducing the following creation and annihilation operators Hamiltonian (2) becomes with C = ( ω a + ω b )/2 being a constant term. Here a † (a) and b † (b) are, respectively, the creation (annihilation) operators of the two oscillators with the corresponding resonance frequencies ω a and ω b . In Eq. (5), the first two terms and the constant term represent the free Hamiltonian of the two oscillators. The parameter g = −ξ/(2µ √ ω a ω b ) denotes the coupling strength between the two oscillators. We note that this interaction includes both the rotating-wave and CR terms. In general, in the case of weak coupling and near resonance, the rotating-wave approximation can be made by discarding the CR terms. In this paper, we consider the ultrastrongcoupling case in which the CR terms cannot be discarded. In the presence of the CR terms, the ground state of the system will include excitations and hence quantum entanglement will exist in the ground state. Note that an ultrastrongly-coupled two-mode system has recently been realized in superconducting circuits 50 . Figure 1. Schematic diagram of the coupled two-harmonic-oscillator system. Two harmonic oscillators with resonance frequencies ω a and ω b are coupled to each other via a "position-position" type interaction with the coupling strength g. The parameters γ a and γ b are the decay rates associated with the heat baths in contacted with the oscillators a and b, respectively. www.nature.com/scientificreports/ In this two-oscillator system, we introduce the parity operator as P = (−1) a † a+b † b , which has the standard properties of a parity operator, such as P 2 = I , P † P = I , and P † = P 7, 51 . The Hamiltonian H in Eq. (5) remains invariant under the transformation P † HP = H , based on the relations P † aP = −a , P † a † P = −a † , P † bP = −b , and P † b † P = −b † . The Hilbert space of the system can be divided into two subspaces with different parities: odd and even. The basis states of the odd-and even-parity subspaces are, respectively, given by and The eigenvalues of the parity operator P corresponding to the odd and even parity states are −1 and 1, respectively.
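As a minimal numerical illustration of this model (not code from this work), the sketch below builds the Hamiltonian in the form of Eq. (5) in a truncated Fock basis, assuming the interaction takes the position-position form g(a + a†)(b + b†) containing both rotating and counter-rotating terms, and verifies that it commutes with the parity operator P = (−1)^{a†a+b†b}; the parameter values are placeholders.

```python
import numpy as np

def destroy(n_max):
    """Annihilation operator in a Fock basis truncated at n_max - 1 excitations."""
    return np.diag(np.sqrt(np.arange(1, n_max)), k=1)

n_max = 10                                   # truncation; results converge as n_max grows
wa, wb, g = 1.0, 1.0, 0.3                    # frequencies and coupling (illustrative, units of wa)

a = np.kron(destroy(n_max), np.eye(n_max))
b = np.kron(np.eye(n_max), destroy(n_max))

# Position-position coupling written with both rotating and counter-rotating terms;
# the constant term of Eq. (5) is dropped since it only shifts all energies.
H = wa * a.conj().T @ a + wb * b.conj().T @ b \
    + g * (a + a.conj().T) @ (b + b.conj().T)

# Parity operator P = (-1)^(a^dag a + b^dag b) commutes with H.
n_tot = np.round(np.diag(a.conj().T @ a + b.conj().T @ b)).astype(int)
P = np.diag((-1.0) ** n_tot)
assert np.allclose(H @ P - P @ H, 0.0)

# Ground state obtained by exact diagonalization of the truncated Hamiltonian.
evals, evecs = np.linalg.eigh(H)
ground_state = evecs[:, 0]
```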
Eigensystem of the coupled two-oscillator system. To study the quantum entanglement of the eigenstates, we need to diagonalize the Hamiltonian H in Eq. (2). To this end, we introduce the transformation operator 52 with an appropriately defined mixing angle θ. It is obvious that the ground state of the two-oscillator system is |0⟩_A|0⟩_B. To study the virtual excitations in the system, we need to know the eigenstates expressed in the representation associated with a†a and b†b.
H|m⟩_A|n⟩_B = E_{m,n}|m⟩_A|n⟩_B, m, n = 0, 1, 2, . . . (16) It can be seen that the superposition components in the ground state are even-parity states. This property can be confirmed because the transformation U conserves the excitation number and the squeezing operators change the excitation number two by two, without changing the parity.
Ground-state entanglement and quadrature squeezing. We study the ground-state entanglement in this system by calculating the logarithmic negativity. For the two-oscillator system, if the coupling is sufficiently weak, i.e., g ≪ {ω a , ω b } , the interaction Hamiltonian between the two oscillators can be reduced by the RWA as , which conserves the number of excitations. In this case, the ground state of the system is a trivial direct product of two vacuum states |0� a |0� b , which does not contain excitations. In the presence of the CR terms, the |0� a |0� b is not an eigenstate of the system and the ground state will possess excitations. Below, we use numerical method to obtain the ground state of Hamiltonian (5) and calculate the ground state entanglement between the two oscillators. In the presence of the CR terms, the ground state of the two-oscillator system can be expressed as where these superposition coefficients are given by C m,n = a �m| b �n|G� , which should be solved numerically.
The �G|a † a|G� � = 0 and �G|b † b|G� � = 0 reveal that the ground state of the system contains excitations. These excitations in the ground state are called virtual excitations because these excitations cannot be extracted from the system. The effect of the virtual excitations can be seen from the probability amplitudes in the ground state. The distribution of these probability amplitudes can also exhibit the parity of the ground state. As the ground state is an even parity state, and hence these probability amplitudes associated with the odd parity basis states will disappear. In Fig. 2, we show the absolute values of these probability amplitudes |C m,n | . Here we can see that the values of |C m,n | decrease with the increase of m and n and that there is a symmetric relation |C m,n | = |C n,m | . In addition, the values of these odd-parity probability amplitudes C m,n with m + n being an odd number are zero, which is a consequence of the fact that the ground state is an even-parity state.
We also calculate the average excitation numbers a † a and b † b in the ground state |G� as where we have used the formula, In Fig. 3a, we show the average excitation numbers a † a and b † b in the ground state |G� as functions of the scaled coupling strength g/ω r in the degenerate oscillator case ω a = ω b = ω r . These results show that the average excitation numbers of the two modes are identical (two curves overlap each other). This is because the corotating terms conserve the excitations and the CR terms create simultaneously the excitations in the two modes. The The degree of entanglement of the ground state can be quantized by calculating the logarithmic negativity 54,55 . For a bipartite system described by the density matrix ρ , the logarithmic negativity can be defined by where T b denotes the partial transpose of the density matrix ρ of the system with respect to the oscillator b, and the trace norm ρ T b 1 is defined by Using Eqs. (34), (35), and (36), the logarithmic negativity of ground state of the two coupled oscillators can be obtained. In Fig. 3b, we show the logarithmic negativity N as a function of the coupling parameter g/ω r . The curve shows that the degree of entanglement between the two oscillators in the ground state monotonically increases over the entire range of g. This is because the CR terms in Hamiltonian (5) cause the virtual excitations in the ground state of the system and maintain the quantum entanglement between the two oscillators. If the CR terms are discarded, then the ground state of the system becomes a separate state |0� a |0� b .
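A minimal sketch of the logarithmic-negativity evaluation, assuming the log2 convention N = log2 of the trace norm of the partial transpose (the natural logarithm is also used in the literature) and a density matrix expressed in the truncated bare Fock basis of the sketch above.

```python
import numpy as np

def logarithmic_negativity(rho, dim_a, dim_b):
    """N = log2 of the trace norm of the partial transpose with respect to mode b.

    rho is a (dim_a*dim_b) x (dim_a*dim_b) density matrix in the bare Fock basis,
    with basis ordering index = m*dim_b + n for |m>_a |n>_b."""
    rho4 = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    rho_pt = rho4.transpose(0, 3, 2, 1).reshape(dim_a * dim_b, dim_a * dim_b)
    # Partial transpose preserves Hermiticity; hermitize only for numerical safety.
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh((rho_pt + rho_pt.conj().T) / 2)))
    return np.log2(trace_norm)

# Example usage, continuing the sketch above:
# rho_G = np.outer(ground_state, ground_state.conj())
# print(logarithmic_negativity(rho_G, n_max, n_max))
```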
We also study the quadrature squeezing in the ground state by calculating the fluctuations of the rotated quadrature operators. We introduce the rotated quadrature operators for the two modes as The commutation relation of the above two rotated quadrature operators is According to the uncertainty relation, we have Then the quadrature squeezing appears along the angle θ o if the variances of the rotated quadrature operators satisfy the relation 56 For the ground state |G� given in Eq. (27), the variances of the rotated quadrature operators can be obtained as When we exchange the subscripts a and b in Eq. (41), the expression does not change for a given rotating angle. This means that, in the resonance case ω a = ω b = ω r , the squeezing is the same for the two bosonic modes in www.nature.com/scientificreports/ the ground state. This point can also be seen from Hamiltonian (5), which is symmetric under the exchange of the subscripts and operators for the two modes in the resonance case. In Fig. 4a, we show the variance �X 2 a (θ a ) as a function of the rotating angle θ a in the resonance case ω a = ω b = ω r . Here we can see that the variance �X 2 a (θ a ) is periodic function of θ a and that the minimum of �X 2 a (θ a ) is obtained at θ a = π/2 and 3π/2 . Note that in the present case (sinh r a cosh r a + sinh r b cosh r b ) < 0 . We also show the variance �X 2 a (π/2) as a function of the coupling strength g/ω r in the resonance case ω a = ω b = ω r , as shown in Fig. 4b. We observe that the squeezing increases with the scaled coupling strength g/ω r . This is because the quadrature squeezing is caused by the CR interaction terms.
Dynamics of quantum entanglement. The phenomenon of quantum entanglement accompanied with virtual excitations can also be seen by analyzing the entanglement dynamics of the system. We consider the case in which the system is initially in the zero-excitation state |0� a |0� b . In the closed-system case, a general state of the system can be written as By substituting Eqs. (5) and (42) into the Schrödinger equation, the equations of motion for these probability amplitudes A m,n (t) are obtained as For the initial state |0� a |0� b , the initial condition of these probability amplitudes reads A m,n (0) = δ m,0 δ n,0 . By numerically solving Eq. (43) under this initial condition, the evolution of these probability amplitudes can be obtained. Using Eqs. (35), (36), and (42), we can calculate numerically the average excitation numbers a † a and b † b and the logarithmic negativity of the state |ψ(t)�.
In Fig. 5a, we show the time evolution of the average excitation numbers a † a and b † b in modes a and b. Here we can see that, similar to the ground state case, the average excitation numbers in the two modes are identical (the two curves overlap each other). In addition, the average excitation numbers experience a periodic oscillation. In Fig. 5b, we show the time dependence of the logarithmic negativity N(t) of the state |ψ(t)� . The curve shows that logarithmic negativity between the two oscillators also experiences a periodic oscillation. Here we choose the initial state of the system as |0� a |0� b , the existence of the CR terms still causes the appearance of virtual excitations, which leads to entanglement between the two oscillators. This result is different from that in the RWA case in which the CR terms are discarded in the two oscillators under the same initial state. When we discard the CR terms and choose the initial state as |0� a |0� b , which is the eigenstate of the corotating interaction term g(a † b + b † a) , the system will stay this state. Then there are no virtual excitations in the system and no quantum entanglement between the two oscillators.
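A sketch of the corresponding closed-system evolution, continuing the truncated-basis objects (H, a, n_max, logarithmic_negativity) defined in the sketches above; the initial state is |0⟩_a|0⟩_b and the time step and number of steps are illustrative.

```python
import numpy as np
from scipy.linalg import expm

dim = n_max * n_max
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                                  # |0>_a |0>_b in the bare Fock basis

dt, n_steps = 0.05, 400
U = expm(-1j * H * dt)                         # single-step propagator

n_a_op = a.conj().T @ a
excitations, negativities = [], []
psi = psi0
for _ in range(n_steps):
    psi = U @ psi
    excitations.append(np.real(np.vdot(psi, n_a_op @ psi)))   # <a^dag a>(t)
    rho = np.outer(psi, psi.conj())
    negativities.append(logarithmic_negativity(rho, n_max, n_max))
```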
We also study the influence of the environment dissipations on the dynamics of the system. As we consider the ultrastrong-coupling regime of the coupled system, we derive the quantum master equation in the normal-mode representation of these two coupled oscillators. We employ the standard Born-Markov approximation under the condition of weak system-bath couplings and short bath correlation times to derive the quantum master equation. The secular approximation is made by discarding these high-frequency oscillating terms including exp(±iω A t) , exp(±iω B t) , and exp[±i(ω A ± ω B )t] . The quantum master equation in the normal-mode representation of Hamiltonian (5) can be written as www.nature.com/scientificreports/ Note that the cross terms between the two modes a and b in Eq. (50) are induced by the interaction between the two oscillators. For below calculations, we express the density matrix of the two-oscillator system in the bare-mode representation as with the density matrix elements ρ m,n,j,k (t) = a �m| b �n|ρ s (t)|j� a |k� b . For an initial state |0� a |0� b , the nonzero density matrix element is ρ 0,0,0,0 (0) = 1 . By numerically solving Eq. (47) under the initial condition, the time evolution of the density matrix ρ s (t) can be obtained.
Below we study the dynamics of the average excitation numbers and quantum entanglement in this system. Based on Eq. (47), the expressions of the average excitation numbers a † a(t) and b † b(t) can be expanded as Therefore, the average excitation numbers a † a(t) and b † b(t) can be obtained by solving the equations of motion for these density matrix elements in the number-state representation.
In Fig. 6a, the dynamics of the average excitation numbers a † a(t) and b † b(t) is shown in the open-system case with different time t. We observe that the two excitation numbers a † a(t) and b † b(t) overlap each other and initially experience a large oscillation. With the increase of time t, the oscillation amplitudes of the average excitation numbers decrease gradually. In the long-time limit t ≫ 1/γ a,b , the average excitation numbers will reach steady values due to the dissipations. www.nature.com/scientificreports/ The entanglement of the density matrix ρ s (t) can be quantified by calculating the logarithmic negativity. In terms of Eqs. (35), (47), and (53), the logarithmic negativity of the state ρ s (t) can be obtained numerically. In Fig. 6b, we show the logarithmic negativity N(t) of the density matrix ρ s (t) versus the time t. The result shows that the logarithmic negativity oscillates very fast due to the free evolution of the system. We also find that the envelope of the logarithmic negativity converges gradually with the evolution time t and eventually reaches a stable value due to the dissipations. The time scale of the oscillation-pattern decay for the logarithmic negativity is very similar to that of the excitations created by the CR interaction terms. In particular, we find that there exists steady-state entanglement due to the presence of the CR interaction terms in this system.
In this work, we consider the ultrastrong-coupling regime and hence the quantum master equation is derived in the normal mode representation. For comparison, we show in Fig. 6c,d the evolution of the average excitation numbers and the logarithmic negativity calculated by solving the phenomenological quantum master equation, which is obtained by adding the dissipators of two free bosonic modes into the Liouville equation, The initial state of the system is the same as that considered in the microscopic quantum master equation. We see from Fig. 6 that, for the average excitation numbers, though these results can approach steady-state values, the envelop and the oscillation amplitude are different for the results obtained with two different quantum master equations. However, for the logarithmic negativity, we find that the difference between the two results exists but is small when g/ω r = 0.2 . We checked the fact that the difference will increase as the increase of the ratio g/ω r . Therefore, the microscopic quantum master equation should be used in the ultrastrongly-coupledoscillator system.
Conclusion
In conclusion, we have studied quantum entanglement in an ultrastrongly-coupled two-harmonic-oscillator system. Concretely, we have studied the ground-state entanglement by calculating the logarithmic negativity of the ground state. Here, the quantum entanglement is maintained by the virtual excitations generated by the CR terms and bound in the ground state. We have also studied the dynamics of quantum entanglement of the system. By microscopically deriving a quantum master equation in the normal-mode representation of the two oscillators, we analyzed the influence of the dissipations on the entanglement dynamics and found that there exists steady-state entanglement in this system.
"Physics"
] |
Real Time Monitoring of Temperature of a Micro Proton Exchange Membrane Fuel Cell
Silicon micro-hole arrays (Si-MHA) were fabricated as a gas diffusion layer (GDL) in a micro fuel cell using the micro-electro-mechanical-systems (MEMS) fabrication technique. The resistance temperature detector (RTD) sensor was integrated with the GDL on a bipolar plate to measure the temperature inside the fuel cell. Experimental results demonstrate that temperature was generally linearly related to resistance and that accuracy and sensitivity were within 0.5 °C and 1.68×10−3/°C, respectively. The best experimental performance was 9.37 mW/cm2 at an H2/O2 dry gas flow rate of 30/30 SCCM. Fuel cell temperature during operation was 27 °C, as measured using thermocouples in contact with the backside of the electrode. Fuel cell operating temperature measured in situ was 30.5 °C.
Introduction
Fuel cells have been increasingly miniaturized and are common in portable electronic products, including cellular phones and PDAs. Silicon-based substrates are highly compatible with microelectro-mechanical-systems (MEMS) technology [1][2]. Porous silicon has been utilized as the gas diffusion layer (GDL) in fuel cells, replacing traditional carbon cloth or carbon paper [3,4,5]. Porous silicon has also been used to produce proton exchange membranes [6].
Electrochemical etching with hydrofluoric acid has been studied since 1956. In 1990, Lehmann [7] characterized porous silicon in detail, and, in 1996, Lehmann [8] investigated the development of a porous silicon array structure, indicating that etching depends on electrolyte concentration, electrolyte temperature, silicon doping density and current density. Such structures are classified into three regimes according to the mean dimensions of the porous silicon. The mean dimension of the microporous regime is <2 nm; that of the mesoporous regime is 2-50 nm, and that of the macroporous regime is >50 nm. Kleimann [9] produced a macroporous array that was 42 μm wide and 200 μm deep. According to Kleimann's findings, porous silicon etching can be utilized to generate a structure with a high aspect ratio at a lower cost than that associated with deep reactive ion etching (DRIE).
Numerous studies have measured important factors concerning the effects of cell temperature, fuel temperature, and fuel humidity, as well as other factors associated with cell performance [10,11,12,13]. In this work, a resistance temperature detector (RTD) sensor was integrated into the GDL on a bipolar plate to measure the temperature inside a micro fuel cell.
Methodology
Wet etching was applied to produce fuel channels in a micro fuel cell. Dry etching was then used to generate silicon micro-hole arrays (Si-MHA). In this investigation, hole size and depth were controlled. After the Si-MHA were formed, platinum (Pt) was deposited on the surface holes as a catalyst of the fuel cell increasing the conductivity of the silicon. Part of the Pt metal layer was formed as a micro thermal sensor.
Theory of Thermal Sensors and Characteristics of Platinum
As a soft, silvery-white metal, Pt is extremely malleable [14], and its resistance varies linearly over a large temperature range of −260 to 1,000 °C. Even when the ambient temperature exceeds 1,000 °C, it remains stable and does not undergo significant physical or chemical changes. The error range is at a minimum of ±0.06% (or ±0.15 °C) at 0 °C. Notably, Pt cannot be etched by strong acids or alkalis, with the exception of aqua regia. Therefore, Pt is the material of choice for thermal sensors.
The resistance of a general metal is expressed as R = ρL/A, (1) where R is the resistance (Ω), ρ is the resistivity (Ω·m), L is the wire length (m), and A is the wire cross-sectional area (m²). The resistivity of Pt is 1.042 × 10⁻⁹ Ω·m at room temperature. When used in a micro thermal sensor, the temperature coefficient of resistance of Pt varies with the thin-film thickness and ranges between 0.00375 and 0.00385. If the resistance of the RTD varies linearly with temperature, then the relationship between the measured resistance and the temperature change is given by R_t = R_i[1 + α_T(t − i)]. (2) Equation (2) can be rewritten as α_T = (R_t − R_i)/[R_i(t − i)], (3) where R_t is the resistance at t °C, R_i is the resistance at i °C, and α_T is the sensitivity (1/°C) [15,16].
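A small helper illustrating how the linear relation above is inverted to read temperature from a measured resistance; the reference resistance of 100 Ω at 25 °C is an assumed, illustrative value, while the sensitivity equals the value quoted for the fabricated sensor.

```python
def temperature_from_resistance(R_t, R_ref, alpha_T, T_ref=25.0):
    """Invert the linear RTD relation R_t = R_ref * (1 + alpha_T * (T - T_ref)).

    R_t     : measured resistance (Ohm)
    R_ref   : resistance at the reference temperature T_ref (Ohm)
    alpha_T : sensitivity, i.e., temperature coefficient of resistance (1/degC)
    """
    return T_ref + (R_t / R_ref - 1.0) / alpha_T

# Example with an assumed 100 Ohm reference resistance at 25 degC and the reported
# sensitivity of 1.68e-3 / degC; the measured resistance value is illustrative.
print(temperature_from_resistance(R_t=100.92, R_ref=100.0, alpha_T=1.68e-3))
```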
Flow Field Design
Hoogers [17] demonstrated that the performance of a serpentine flow field in a fuel cell was better than that of other flow field configurations (meshed and interdigitated) in some cases, because the fuel (gas or liquid) is driven strongly to flow around the active area of the fuel cell. Hence, a serpentine flow field was applied in the design in this study, as displayed in Figure 1. An N-type, (100)-oriented, double-side polished wafer with a thickness of 525±25 μm was used. After the low-pressure chemical vapor deposition (LPCVD) of Si3N4 on the silicon wafer (5,000 Å thick), one side of the silicon was processed photolithographically. Reactive ion etching (RIE) was then utilized to transfer the pattern in Figure 1, followed by the wet KOH etching process. This process was employed to etch a 450 μm-thick silicon layer. The remaining silicon constitutes the GDL, with a thickness of 50-70 μm and a width of 200 μm. Figures 2a-c depict the details of the process.
Standard Deviation of the Experiment
In this experiment, standard deviation for temperature and resistance are given by: x are particular values, x is the mean of all values, and n is sample size (number of values) [18].
Fabrication
Techniques described elsewhere [7,19] were adopted to design square holes with a side of 10 μm and to form fuel channels with vertical walls. Etching time and current density were important parameters. In the proposed design, square holes of size 10 μm were fabricated by DRIE. Figure 2 depicts the Si-MHA fabrication process flow. After the flow field process was performed as shown in Figures 2a-c, the other side of the wafer was patterned lithographically: 10 μm squares at a pitch of 15 μm covered the defined area, over which a 200 μm-thick GDL was formed (Figure 2d), and the pattern was then transferred into the Si3N4. The KOH wet etching process on (100)-oriented silicon was anisotropic. The Si3N4 was removed from the flow field side of the wafer by RIE after the fuel channel was formed. To ensure that the Si-MHA go through to the flow field, DRIE was applied to produce through-holes. Figure 2e displays the Si-MHA opening into the flow field. A circle 10 μm in diameter was patterned lithographically on top of each flow channel. The pattern (200 μm × 13,140 μm) was transferred using the Si3N4.
After the Si-MHA were produced, the wafer was metallized on the Si-MHA by depositing a layer of Ti/Pt (20 nm/70 nm). The Pt acted as the current collector and micro thermal sensor. Physical vapor deposition (PVD) was employed to deposit the Pt and wet etching was used to produce a conductive layer and micro thermal sensor. A photoresist was adopted as the etching mask, ensuring that the Pt remained on the surface of the Si-MHA. An etching mask was utilized in the lithography process with an exposure process. Figures 2e~h present the process flow in detail. The micro thermal sensor and Si-MHA were fabricated, as shown in Figure 3.
Experimental
In this experiment, the temperature of micro thermal sensor was measured and ranged from 25-45 °C. Hydrogen flowed on the anode side and oxygen flowed on the cathode side. Hydrogen and oxygen flows rates were 10 and 30 SCCM. The humidify parameter was increased from 20 °C to 40 °C. The active area of the fuel cell was 1.82 cm 2 (1.3 cm×1.4 cm). The proton exchange membrane was obtained from Ion Power Co. The Pt loading was 0.5 mg/cm 2 . Figure 4 presents the experimental setup for calibrating the micro thermal sensor, and measuring resistance using a 4230 LCR meter. The frequency of the LCR meter was 1 kHz; the meter used a 4-terminal probe connection. In fuel cell performance tests, the fuel cells were connected to the fuel control system; the electronic load controlled the fuel feed rate, and the experimental setup for measuring fuel cell performance is shown in Figure 5.
In-situ measurement of temperature
Experimental results indicate that temperature was almost linearly related to resistance. The accuracy and sensitivity of the micro thermal sensor were 0.5 °C and 1.68×10⁻³/°C, respectively. The micro thermal sensor accuracy was defined based on the accuracy range (0.5 °C) of the programmable temperature chamber. Figure 6 shows the normalized temperature response of the micro thermal sensor; the sensor was measured three times, and the response curves nearly coincided, indicating high reproducibility (standard deviation 0.037212). The fuel cell temperature during operation was 27 °C, as measured using a thermocouple in contact with the electrode backside, whereas the temperature measured in situ during fuel cell operation was 30.5 °C.
Fuel cell performance
Performances of the fuel cell with 10 µm holes were compared under the following operating conditions: (a) at 15 C (with no humidity) and 20 C, 30 C, 40 C (with humidity); (b) hydrogen and oxygen flows rates at both 10 and 30 SCCM; and (c) without and with a micro thermal sensor at 15 C (no humidity) and a flow rate of 30 SCCM. Figure 7 presents experimental results with 10 µm holes at 15 C (no humidity) and flow rates of 10 SCCM and 30 SCCM. The maximum power density was approximately 9.25 mW/cm 2 , with a flow rate of 30 SCCM. Figure 8 shows experimental results obtained with 10 µm holes at 20 C, 30 C, 40 C (humidity) and a flow rate of 30 SCCM. The maximum power density was approximately 8.13 mW/cm 2 at 20C. Figure 9 compares the performance obtained without a micro thermal sensor with that obtained with a micro thermal sensor on the anode electrode at 15 C (no humidity) and a flow rate of 30 SCCM. The maximum power density of the fuel cell without and with a micro thermal sensor on the anode electrode was 9.37 mW/cm 2 and 2.15 mW/cm 2 , respectively. Notably, the local temperature could be measured because a micro thermal sensor has an insulating layer on both sides, explaining why the insulated area likely decreased the reaction area and fuel cell performance. Table 1 summarizes the experimental results for
Conclusions
Si-MHA were fabricated on a defined area of the flow field. The silicon wafer combined the flow field and the GDL. The silicon holes had 10 μm diameters and were fabricated under various operating conditions. The micro thermal sensor was formed as the catalyst was deposited. Furthermore, the performance of a working fuel cell and its internal temperature were measured. The best fuel cell performance was 9.37 mW/cm² at 502 mV without micro thermal sensors on the anode electrode, at a flow rate of 30 SCCM and 15 °C (no humidification); the anode electrode was integrated with 10 μm Si-MHA in the flow field. The temperature monitored in situ during fuel cell operation was 30.5 °C.
"Engineering",
"Materials Science",
"Physics"
] |
Terahertz Generation in an Electrically Biased Optical Fiber : A Theoretical Investigation
We propose and theoretically investigate a novel approach for generating terahertz (THz) radiation in a standard single-mode fiber. The optical fiber is mediated by an electrostatic field, which induces an effective second-order nonlinear susceptibility via the Kerr effect. The THz generation is based on difference frequency generation (DFG). A dispersive fiber Bragg grating (FBG) is utilized to phase match the two interacting optical carriers. A ring resonator is utilized to boost the optical intensities in the biased optical fiber. A mathematical model is developed which is supported by a numerical analysis and simulations. It is shown that a wide spectrum of a tunable THz radiation can be generated, providing a proper design of the FBG and the optical carriers.
Introduction
Due to a lack of generation and detection instrumentation, the electromagnetic spectrum between infrared light and microwave radiation, traditionally known as the terahertz (THz) gap, has not been fully explored [1]. The application of THz radiation was traditionally limited to astronomy and analytical science.
Recent advances in photonics have laid the groundwork for the realization of THz sources and detectors for applications in biomedical imaging [2] and ultra-fast communications [3]. As THz sources become more readily available, THz technology is being increasingly used in a variety of fields, including information and communications technology, biology and medical sciences, nondestructive evaluation, homeland security, quality control of food and agriculture, global environmental monitoring, and ultrafast computing, to mention a few examples [4]. The wide and crucial applications of THz waves are due to their unique way of interacting with materials. For example, in medical science, the ability of THz waves to probe intermolecular interactions enables them to provide both structural and functional information. Consequently, and considering its safe, accurate, and economical features, THz radiation promises to serve as an alternative to other scanning methods such as high-frequency ultrasound, magnetic resonance imaging, and near-infrared imaging [4].
This promising technology has the potential to change the way many diseases are diagnosed and ultimately cured.
In the past few years, several techniques have been proposed to generate THz waves.Generation of CW and pulsed THz waves have been both investigated.Techniques to generate CW THz waves include quantum cascade laser (QCL) [5], directly multiplied source [6], backward wave oscillator (BWO) [7], germanium laser [8], and silicon impurity state laser [9].Similarly, techniques to generate pulsed THz waves include nonlinear optical source (which can also be utilized to generate CW THz waves) [10], optically pumped THz laser [11], and free electron laser [12].THz generation based on a QCL is realized based on intersubband transitions in quantum wells.Although a QCL can provide strong THz waves, its tunability is limited [5].Generating THz wave utilizing a direct multiplied source is achieved by the mean of submillimetre wave multiplication [6].The multiplication can be done in a biased GaAs crystal.Mainly, this approach is limited to generating THz wave at a low frequency.A BWO source has less impetus from a practical point of view, since it has a stringent requirement for the power supply.In addition, a BWO also has a vacuum tube which is fragile.THz generation based on optically pumped THz laser, on the other hand, has a small efficiency (less than 0.1%) resulting in a significant heat loading.A germanium laser can generate a THz wave with narrow International Journal of Optics linewidth, but it requires liquid helium and a pulsed magnet system making the system costly.A silicon impurity state laser can generate a THz wave with a power up to tens of mW, but it is usually operating at a temperature below 20 K, and the operation is limited to a pulsed mode.A free electron laser can generate a THz wave with a large tunable range, but it has issues such as large size, high cost, and great complexity.On the other hand, the main nonlinear optical methods for generating THz radiation [4] include optical rectification [13], parametric conversion [14], and laser generated plasma filament [15].The optical rectification technique can be used to generate a THz wave in a frequency range from 0.3 to 30 THz, and it is a common device for broadband THz generation at room temperature.The parametric conversion technique is based on the mixing of laser beams to generate a beat frequency that is in the THz range.The frequency of the THz wave is tunable and can operate in the CW regime with narrow linewidth.Generating THz waves using lasergenerated plasma filament technique is achieved by the mean of four wave mixing (FWM) process.An intensive laser input is needed though.
Developing a high-power, low-cost, fast tunable, and high reliable THz source is one of most challenging issues in a THz system.For example, for medical applications, a THz source is required to have a frequency between 0.3 and 3 THz in addition to all other features mentioned above.Among the THz sources currently available, the parametric conversion technique could provide a THz source with a narrow line-width and can operate at room temperature.It is also fast to tune at a relatively low cost [16].However, the phase-mismatching dilemma is an issue that would decrease the generation efficiency.One common way to increase the efficiency is to boost the pumping power by the use of a pulsed source.Unfortunately, this approach is limited by the two-photon absorption [17].On the other hand, utilizing a periodic structure is an effective way to achieve quasiphase matching and thus improves the conversion efficiency.Usually, noncentrosymmetric crystals in free space geometry are employed [13][14][15][16][17].Despite the achievable high generated power, this scheme lacks the advantages of compatibility with fiber and integrated optics and is sensitive to environmental effects.
In this paper, we propose a novel approach to generating a THz wave based on parametric conversion utilizing an optical fiber.Indeed, THz generation in an optical fiber has recently been reported [18].There, the THz generation is achieved based on the photo-Dember effect.However, the achieved tunability is limited.In our proposed work, the optical fiber is electrostatically biased to induce an effective second-order nonlinear susceptibility via the Kerr effect [19].Thus, a wide frequency-tunable range can be achieved given the off-resonance nature of the Kerr effect.A fiber Bragg grating (FBG) is used to phase match the two interacting light waves, thanks to the dispersion properties of the FBG.On the other hand, given the weak nonlinearity of the optical fiber, the generation efficiency is expected relatively low.The efficiency is improved by increasing the biasing electrostatic field.The maximum electrostatic field can be very high and is only limited by the electric field strength of the optical fiber material (i.e., 30 kV/mm for fused silica optical fibers).In addition, the two-photon absorption is negligible in the optical fiber, and thus an intensive optical input can also be used to increase the generation efficiency.In the scheme presented in this paper, a ring resonator, incorporating a polarization beam coupler, is utilized to boost the optical intensities.We note here that, for a practical system, a poled fiber with internal electrostatic field (that can be as strong as the electric field strength of the material) can be used instead of having an external electrostatic field bias [20].The proposed scheme using an optical fiber for THz generation has the advantages of compatibility with other fiber optic devices, simplicity, low cost, and high potential for integration.For instant, the proposed system can find application in in-vivo THz scanning.
The remainder of the paper is arranged as follows.In Section 2, a theoretical model is presented, in which the generation of THz radiation in an electrostatically biased optical fiber is modeled.In Section 3, a numerical simulation is presented.Realistic parameters are selected and are used in the simulation.Finally, a conclusion is drawn in Section 4.
Theoretical Modeling
Consider two optical carriers propagating in an optical fiber, with the fiber mediated by an electrostatic field E_dc. The electric field inside the optical fiber can thus be written as the sum of the electrostatic field and the two optical carriers, where A_1, A_2 and ω_1, ω_2 are the amplitudes and frequencies of the two optical carriers. Here the electrostatic field is considered collinear with the polarization of the propagating light modes. The nonlinear polarization field, generated by a Kerr-type nonlinear medium such as an optical fiber, can be written as [21] P_NL = ε_0 χ^(3) E³, (2) where ε_0 is the free-space permittivity and χ^(3) is the third-order dielectric susceptibility. Substituting (1) into (2), we obtain the nonlinear polarization at the frequencies ω_1, ω_2, and ω_3 = ω_1 − ω_2, which is expressed as P_NL = P_1 e^{j(k_1 z−ω_1 t)} + P_2 e^{j(k_2 z−ω_2 t)} + P_3 e^{j(k_3 z−ω_3 t)}. (3) Here k_i = ω_i n_i/c, i ∈ {1, 2, 3}, and α is the ratio of the ω_3-mode inside the fiber (i.e., the ratio of the fiber to the unguided ω_3-mode cross sections). Note in (3) that all other polarization terms (i.e., the second-harmonic, third-harmonic, and sum-frequency terms) suffer from phase mismatching and can thus be neglected. However, a proper technique will be utilized to phase match the difference-frequency polarization term P_3. The nonlinear polarization P_i can be considered a source of new fields at the frequency ω_i. It then follows that a THz wave at ω_3 can be generated, given that the frequency spacing between the optical carriers ω_1 and ω_2 is in the THz range. However, the generated THz wave cannot be guided by the optical fiber and will diffract into free space.
We first model the generation of the THz wave at ω_3, and then the free-space diffraction effect is taken into account. The nonlinear wave equation governing beam propagation in an optical fiber can be cast into a form involving the electric displacement D, the free-space permeability μ_0, and the speed of light in vacuum c. Substituting (1) and (3) into (5), we obtain the coupled equations for the slowly varying amplitudes of the three waves, where A_3 is the amplitude of the generated THz wave.
As can be seen in (6), the phase-mismatching dilemma would limit the THz wave generation. The phase-matching condition is given by k_3 = k_1 − k_2. (7) We thus propose to utilize an FBG to ensure the phase matching for the THz generation. As shown in Figure 1, the frequencies of the two optical carriers at ω_1 and ω_2 lie outside the reflection band of the FBG, but one of the optical carriers, say ω_1, is close enough to the reflection band and is thus affected by the dispersion of the FBG. It then follows that the effective propagation constant of the optical carrier at ω_1 is given by Eq. (8) [22,23], where k_B = ω_FBG n_eff/c is the propagation constant of the FBG, and ω_FBG, n_eff, and κ are the central frequency, the effective refractive index, and the coupling coefficient of the FBG, respectively. Here δ_1 = (n_1/c)(ω_1 − ω_FBG). Thus, by a proper design of the FBG and the locations of the two optical carriers, the phase-matching condition can be satisfied.
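A sketch of how condition (7) can be checked numerically. The coupled-mode dispersion k = k_B ± (δ² − κ²)^{1/2} is assumed here as a stand-in for Eq. (8), and the refractive indices are placeholders to be replaced by the actual material and modal values, not numbers from the text.

```python
import numpy as np

c = 2.998e8                                   # speed of light (m/s)

# FBG parameters from the numerical example; the refractive indices are assumptions.
f_FBG = 193.36e12                             # FBG central frequency (Hz)
kappa = 400.0                                 # FBG coupling coefficient (1/m)
n_eff = 1.45                                  # FBG effective index (assumed)
n1 = n2 = 1.45                                # optical-carrier indices (assumed)
n3 = 1.45                                     # effective index seen by the THz wave (assumed)

def k_eff_near_fbg(f, n):
    """Effective propagation constant of a carrier outside, but near, the FBG stop band,
    assuming the standard coupled-mode dispersion k = k_B + sgn(delta)*sqrt(delta^2 - kappa^2)."""
    k_B = 2 * np.pi * f_FBG * n_eff / c
    delta = (n / c) * 2 * np.pi * (f - f_FBG)
    return k_B + np.sign(delta) * np.sqrt(delta**2 - kappa**2)

def mismatch(f1, f2):
    """Phase mismatch dk = k1 - k2 - k3 of condition (7) for carriers f1 (near the FBG) and f2."""
    f3 = f1 - f2                               # generated THz frequency
    k1 = k_eff_near_fbg(f1, n1)
    k2 = 2 * np.pi * f2 * n2 / c
    k3 = 2 * np.pi * f3 * n3 / c
    return k1 - k2 - k3

print(mismatch(192.8e12, 188.8e12))            # the carrier pair used in the numerical example
```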
Let us now assume that the THz radiation is generated in the fiber and consequently diffracts into free space. To model the THz radiation propagation, the generation fiber is divided into small segments, each with a length ΔL. It then follows that the diffracted THz radiation generated by one ΔL segment can be described using the Gaussian beam model [24], W(z) = W_0 [1 + (z/z_0)²]^{1/2}, (9) where W(z) is the beam width after propagating a distance z, W_0 = (A_eff/2π)^{1/2}, and z_0 = W_0² ω_3/(2c). Here, A_eff is the effective mode area of the single-mode fiber.
The power of the THz radiation generated by a fiber with a length of L = P × ΔL, collected using a lens of a given radius, is given by Eq. (10), where I_coll is the collected power, z_p is the distance between the pth segment and the lens, and I_p is the power generated by the pth segment.
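A sketch of the segment-wise collection estimate, assuming each 1 mm segment radiates a Gaussian beam and that the fraction intercepted by a lens of radius r_lens is 1 − exp(−2 r_lens²/W(z)²); the effective mode area, segment powers, and distances are placeholder values rather than parameters from the text.

```python
import numpy as np

c = 2.998e8
f3 = 4.0e12                                   # generated THz frequency (Hz)
omega3 = 2 * np.pi * f3
A_eff = 80e-12                                # effective mode area (m^2), assumed value
W0 = np.sqrt(A_eff / (2 * np.pi))             # beam waist from Eq. (9)
z0 = W0**2 * omega3 / (2 * c)                 # Rayleigh range

def beam_width(z):
    return W0 * np.sqrt(1 + (z / z0) ** 2)

def collected_power(segment_powers, segment_distances, r_lens):
    """Sum the fraction of each segment's Gaussian beam intercepted by the lens."""
    fractions = 1 - np.exp(-2 * r_lens**2 / beam_width(segment_distances) ** 2)
    return np.sum(segment_powers * fractions)

dL = 1e-3                                     # 1 mm segments of a 3 cm fiber
z_p = np.arange(1, 31) * dL + 0.05            # distances to the lens (m), illustrative
I_p = np.ones(30)                             # placeholder segment powers (arbitrary units)
print(collected_power(I_p, z_p, r_lens=0.025))
```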
Numerical Simulation
A numerical simulation is performed to evaluate the THz generation. Realistic optical fiber parameters are employed for the simulation; specifically, we consider a 3 cm biased optical fiber. We also assume that the FBG center frequency is f_FBG = 193.36 THz, the bandwidth is Δf = 25 GHz, and the coupling coefficient is κ = 400 m⁻¹. Following the expression in (8) and the condition in (7), a 4 THz wave can be generated in the biased optical fiber by mixing two optical carriers at the frequencies f_1 = 192.8 THz and f_2 = 188.8 THz.
The power of the collected THz radiation is simulated, with the results shown in Figure 2. Here, we numerically solve the wave equations given by (6) for a 3 cm optical fiber. Then, we calculate the power generated by every 1 mm length and use (9) and (10) to calculate the power of the collected THz radiation. In the calculation, a lens with a diameter of 5 cm is assumed. Given these parameters, the collected power is 14.5% of the total generated power.
As can be seen from the numerical simulation, the higher the optical input power, the higher the generated THz radiation.This is because the fiber nonlinearity is relatively weak.We therefore proposed to utilize a ring resonator, incorporating a polarization coupler, to accumulate and boost the optical power.The optical power can hereby be effectively increased, yet using low power optical input sources.The proposed structure is depicted in Figure 3.In the structure, the polarization controllers at the outputs of the laser sources are adjusted such that the optical carriers are directed from arm a to arm b.However, the polarization controller in the ring resonator is adjusted such that the optical carriers are directed from arm c to arm b.Consequently, as both carriers lie within the FBG transmission band, the light power can be accumulated inside the ring resonator.To guarantee a constructive accumulation, a tunable time delay (TD) line is incorporated inside the ring resonator.Using the geometric series expression, one can get the optical power inside the ring resonator, given by where A a and A b are the amplitudes of the two optical carriers at arm a and b, T is the transmission coefficient of the polarization coupler from arm a to arm b, ξ = RL n e jkZ , R is the reflection coefficient from arm c to arm b, L n is the total loss of the ring, Z is the ring length, k is the effective propagation constant, and N is the number of the effective light rotation inside the resonator.
Let us assume T = 0.8, f1 = 192.8 THz, and Z = 3 m. The optical power ratio, defined as the ratio between the optical power inside the ring resonator and the input optical power, is calculated and shown in Figure 4. Here the power ratio is calculated as a function of RLn. As can be seen, the optical power can be boosted by maximizing this parameter, which implies having low losses inside the ring resonator and a strong reflection at arm c of the polarization coupler.
Figure 4: The power ratio between the power at arm b and arm a of the polarization coupler for the optical carrier f1, calculated as a function of the parameter RLn.
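A minimal numerical sketch of this geometric-series accumulation is given below, scanning the power ratio against RLn. The truncated geometric-series field expression and the values of N and RLn are assumptions consistent with the description above, not the paper's exact formula; the round-trip phase is taken as constructive, as the tunable delay line is meant to ensure.

```python
import numpy as np

def power_ratio(T, R_Ln, N, phase=0.0):
    """Ratio of the optical power at arm b (inside the ring) to the input power
    at arm a, modelling the intra-cavity field as a truncated geometric series
    over N round trips.  `phase` is the round-trip phase k*Z, assumed set to a
    multiple of 2*pi by the tunable delay line (constructive accumulation)."""
    xi = R_Ln * np.exp(1j * phase)            # per-round-trip complex factor
    field = T * (1.0 - xi**N) / (1.0 - xi)    # closed form of sum_{n=0}^{N-1} xi^n
    return np.abs(field) ** 2

# Assumed example values; T follows the text, N and the RLn scan are placeholders.
T, N = 0.8, 200
for R_Ln in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"RLn = {R_Ln:4.2f}  ->  power ratio = {power_ratio(T, R_Ln, N):8.2f}")
```

The printed values illustrate the qualitative trend of Figure 4: the closer RLn is to unity (low loss, strong reflection), the larger the intra-cavity power.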
On the other hand, the power accumulation inside the resonator depends on the optical carrier frequency, and the structure shown in Figure 3 employs two optical carriers. To optimize the total optical power inside the resonator and enhance the total power ratio for both optical carriers, a tunable optical time delay (TD) is incorporated. Figure 5 shows the optical power ratio for both optical carriers, f1 = 192.8 THz and f2 = 188.8 THz, versus the length change of the ring resonator. As can be seen from Figure 5, the total optical intensity inside the resonator (the sum of the intensities of the two optical carriers) can be boosted by a suitable choice of the ring resonator length, which can be achieved by controlling the tunable optical time delay.
We note here that several other schemes can be implemented to enhance the power of the generated THz waves. First, an intense pulsed laser input can be utilized as one of the optical carriers to boost the THz radiation, thanks to the weak two-photon absorption (TPA) and the instantaneous nonlinearity of the optical fiber. Second, a THz waveguide can also be utilized to enhance the THz focusing, leading to an improved efficiency in collecting the generated THz radiation.
Although an external electrostatic field has been considered above, this need not be the case. For example, a poled optical fiber, with an internal electrostatic field, could be utilized. We stress that utilizing a poled fiber has substantial advantages. For instance, it avoids bias fluctuations, resulting in the generation of high-quality THz waves. Having an internal electrostatic field also enables important applications such as in-vivo THz scanning. Furthermore, avoiding external circuitry enhances the compatibility and portability of the system.
The frequency of the generated THz radiation can be tuned by controlling the frequency spacing between the two optical carriers. However, as the system is designed to phase match the THz generation at a given frequency (e.g., 4 THz in the above simulation), the generation efficiency is expected to drop at other THz frequencies. To mitigate this effect and to achieve a wide range of frequency tunability, we keep the optical carrier ω1 (which is close to the FBG central frequency) unchanged and tune the optical carrier ω2 (which lies far from the FBG central frequency). Figure 6 shows the numerical simulation of the normalized THz power versus the frequency of the generated THz wave. Here, the same parameters (FBG specification, etc.) as in the simulation of Figure 2 have been assumed. As can be seen, THz radiation can be generated over a large range, from 1 to 30 THz. The maximum efficiency is achieved at 4 THz, while at 30 THz the drop in efficiency is only 22%. However, the efficiency is dramatically reduced for frequencies below 4 THz. This can be explained by noting that, for a frequency above 4 THz, the carrier at ω2 is shifted away from the FBG central frequency; the dispersion of the FBG therefore only affects the carrier at ω1, and the phase matching condition remains satisfied. For a frequency below 4 THz, however, the optical carrier at ω2 is shifted towards the FBG central frequency; the dispersion of the FBG then affects both carriers, and the phase matching condition is no longer satisfied.
Conclusion
We have proposed and theoretically investigated a novel approach to generating THz radiation in an optical fiber.
An external electrostatic field is applied to the optical fiber to induce a second-order optical nonlinearity via the Kerr effect. Two optical carriers are introduced into the optical fiber, and THz radiation can thus be generated by means of difference frequency generation. An FBG is utilized as a strong dispersive element to achieve the phase-matching condition and increase the generation efficiency. Furthermore, a ring resonator incorporating a polarization coupler (PLC) is utilized to boost the optical power, which results in an increase in the THz power. The generated THz wave can cover a wide frequency range for a given FBG. The key to achieving this large tunable range is that one optical carrier is placed far away from the center frequency of the FBG. When the frequency of this optical carrier is tuned, the frequency of the generated THz wave is tuned, but the phase-matching condition is preserved over the tunable range, ensuring a relatively constant THz power. We finally note that a poled optical fiber with an internal electrostatic field can be used in lieu of the optical fiber with an external electrical bias, which enhances the stability, compatibility, and portability of the system.
Figure 1: The relationship between the reflection spectrum of the FBG and the two optical carriers.
Figure 2: The power of the collected THz radiation utilizing a 5 cm diameter lens.
Figure 5: The power ratio for the two optical carriers f1 and f2 as a function of the ring resonator length.
Figure 6: The normalized average THz power versus the frequency of the generated THz wave. | 4,818.8 | 2012-09-09T00:00:00.000 | [
"Physics"
] |
Characterization of high pathogenicity avian influenza H5Nx viruses from a wild harbor seal and red foxes in Denmark, 2021 and 2022
Abstract In 2021 and 2022, clade 2.3.4.4b H5Nx high pathogenicity avian influenza viruses were detected in one harbor seal and in one adult and three fox cubs in Denmark. The viruses were closely related to contemporary viruses found in Europe, and some had obtained amino acid substitutions related to mammalian adaptation. Notably, the virus distribution appeared to have been different in the infected fox cubs, as one exclusively tested positive for the presence of HPAIV in the brain and the other two only in the lung. Collectively, these findings stress the need for increased disease surveillance of wild and farmed mammals.
| MAMMALIAN CASES IN DENMARK
In addition to wild and domestic birds, many mammals have been affected by high pathogenicity avian influenza virus (HPAIV) in recent seasons. In 2021 and 2022, five mammals tested positive for clade 2.3.4.4b H5Nx HPAIVs in Denmark. The first mammalian case was an adult male harbor seal (Phoca vitulina), found dead in September 2021 on a beach at Southwest Funen (Figure 1). The seal was in a state of advanced decay and was assessed to have been dead for up to 1 week at sea. At necropsy, the seal was found emaciated. The heart and respiratory organs appeared unaffected, whereas the abdominal organs were too decayed for thorough examination. The organs were unsuitable for histology. The organs were also tested for the presence of pathogenic bacteria and morbillivirus as previously described,1 and none tested positive. The presence of clade 2.3.4.4b HPAIV H5N8 was confirmed in the lung of the seal by real-time RT-PCR (rRT-PCR) and sequencing as previously described2 (Table 1).
In 2022, HPAIV H5N1 was detected in one adult red fox and three red fox cubs (Vulpes vulpes) by rRT-PCR and sequencing.2 Brain and/or lung tissues from the four foxes were collected for virological and histopathological analyses. The adult fox was found dead on January 17, 2022, close to a fox pit at Djursland in Eastern Jutland, where HPAIV H5N1 was detected in the lung (Table 1). On April 28, 2022, three fox cubs were found dead in another location, 77 km southwest of Djursland, next to a fox pit in Odder municipality. All three fox cubs had been observed alive 3 days prior. HPAIV H5N1 was detected by rRT-PCR in brain tissue from one of the fox cubs and in the lung of the other two cubs. The rRT-PCR analysis was repeated with the same results. The carcass of a bird, most likely a common scoter (Melanitta nigra), was found next to the fox cubs. Unfortunately, the carcass was discarded before any samples could be collected. Macroscopic findings at necropsy of the foxes included pulmonary edema and consolidation, and the livers were enlarged with congestion. Pronounced emphysema in the cranial parts of the lung and hepatic steatosis were also detected in the fox cubs. Based on size and weight, the age of the fox cubs was estimated to be 4-5 weeks. The adult fox was in poor body condition, while the body conditions of the fox cubs were within the normal range. Pneumonic lesions in all HPAIV-positive foxes were revealed by histopathology.
In the adult fox, the lesions were dominated by fibrinous to necrotizing pneumonia (Figure 2A), while the three fox cubs revealed varying degrees of fibrinous to interstitial pneumonia.In addition, influenza A virus (IAV) nucleoprotein (NP) antigen was demonstrated in the lung of the adult fox and in one of the cubs by immunohistochemistry using a mouse monoclonal antibody against the IAV NP protein as primary antibody (HYB 340-05, www.ssi.dk/antibodies)(Figure 2B).No significant lesions were detected in other organs (kidney, small intestine, liver, and brain); however, postmortem decay of the organs made the histopathologic evaluation difficult.Additionally, all four foxes were assessed for infection with pathogenic bacteria in the liver and lungs and morbillivirus in lungs, spleen, and for the cubs also in brain. 1 The adult fox was furthermore tested for Aleutian mink disease virus (AMDV). 3No pathogenic bacteria or viruses, other than HPAIV, were identified in any of the foxes.The HPAIV H5N1 positive adult fox and fox cubs were investigated as part of the 2022 wild mammal disease surveillance program in Denmark.In 2022, the presence of IAV was investigated in further five foxes, a European badger and a European polecat that were found dead in nature, as well as in 35 foxes, 2 common raccoons, 1 house marten, 1 wolf, 1 European badger, 1 common raccoon dog, and 1 European polecat that were apparently healthy and killed due to regulation or killed in road traffic collisions.IAV was not detected in any of these mammals.
The viruses from the seal and all four foxes were full genome sequenced as previously described4 and compared to sequences of related and contemporary viruses from the GISAID EpiFlu™ database (http://www.gisaid.org). Full-genome sequences were generated for the viruses found in red fox cubs 1 and 3, while it was only possible to generate partial genome sequences for the viruses in the harbor seal and red fox cub 2.
FIGURE 1: Map of Denmark showing where the HPAIV-positive harbor seal (blue dot), adult fox (red dot marked with "1"), and three red fox cubs (red dot marked with "2") were detected. The date of observation is also indicated next to the location site. The harbor seal was found at a beach on Southwestern Funen. The adult fox and fox cubs were all detected on the eastern part of Jutland.
Analysis of the amino acid sequences revealed substitutions related to adaptation to mammals (Table A1). Notably, the PB2-E627K substitution was present in the viruses found in the lung of the harbor seal and the lung of one of the fox cubs (A/red fox/Denmark/08658-3.02-2/2022) (Table A1). This mutation has been linked to mammalian adaptation and has in recent years been detected in several HPAIVs from infected mammals in Europe.6
| DISCUSSION
Here, we describe the first documented mammalian infections with clade 2.3.4.4b H5Nx HPAIVs in Denmark. The viruses detected in the fox cubs were genetically closely related; however, it is unclear whether they were infected by eating the same infected bird or by virus transmission among the cubs. It could also be hypothesized that the fox cubs were infected by their mother, although no sick or dead vixen was observed in the surrounding area. Similarly, the analyses performed on the HPAIV-infected seal also strongly indicated infection with HPAIV to be the cause of death.
Interestingly, the HPAIV detected in the Danish harbor seal was most similar to viruses detected in two harbor seals in the German part of the North Sea found dead 1 month earlier.5 This could indicate that these seals either had a common source of infection or that there could potentially have been seal-to-seal transmission. During a previous outbreak of H10N7 AIV in 2014-2015, there was clear evidence of transmission among seal populations from both the Baltic Sea and the North Sea.8 However, it is important to note that the genome was only partially generated for the HPAIV found in the harbor seal in Denmark.
Pneumonic lesions have been associated with HPAIV infection in other mammals, including ferrets, cats, and dogs,9-12 in agreement with the lesions that were detected in the seal and three of the foxes. In this study, no evident signs of lung infection were revealed in the seal. Previous investigations have shown that HPAIV induces central nervous system infections in mammals and report that the viral load is higher in the brain than in the respiratory tract.5,13,14 Notably, we were able to detect viral RNA only in the brain of one red fox cub, whereas the remaining fox cubs were virus positive only in the lung. It is uncertain whether the disparities in virus distribution reflect variation in organ tropism or differences in infection progression, transmission route, and/or dose of exposure. While HPAIV is commonly neurotropic among mammals, our results stress the importance of including multiple organs when testing for infection with HPAIV.
Although AIVs are known to infect many mammalian species, there has been increasingly extensive mortality among wild mammals associated with HPAIV since 2020.6 Multiple spillover events of clade 2.3.4.4b HPAIVs to both marine and terrestrial mammals have been reported across multiple countries. Many of these spillover events have been onto mammalian species that had never previously been affected by HPAIV. The increasing number of detections of mammalian HPAIV cases with genetic markers of mammalian adaptation is worrisome, as it may lead to a higher risk of human infections, and it calls for enhanced One Health-based AIV surveillance including both wild and farmed mammals.
Phylogenetic analysis identified the viruses as belonging to clade 2.3.4.4b of the Goose/Guangdong lineage. The H5N1 viruses from the cubs and the adult fox belonged to two different genotypes, -02/2021-like and A/Eurasian Wigeon/Netherlands/1/2020-like, respectively (Figures 3 and A1) (data on the genotyping are not shown). Comparing the available genome sequences, the viruses detected in red fox cubs 1 and 2 were 100% identical at the amino acid level, whereas they differed from red fox cub 3 by six nucleotides, of which four resulted in amino acid substitutions (PB2-E627K, HA-R456G [H5-numbering], NP-E454K, and NP-R485G). Viruses from the adult fox and the fox cubs were all closely related to viruses found in contemporary wild birds from Denmark, while the virus from the seal was most closely related to a virus found in a seal in the German part of the North Sea in August 20215 (Figures 3 and A1). While there have been recent reports of clade 2.3.4.4 H5N1 HPAIV in several species of wild mammals, most of the affected species have been seals and foxes,7 which has also been the case in Denmark. Although other causes of death cannot be completely ruled out, we postulate that the most probable explanation was infection with HPAIV. This assumption is strongly supported by the lung lesions consistent with HPAIV infection, the molecular detection of the virus by rRT-PCR and sequencing, and the detection of IAV antigen in the lungs by immunohistochemistry, combined with no other obvious cause of death having been identified. The genetic analyses combined with the epidemiological data indicate that there was likely no transmission of HPAIV between the adult fox and the cubs, since the viruses were of two different H5N1 genotypes.
TABLE 1: Summary of the virological results and histopathological examinations of the mammals infected with HPAIV.
FIGURE 2: Lung tissue from two red foxes with pneumonia consistent with influenza infection. The lungs tested positive for HPAIV by rRT-PCR. (A) Adult fox with fibrinous to necrotizing lesions. Clots of fibrin (arrowhead) in alveoli and lining the terminal bronchi, intermixed with small necroses (star). Hematoxylin and eosin, bar 100 μm. (B) Fox cub with non-suppurative interstitial to fibrinous pneumonia with multiple cells staining positive for influenza virus (dark brown/arrowhead) by immunohistochemistry using a mouse monoclonal antibody against influenza A virus NP, bar 50 μm.
FIGURE A1: Maximum likelihood phylogenetic trees inferred on the gene segments polymerase basic 2 (PB2), polymerase basic 1 (PB1), polymerase acidic (PA), nucleoprotein (NP), neuraminidase (NA) subtype N1, NA subtype N8, matrix protein (MP), and non-structural protein (NS). The mammalian viruses found in Denmark were compared to H5 HPAIVs from wild birds and mammals detected in other European countries. Viruses with a high genetic similarity determined by BLAST on EpiFlu™ GISAID (www.gisaid.org) were also included in the analysis. The trees were estimated using IQ-Tree version 2.0.3. HPAIVs detected in Denmark in the same period and a representative set of H5 sequences obtained from the EpiFlu™ GISAID database were included in the analysis. Only bootstrap values of ≥70 are shown. The scale bar indicates the nucleotide substitutions per site.
Jiao P, Tian G, Li Y, et al. A single-amino-acid substitution in the NS1 protein changes the pathogenicity of H5N1 avian influenza viruses in mice. J Virol. 2008;82(3):1146-1154. doi:10.1128/jvi.01698-07
Allen JE, Gardner SN, Vitalis EA, Slezak TR. Conserved amino acid markers from past influenza pandemic strains. BMC Microbiol.
TABLE A1: Mutations observed in the eight gene segments of the HPAIV viruses found in a seal and foxes in Denmark. | 2,809.8 | 2023-10-01T00:00:00.000 | [
"Environmental Science",
"Medicine",
"Biology"
] |
Phenotypic plasticity of Myzus persicae (Hemiptera: Aphididae) raised on Brassica oleracea L. var. acephala (kale) and Raphanus sativus L. (radish)
The study of variability generated by phenotypic plasticity is crucial for predicting evolutionary patterns in insect-plant systems. Given sufficient variation for plasticity, host race formation can be favored and maintained, even sympatrically. The plasticity of size and performance (assessed by the lifetime fitness index rm) of six clones of Myzus persicae was tested, with replicates allowed to develop on two hosts, kale (Brassica oleracea var. acephala) and radish (Raphanus sativus). The clones showed significant variability in their plasticity. Reaction norms varied through the generations, and negative genetic correlations, although not significant, tended to increase with the duration of host use. The lack of plasticity in lifetime fitness among generalist clones occurred as an after-effect of its highly plastic determinants. Significant morphological plasticity in response to the host used was observed, but no variation in the plastic responses (GxE interaction) was detected. Strong selection for a larger size occurred among individuals reared on radish, the most unfavorable host. Morphological plasticity in general body size (in a multivariate sense) was not linearly related to fitness plasticity. These observations suggest a high potential for the evolution of host divergence, favoring host race formation.
Introduction
Phenotypic variation in natural populations is influenced by genetic variability but is also environmentally dependent (Schlichting, 1986;Scheiner, 1993;Via et al., 1995). Such environmentally-modulated plasticity may have an important role in the evolution of insect/plant interactions (Schlichting, 1986;Mopper, 1996). The plasticity of an insect in relation to its host allows the production of a superior phenotype without major genetic changes (Via, 1990;Thompson, 1991). Several phytophagous insects are known for their ability to use hosts that are morphologically and chemically distinctive (Gold, 1979;Fry, 1989). The performance of plastic organisms can be explained ecologically by their capacity to make physiological, morphological and behavioral adjustments in response to the nutritional, toxicological and anatomical features of their host plants.
Morphological plasticity, including the absence of a response to the environment, does not necessarily mean the lack of an ability to explore multiple host plants. Morphological effects may be mediated through marked plasticity of physiological characters (Schlichting, 1986). In some cases, morphological plasticity occurs as a byproduct of developmental systems with a maladapted archetype (Schlichting and Pigliucci, 1995), and therefore does not represent a mechanism by which relative fitness is maintained in response to environmental variation (Thompson, 1991).
Given sufficient variation in the plastic responses for a particular trait (variation in reaction norms), plasticity can evolve independently of the mean trait (Scheiner, 1993). However, if the pressures on each host are increased or maintained, the costs for plasticity can become very high and lead to trade-offs (Via and Lande, 1985;Sterns, 1989). Trade-offs constitute the fundamental basis for host race formation (Joshi and Thompson, 1995;Mackenzie, 1996). Therefore, knowledge of the developmental patterns underlying phenotypic variation is crucial for understanding evolutionary mechanisms, particularly specialization.
This study describes the morphological and physiological plastic responses of Myzus persicae clones under distinct environmental regimes imposed by the host plant characteristics. The effects of each environment on the genetic structure of the population and the relevance of phenotypic plasticity to the evolution of host race formation are discussed.
Material and Methods
Collection and maintenance of clones
Adult female Myzus persicae were collected at several locations in the city of Uberlândia (MG, Brazil), from February to April 1998, to establish laboratory parthenogenetic clones. These clones were collected from distinct hosts (broccoli, kale, radish, arugula and Chinese cabbage) and kept in order to ensure genetic variability. Stock cultures were maintained on Chinese cabbage leaves in plastic containers in a refrigerator (15 °C). Chinese cabbage was selected as the host for stock cultivation because of its easy conservation and extended durability under refrigeration.
Physiological plasticity
The experiments were done during March, 1998, in experimental gardens at the Federal University of Uberlândia. Brassica oleracea L. var. acephala (kale) and Raphanus sativus L. (radish) were cultivated as the hosts to be tested. Adult females from each clone of M. persicae, were placed individually in clip cages attached to leaves of the hosts in order to initiate parthenogenetic descendents. From each generation produced, some individuals were chosen at random to establish parthenogenetic lineages. These individuals were monitored for their developmental time (from birth to the first reproductive day) and daily fecundity (number of nymphs produced each day). Alata females were discarded. Three generations were analyzed, giving a total of 30 individuals per clone from each host.
A lifetime fitness index was calculated using the formula rm = (0.745/dt) × loge(ft), where dt is the developmental time and ft is the number of nymphs produced by the time the parent aphid is exactly 2dt days old (Wyatt and White, 1977). The values for rm, daily fecundity and developmental time were examined using analysis of variance in order to obtain the species' plastic potential and its variability among clones (Via and Lande, 1985). The causal components of variation were estimated from the mean square values in a two-way mixed-model ANOVA performed across environments (hosts). The use of cloned individuals can facilitate the estimation of the genetic and environmental components of plasticity, although it does not allow the partitioning of genetic variances into additive and non-additive effects (Via, 1990). Norms of reaction were constructed for each character analyzed, and genetic correlations were assessed to verify whether there were trade-offs (Falconer, 1989).
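As a quick illustration of the Wyatt and White (1977) index, the snippet below computes rm for a few hypothetical development times and fecundities; the numbers are made up for demonstration and are not data from this study.

```python
import math

def intrinsic_rate(dt_days, ft_nymphs):
    """Lifetime fitness index r_m = (0.745 / dt) * ln(ft), where dt is the
    pre-reproductive development time (days) and ft is the number of nymphs
    produced by the time the parent is 2*dt days old (Wyatt & White, 1977)."""
    return (0.745 / dt_days) * math.log(ft_nymphs)

# Hypothetical example values
for dt, ft in [(7.0, 25), (8.5, 30), (10.0, 18)]:
    print(f"dt = {dt:4.1f} d, ft = {ft:3d} nymphs -> rm = {intrinsic_rate(dt, ft):.3f}")
```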
Morphological plasticity
Aphids were preserved in 70% ethanol prior to being mounted for morphometric analysis under a microscope equipped with a micrometer. Five morphological features were measured in 15 adult individuals from each clone from both environments (hosts). These characters were: ultimate rostral segment length (UR), distance between the siphunculi (SS) and lengths of the right antennal segment III (A), hind tibia (HT), and siphunculus (S).
Principal Component Analysis was used to assess the nature and magnitude of variation in the morphological characters, and an index of general body size (in a multivariate sense) was then estimated using a correlation matrix of the original characters (Manly, 1994). The plasticity and causal components of variation were analysed as described above for the physiological characters. The genetic correlation for general size across environments was estimated using the scores of the first principal component. The phenotypic and genetic correlations between physiological and morphological characters were also assessed in order to determine the extent to which two particular characters were influenced by the same genes (Falconer, 1989). These correlations were estimated using the Pearson correlation coefficient. All statistical analyses were done using the SYSTAT for Windows software package, version 9.0 (Systat, Evanston, IL, USA).
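A minimal sketch of how such a general body-size index can be extracted from the five morphometric traits via a correlation-matrix PCA is given below. The trait values are randomly generated placeholders, and scikit-learn is assumed to be available; the original analysis was done in SYSTAT.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: 30 aphids x 5 traits (UR, SS, A, HT, S), arbitrary units
traits = rng.normal(loc=[0.10, 0.25, 0.40, 0.55, 0.20],
                    scale=[0.01, 0.03, 0.05, 0.06, 0.02],
                    size=(30, 5))

# Standardizing the columns makes the PCA operate on the correlation matrix
z = StandardScaler().fit_transform(traits)
pca = PCA(n_components=2).fit(z)

size_index = pca.transform(z)[:, 0]   # PC1 scores = general body-size index
print("variance explained by PC1:", pca.explained_variance_ratio_[0])
print("PC1 loadings (similar signs/magnitudes suggest a size axis):",
      pca.components_[0])
```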
Plasticity of fitness and its components
The average values and standard errors for the traits studied on each host are given in Table 1. Analysis of variance showed a significant difference in average performance (genetic variability) among genotypes, as assessed by the lifetime fitness measurements in both environments (F = 5.581, p < 0.0001). Clones responded phenotypically to the environmental conditions, and this source of variation explained most of the total variance (F = 59.854, p < 0.0001). Significant variability in quantitative plasticity among genotypes (genotype x environment interaction) was also present, indicating a different response for each genotype (F = 7.474, p < 0.0001) (Table 2).
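The genotype-by-environment partition described above can be reproduced with a two-way ANOVA. The sketch below uses statsmodels on a toy data frame (clone, host, rm) with fabricated values and a fixed-effects simplification of the mixed-model analysis described above; it stands in for the SYSTAT analysis actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)

# Fabricated example data: 6 clones x 2 hosts x 5 replicates
rows = []
for clone in range(1, 7):
    for host in ("kale", "radish"):
        for _ in range(5):
            rows.append({"clone": clone, "host": host,
                         "rm": rng.normal(0.30 if host == "kale" else 0.25, 0.03)})
df = pd.DataFrame(rows)

# Two-way ANOVA with clone, host, and the clone x host (G x E) interaction
model = smf.ols("rm ~ C(clone) * C(host)", data=df).fit()
print(anova_lm(model, typ=2))
```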
The reaction norms indicating the direction of variation (Figure 1) confirmed the variability and demonstrated the occurrence of trade-offs at the population level. Thus, the clones with the best performance on kale were, respectively, those with the worst fitness indices on radish. For other clones, the performances on both hosts were similar. The fitness values fluctuated through the generations, and no consistent pattern of variation was detected among clones on the two hosts. Such instability modified the reaction norm diagrams over time, mainly because of the increases in total variability (Figure 2). The relative contributions of the components of variation to each generation indicated by ANOVA showed a decrease in the variability caused by environmental conditions, followed by an increase in variability due to genetic and genetic x environment influences (Table 3). Fecundity was significantly higher among individuals reared on radish than among those reared on kale (F = 12.013, p = 0.001). On the other hand, the developmental period was longer on kale than on radish (F = 35.393; p < 0.0001). Some clones did not show plasticity for these components (Table 1).
Plasticity of morphological traits
All morphological traits were highly correlated, so a multivariate approach using PCA was satisfactory for reducing the dimensionality of the data (Manly, 1994). Since the signs and magnitudes of the loadings on the first principal component were very similar, this component was interpreted as an index of general size, explaining 54.6% of the total variation (Table 4). Two-way ANOVA of the scores of the first principal component revealed no genetic variability among the clones and no significant interaction between clones and hosts (Table 5). However, environment (host) had a significant effect and explained most of the total variation in general size. All of the clones, except for one, tended to be larger when reared on radish than on kale.
Genetic and phenotypic correlations
The genetic correlation between general size and environments was not significant (r = 0.126, p = 0.813), nor was size correlated with fecundity (r radish lineages = 0.975, p = 0.273; r kale lineages = 0.596, p = 0.212). Size was also not correlated with fitness in a specific environment (r radish lineages = 0.659, p = 0.155; r kale lineages = -0.344, p = 0.505). Phenotypic correlations between size and fecundity and between size and fitness were significant for individuals raised on kale but not for those raised on radish.
Plasticity of host utilization
Individual genotypes from the populations studied varied in their physiological responses according to the environmental conditions. The insects showed significant phenotypic plasticity for fitness when B. oleracea var acephala and R. sativus were used as hosts. There was also considerable variation in plasticity associated with the amount and direction of responses among genotypes. For more host-specialized genotypes, the cost of plasticity, i.e. the cost of maintaining the genetic and cellular machinery necessary to be plastic (Scheiner, 1993) is very high, and results in a gradual tendency toward specialization (Rausher, 1988). Thus, an increase in the efficiency with which one host is used leads to a decrease in the efficient use of an alternative host. The gradual loss in the ability to use several hosts associated with the increased adaptation to a particular host indirectly expresses the selection for host races or biotype formation (Rausher, 1988).
The specialization of phytophagous insects generally depends on the ability to colonize a new host and on changes in the life history parameters of the population in a new environment. These factors may make gene flow between the original population and the newly founded population difficult (Futuyma and Moreno, 1988). Barriers to gene flow could not be investigated in M. persicae since this species reproduces only asexually at the latitude where the study was conducted. On the other hand, the conditioning of host choice, a process in which aphids tend to choose the host on which they were raised (Guldemond, 1990), could magnify the divergences between biotypes originating from disruptive selection (Pereira and Lomônaco, 2001).
Trade-offs in performance have been examined in the current paradigm to explain host specialization (Rausher, 1984). The lack of a significant negative genetic correlation for performance for the population studied does not mean that trade-offs were absent. Even small negative genetic correlations among traits simultaneously selected in distinct environments can promote evolutionary change (Rausher, 1988). Environmental modification can also produce positive genetic correlations that may counterbalance negative correlations (Service and Rose, 1985), and antagonistic pleiotropy may also be present without any detectable effect (Fry, 1993). The absence of negative genetic correlations may also mean that evolution of reaction norms is still in progress (Via and Lande, 1985).
The genetic correlation between the same traits in the two environments measures the degree of genetic independence (additivity) between these characters (Falconer, 1989) and allows the evaluation norms of reaction modification over time (Via, 1993). Nevertheless, this procedure does not assess the absence or presence of shared genetic control (independence of loci) since it includes the combined effects of allelic sensitivity (Schlichting andPigliuci, 1993, 1995). Both of these models of gene-environmental interaction (e.g. pleiotropic and epistatic interactions) can contribute to the expression of plastic responses (Scheiner, 1993). Regardless of the mechanism responsible for generating fitness plasticity in M. persicae, the population is assumed to have sufficient variability in the character itself and in its ability to show plastic responses.
The presence of more generalist genotypes, which possess much more flexible functional properties, indicates that the costs of plasticity are not always high enough to curtail host use. To some extent, generalist genotypes were able to maintain a coherent metabolic integration on both hosts without substantially compromising their fitness output or survival. Since the overall fitness of a genotype results from the integration of several characters, a less variable fitness index across environments can be a consequence of the genotype's highly plastic determinants. Indeed, the fitness components were very plastic across environments. The developmental period and fecundity varied between environments and across generations. For most of the M. persicae clones analyzed, a greater fecundity on radish was not accompanied by greater fitness on this host when compared with the fitness values on kale. This appears to be a consequence of the high mortality rates of the aphids when raised on radish. High mortality is therefore compensated for by high fecundity among generalists, which may be adopting a strategy to maximize descendant survivorship (Dixon and Wellings, 1982). When plasticity contributes positively to fitness, it can be considered adaptive, and it constitutes an important advantage in exploiting heterogeneous environments (Leclaire and Brandl, 1994).
Plasticity of size
The clones responded phenotypically to host quality. Although significant differences in size were observed, variation in the plastic responses (genetic x environment interaction) was not detected since all clones tended to be larger when reared on radish.
The variation in size among individuals reared on radish was smaller than among those raised on kale. This reduced variability indicated strong selection for size. Yet, why should survival on radish be conditioned to a specific size? If the costs of survivorship on radish are high, only larger individuals would be able to survive and reproduce. According to Dixon (1985), under stable conditions, fecundity and the size of aphid nymphs are positively correlated with their mother's size, although adverse conditions may sometimes alter this correlation (Grüber and Dixon, 1988).
An alternative hypothesis to explain the larger size of individuals reared on radish is that larger individuals also have larger feeding segments. In the particular case of aphids, several aspects of their hosts, such as the structure and abundance of trichomes on the leaf surface (radish leaves have more trichomes than kale), are related to their modes of locomotion and the dimensions of the feeding apparatus (Moran, 1986). Constraints, such as the impact that the depth of the phloem places on birth size, have also been postulated as determinants in the selection for body size in aphids (Dixon et al., 1995). Therefore, if a larger ultimate rostral segment (URS) is necessary to increase fitness on radish, the smallest aphids would not be expected to survive. Characteristics that help an organism maintain its fitness are likely targets for selection (Via and Shaw, 1996). However, selection may not be operating on one character (URS) alone but on the whole phenotypic morphological expression of the organism (e.g., body size in the allometric sense).
As pointed out by Trussel (1996), caution must be exercised in evoking morphological plasticity as being responsible for microevolutionary change. Nevertheless, morphological divergence may have an important role in the mechanisms of sympatric speciation (Futuyma and Moreno, 1988).
Genetic and phenotypic correlations
Genetic correlations estimate the degree of independence of the expression of traits in two environments (Via and Lande, 1985). Thus, if characters are being influenced by an environmental stimulus, the phenotypic/genetic correlations among these characters would also be expected to change across environments (Schlichting, 1986). Host quality thus clearly influences the expression of fitness and size in M. persicae. Changes in the magnitude and direction of the genetic correlations appear to indicate different genetic mechanisms of character determination. This new genetic architecture may be caused by the expression of a new set of genes or by the differential expression of the same genes, with pleiotropic effects on the characters measured (Holloway et al., 1990). Changes in the genetic correlations between traits in distinct environments have been reported in several studies (Rausher, 1984; Via, 1984a,b; Leclaire and Brandl, 1994).
The positive, significant phenotypic correlations between the characteristics analyzed among aphids raised on kale contrasted with the insignificant correlations among aphids raised on radish. This finding indicates that size and fitness did not covary uniformly across environments and reinforces the possibility that the genetic mechanisms controlling these traits are distinct, or at least not linearly related.
The environmental contribution to plasticity is thus not systematic for all characteristics, and genotypic effects may contribute to this diversification. Since the influence of allele sensitivity and loci independence cannot be separated, differences in at least one of these mechanisms among the genotypes would be sufficient to produce nonlinear correlated responses. These results indicate that host divergence provides a wide scope for evolution and favors the formation of host-specific races of M. persicae for kale and radish. | 4,096.4 | 2003-01-01T00:00:00.000 | [
"Biology"
] |
Molecular dynamics of liquid Zirconia: Effect of Pressure
This paper examined the influence of pressure in the range of (0-20) GPa on the self-diffusion coefficients of zirconium and oxygen in cubic zirconia by molecular dynamics (MD) simulations at a temperature of 3000 K using 96-1500 particles. The data obtained showed that, for 96 particles, the self-diffusion coefficient decreased from 1.60 × 10^-4 Å^2/ps to 0.76 × 10^-4 Å^2/ps between 0.0 GPa and 20.0 GPa. As the pressure is increased, the self-diffusion coefficients of zirconium and oxygen atoms decrease. In particular, in the region of high pressure, the decrease in the self-diffusion coefficient of oxygen is nonlinear. This might be a result of an instability associated with large systems, where one oxygen atom moves over the maximum potential of another oxygen atom. This is consistent with previous MD studies of oxygen in liquid SiO2. The pair correlation function was found to increase with pressure. These thermophysical properties find practical applications in the electro-ceramic industry.
Zirconia (ZrO2) has been found to be useful in the manufacture of electro ceramics, ceramic tiles, ceramic sensors, medical devices, refractory materials and paint pigments. This has attracted the attention of both experimental and theoretical investigations over the past few decades. In recent times, a lot of work has been carried out on the production of ZrO2 (Hrovat et al., 2003; Blumenthal, 1998; Agrawal and Sudhakar, 2002; Hannink et al., 2000; Reynen et al., 1981; Dodd et al., 2001; Paola et al., 2007). Other important features of zirconia include the fact that, at low pressure, it crystallizes in three phases: monoclinic (P21/c) below 1400 K, tetragonal (P42/nmc) between 1400 and 2570 K, and cubic (Fm3m) between 2570 and 2980 K (Oliveira et al., 2001; Murty et al., 1994). It is also a very hard and stable material.
It is important to note that the dynamics in liquid materials is governed by atomic diffusion, which in turn describes the single-particle transport property in the molten state. To gain a better understanding of this property, a series of studies have been carried out. For example, the diffusion property of zirconia was studied by Pelleg (2016), where the self-diffusion of oxygen in zirconia melt was reported. Also, self-diffusion coefficients of zirconium were measured between 1773 K and 1373 K using radioactive Zr-95 as a tracer (Kidson and McGurn, 1961). Suarez et al. (2008) performed a sintering kinetics study on cubic zirconia and reported the cation diffusion of Zr between 1473 K and 1673 K. The oxygen-18 gas-solid exchange technique was also used to study the diffusion of oxygen in monoclinic zirconia, and the self-diffusion of oxygen was calculated (Keneshea and Douglass, 1971). However, to the best of our knowledge, there is no information on the pressure dependence of the self-diffusion coefficients of zirconium and oxygen in liquid zirconia. Therefore, in this study we used molecular dynamics simulations to study the influence of pressure in the range of (0-20) GPa on the self-diffusion coefficients of zirconium and oxygen in cubic zirconia.
MATERIALS AND METHODS
The face-centered cubic crystal structure of zirconia (ZrO2) was used for our simulations. We started by first converting the primitive cell of bulk zirconia (consisting of one zirconium (Zr) atom and two oxygen (O) atoms) to a unit cell of four Zr atoms and eight O atoms. This conventional unit cell is repeated twice along the A, B and C axes to create a 2X2X2 cubic supercell of bulk ZrO2. The resulting supercell now contains 96 particles (32 Zr atoms and 64 oxygen atoms). Similarly, the unit cell is repeated three, four and five times along A, B and C axes to respectively generate 3X3X3, 4X4X4, and 5X5X5 cubic supercells consisting of 324, 768 and 1500 particles.
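A sketch of this supercell construction is shown below using ASE as a generic stand-in for the builder actually used (the paper used the ATK-VNL tools). The fluorite setting and the lattice constant of about 5.1 Å are assumptions for illustration only.

```python
from ase.spacegroup import crystal

# Conventional cubic (fluorite, Fm-3m) ZrO2 cell: 4 Zr + 8 O = 12 atoms.
# The lattice constant is an assumed illustrative value.
a = 5.1
unit_cell = crystal(symbols=("Zr", "O"),
                    basis=[(0.0, 0.0, 0.0), (0.25, 0.25, 0.25)],
                    spacegroup=225,
                    cellpar=[a, a, a, 90, 90, 90])

# Repeat the conventional cell n x n x n to build the supercells used here:
# 2x2x2 -> 96 atoms, 3x3x3 -> 324, 4x4x4 -> 768, 5x5x5 -> 1500.
for n in (2, 3, 4, 5):
    supercell = unit_cell.repeat((n, n, n))
    print(f"{n}x{n}x{n}: {len(supercell)} atoms")
```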
We investigated the dynamics of liquid cubic ZrO2 for systems consisting of 96, 324, 768 and 1500 particles at 3000 K, and at different pressures of 0, 5, 10, 15 and 20 GPa. The calculations were done using the molecular dynamics (MD) technique as implemented in ATK-VNL (Atomistix Toolkit, 2017; Griebel et al., 2007; Griebel and Hamaekers, 2004). In our MD method the interactions between the atoms are described by the Wang_2012 potential function (Wang et al., 2012), where V(rij) is the potential energy between atoms i and j separated by rij; C, A and ρ are adjustable parameters; qi and qj are the charges at a separation of rij; and ε0 is the permittivity of free space. The MD process is divided into three steps: (i) heating the solid ZrO2 above its melting point (2988 K), (ii) annealing the resulting liquid ZrO2 at 3000 K to bring the system to equilibrium, and (iii) further annealing the equilibrated liquid ZrO2 to obtain more physical data for accurate calculation of the atomic diffusion coefficients, using the mean-square displacement extracted from the respective MD trajectory.
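The functional form of the potential is not reproduced in the text above. Given the parameters named (A, ρ, C, the ionic charges, and ε0), it is presumably a Buckingham-plus-Coulomb pair potential of the following form, shown here as a hedged reconstruction rather than a quotation of the original equation:

```latex
% Presumed Buckingham + Coulomb form of the Wang_2012 pair potential
% (reconstructed from the named parameters, not copied from the paper):
V(r_{ij}) \;=\; \frac{q_i q_j}{4\pi\varepsilon_0\, r_{ij}}
        \;+\; A\,\exp\!\left(-\frac{r_{ij}}{\rho}\right)
        \;-\; \frac{C}{r_{ij}^{6}}
```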
To achieve the first step, we use the atk-forcefield code together with the Wang_2012 potential (Wang et al., 2012) for the simulation. The first MD object is added to heat the crystalline ZrO2 from 1500 K to 3000 K, using an NPT Martyna-Tobias-Klein (NPT-MTK) barostat (Martyna et al., 1994). The NPT barostat is used to equilibrate the ZrO2 supercell under constant temperature and pressure. The NPT-MTK was allowed to run for 200000 steps at log intervals of 2000, and a Maxwell-Boltzmann type initial velocity distribution was used at 1500 K. The NPT reservoir temperature and final temperature are taken as 1500 K and 3000 K, respectively. This allows us to increase the temperature of the supercell gradually above the melting point of ZrO2, which implies that the ordered solid ZrO2 will eventually become a liquid; a liquid usually exhibits only short-range order, as illustrated by the radial distribution function g(r). Since our system has cubic symmetry, we apply an isotropic pressure of 0 GPa for the simulation. This allows all cell vectors to be changed by the same factor, which preserves the shape of the cell during the simulation.
The second step of the MD simulation is annealing the liquid ZrO2 to bring it to equilibrium. We set the MD object to equilibrate the supercell structure of liquid ZrO2 at a constant temperature of 3000 K, using the same NPT barostat. This helps to eliminate any memory effect of the initial solid zirconia structure on the physical properties of the liquid zirconia obtained from it. Here, we also used 200000 steps at log intervals of 2000 for the NPT-MTK. At this stage the Maxwell-Boltzmann temperature and the reservoir and final temperatures are all set to 3000 K. The isotropic pressure is again taken as 0 GPa.
Finally, we used the NPT-MTK to collect the physical data required for calculation of the self-diffusion coefficients of zirconium and oxygen in liquid zirconia. The same number of steps and log intervals are used. This time the initial velocity is taken as the configuration velocity, which allows the system to use the velocities from the annealing stage (second MD) for the third step of the simulation. The reservoir and final temperatures are also taken as 3000 K. Then, the system is allowed to run the entire simulation. When the calculation is done, we repeat the entire process using isotropic pressures of 5, 10, 15 and 20 GPa. For each pressure, the calculation is allowed to run to completion, and the results are analyzed by extracting the mean-square displacement (MSD). The slope of the MSD gives the self-diffusion coefficient, as discussed by Nenuwe and Osafile (2018) in their calculation of the self-diffusion coefficient of liquid aluminum at different temperatures.
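A minimal sketch of this final analysis step, extracting the self-diffusion coefficient from the slope of the MSD via the three-dimensional Einstein relation MSD(t) ≈ 6Dt, is shown below. The trajectory array is a synthetic placeholder standing in for positions read from the MD trajectory file.

```python
import numpy as np

def self_diffusion_from_msd(positions, dt_ps):
    """Self-diffusion coefficient from an unwrapped trajectory.

    positions : array of shape (n_frames, n_atoms, 3) in Angstrom
    dt_ps     : time between frames in ps
    Returns D in Angstrom^2/ps, using the Einstein relation MSD = 6*D*t.
    """
    disp = positions - positions[0]                  # displacement from t = 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)       # average over atoms
    t = np.arange(len(msd)) * dt_ps
    # Fit the linear (diffusive) regime; here the second half of the run is used
    half = len(t) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0

# Synthetic random-walk trajectory as a stand-in for the MD output
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(scale=0.05, size=(2000, 96, 3)), axis=0)
print("D =", self_diffusion_from_msd(traj, dt_ps=0.01), "Angstrom^2/ps")
```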
RESULTS AND DISCUSSION
Curves representing the mean-square displacement with time as a function of pressure and the number of atoms for the different systems are displayed in Figures 1 and 2 for zirconium and oxygen atoms, respectively. Figures 1 and 2 show that in the region of high pressure the MSD versus simulation time graphs for all systems converge to a well-defined slope throughout the simulation time. The convergence of the results with increasing number of particles in Figures 1 and 2 suggests that even 96 particles are sufficient to determine the diffusion property of ZrO2 under high pressure. We observed that both at low and at high pressure the diffusion behaviour of the system does not strongly depend on the number of particles used, in agreement with James et al. (1990). The self-diffusion coefficients for zirconium and oxygen atoms in liquid ZrO2 were calculated by taking the slopes of each curve (using a Python script) at different pressures, and the results are displayed in Table 1 and Figure 3. These results strongly suggest that as the pressure of the system is increased, the amplitude of the self-diffusion maximum for zirconium decreases, and the position of the diffusivity is shifted to higher pressure for all system sizes. This trend of pressure dependence has been reported earlier by Rudd et al. (2012). For oxygen, the diffusivity maximum decreased, then increased, and decreased again with increasing pressure. This might be a result of an instability associated with large systems, where one oxygen atom moves over the potential maximum of another oxygen atom. This anomalous behaviour for oxygen was also reported by James et al. (1990) in their molecular dynamics study of liquid SiO2.
The radial distribution function (RDF), or pair correlation function g(r), being an effective tool for describing the structure of disordered molecular systems, was used to determine the convergence of the simulation at low and high pressure as the size of the system is increased from 96 to 1500 particles. In a solid, g(r) is known to have an infinite number of sharp peaks whose heights and separations are characteristic of the lattice structure, while in the liquid phase the pair correlation function has a small number of peaks at short distances and steadily decays to unity at longer distances r (Å) (Andrew, 2001). We demonstrate in Figures 4 and 5 that at both 0 and 20 GPa the pair correlation function g(r) versus position curves for all systems converge very well. This is an indication that the simulations reached an equilibrium state and that the diffusion coefficients are well defined. Also, as the position r (Å) increases, g(r) decays to a constant value.
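For readers unfamiliar with how g(r) is obtained from the particle coordinates, a bare-bones histogram estimator for a single frame of a cubic periodic box is sketched below. The positions and box length are random placeholders; a production analysis would average over many frames and resolve the Zr-Zr, Zr-O and O-O pairs separately.

```python
import numpy as np

def radial_distribution(positions, box, dr=0.05, r_max=5.0):
    """Pair correlation function g(r) for one frame of a cubic periodic box.

    positions : (n_atoms, 3) array in Angstrom
    box       : cubic box edge length in Angstrom (r_max should stay below box/2)
    """
    n = len(positions)
    rho = n / box**3                              # number density
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)

    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=bins)[0]

    r_mid = 0.5 * (bins[1:] + bins[:-1])
    shell_vol = 4.0 * np.pi * r_mid**2 * dr       # ideal-gas shell volume
    g = 2.0 * counts / (n * rho * shell_vol)      # factor 2: each pair counted once
    return r_mid, g

# Placeholder frame: 96 random positions in a 10 Angstrom box
rng = np.random.default_rng(2)
r, g = radial_distribution(rng.uniform(0, 10, size=(96, 3)), box=10.0)
print(g[:5])
```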
Fig. 3: Self-diffusion coefficients as a function of pressure for zirconium and oxygen.
It is clear that, at short distances less than atomic diameter, g(r) is zero. This is as a result of strong repulsive forces. In Figure 4, with the 96 particles system, at zero pressure (0GPa), the first and largest peak for O-O occurs at 2.625 Å, with g(r) having a value of 2.348. This is an indication that it is two times more likely that two oxygen atoms would be found at this separation. Then, the pair correlation function curve falls rapidly and passes through a minimum value around 3.125 Å. The probability of finding two oxygen atoms within this separation is less. Similarly, the first peak for Zr-O occurs at 2.125 Å, with the pair correlation g(r) having a value of 4.534. This is an indication that it is four times more likely that one zirconium atom and one oxygen atom would be found at this position. Then, g(r) falls to a minimum value of 0.0378 around 3.375 Å. In this case the chances of finding zirconium-oxygen atoms around this separation are almost zero. Also, the first and largest peak for Zr-Zr curve occurs around 3.675 Å, with the pair correlation g(r) having a value of 5.583. This suggests that it is five times more likely to find two zirconium atoms at this position. Beyond this point the radial distribution function falls rapidly and passes through a minimum value (0.0034) around 4.425 Å.
Here, the probability of finding two zirconium atoms within this separation goes to zero. At long distances, g(r) approaches 1 which implies the absence of long-range order. It is important to state that absence of long-range order is an indication that solid ZrO2 has completely melted.For the system with 1500 particles, at zero pressure (0 GPa), the first and largest peak for the O-O curve occurs at 2.575 Å, with g(r) having a value of 2.3156. This is an indication that it is two times more likely that two oxygen atoms would be found at this separation. Then, the RDF falls quickly and passes through a minimum value of 0.4863 around 3.075 Å. The probability of finding two oxygen atoms within this separation is less than the probability at 2.575 Å. For the Zr-O curve, the first peak occurs at 2.125 Å, with the correlation function g(r) having a value of 4.8019. This is an indication that it is four times more likely that one zirconium atom and one oxygen atom would be found at this separation. Then, g(r) falls to a minimum value of 0.0161 around 3.325 Å. In this case the probability of finding zirconiumoxygen atoms within this separation goes to zero. For the Zr-Zr curve, the first peak occurs at 3.675 Å, with g(r) having a value of 5.7619. This is an indication that it is five times more likely that two zirconium atoms would be found at this separation. Then, g(r) falls to a minimum value of 0.0017 around 4.425 Å. In this case the probability of finding two zirconium atoms within this separation goes to zero. At a pressure of 20GPa, in the 96 particles system, the first and largest peak for the O-O curve occurs at 2.525 Å, with g(r) having a value of 2.553. This is an indication that it is two times more likely that two oxygen atoms would be found around this point. Then, the RDF falls quickly and passes through a minimum value of 0.2833 around 3.025 Å. Therefore, the probability of finding two oxygen atoms within this separation is less. For the Zr-O curve, the first peak occurs at 2.125 Å, with g(r) having a value of 5.3577. This is an indication that it is five times more likely that one zirconium atom and one oxygen atom would be found at this separation. Then, g(r) falls to a minimum value of 0.0045 around 3.175 Å. In this case the probability of finding zirconium-oxygen atoms with this separation goes to zero. For the Zr-Zr curve, the first peak occurs at 3.575 Å, with g(r) having a value of 5.8494. This is an indication that it is five times more likely that two zirconium atoms would be found at this separation. Then, g(r) falls to a minimum value of 0.000 around 4.325 Å. In this case the probability of finding zirconium-oxygen atoms within this separation is zero. The effect of pressure on the radial distribution function is also examined for all systems simulated. The pressure dependence of pair correlation function g(r) is displayed in Figure 6. It is clear that the pair correlation function is sensitive to pressure. The first peak in the pair correlation function g(r) at each simulated pressure increases with pressure and this is shown in Figure 6. This signifies increase in the number of Zr-Zr, Zr-O and O-O bonds. Particularly, for the 768 and 1500 particles systems, it was observed that the radial distribution function curves for Zr-Zr show a non-linear increase. The might be associated with the size of the system. Conclusion: This study has described the effect of pressure in liquid zirconia at 3000K as the pressure is increased from 0.0 GPa to 20 GPa. 
Results obtained suggest that self-diffusion coefficients decrease with increasing pressure. Also, the pair correlation function is observed to be sensitive to pressure. The behavior of self-diffusivity and pair correlation function of liquid materials in response to effects of pressure has practical applications in the electro ceramic industry. | 3,623.2 | 2018-11-27T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Isolation, Specification, Molecular Biology Assessment and Vaccine Development of Clostridium in Iran: A Review
Context: The genus Clostridium, which consists of spore-forming anaerobes, can cause different diseases in domestic animals and humans, and some of them are serious and fatal. Given the increasing economic value of meat- and milk-producing animals, the importance of a certain number of such diseases in Iran is unquestionable. Evidence Acquisition: In Iran, and probably in other Near East countries, much attention was formerly paid to controlling the more serious contagious diseases, such as rinderpest, anthrax, etc., resulting in the neglect of diseases such as enterotoxaemia. The epizootiological position has now changed, whereby some of the contagious diseases have been eradicated or are being methodically controlled. It is now time to address other problems, such as clostridial diseases, which threaten the health of sheep and cattle. It is impossible to eradicate these infectious microorganisms, since they are normally found in the soil and in the intestinal contents of apparently healthy animals. Therefore, it is necessary to resort to vaccination, which in some cases has given encouraging results. To avoid losses from such infections it is necessary to vaccinate the susceptible animals methodically and regularly with the best possible vaccines. Conclusions: This review refers to the veterinary aspects of the anaerobic clostridial diseases and to vaccine development, concerning the work carried out in Iran, and especially at the Razi Serum and Vaccine Research Institute, over the last eight decades.
Context
Clostridia are large, anaerobic, rod-shaped, Gram-positive, spore-forming bacteria belonging to a highly polyphyletic class of the Firmicutes. These bacteria are found either as vegetative forms or as dormant spores. Soil and the intestinal tracts of animals and humans are their natural habitats. Dormant spores of several Clostridium species are found in the healthy muscular tissue of horses and cows. Differentiation of the various pathogenic and related species is based on the diagnostic characteristics of biochemical reactions, the morphology of spore shape and position, and the antigenic specificity of toxins or surface antigens (1). Pathogenic strains or their toxins may be acquired by susceptible animals either by wound contamination or by ingestion. In many parts of the world, clostridial diseases are a constant threat to successful livestock production. The genus Clostridium is divided into histotoxic and neurotoxic clostridia (2). The members of the histotoxic clostridia, including Clostridium chauvoei and other species, are invasive and cause extensive destruction of muscle and connective tissue and are characterized by the formation of gas. In Iran, clostridial infections are among the most important diseases of cattle and sheep. Among the various diseases caused by this group of bacteria, lamb dysentery, struck, pulpy kidney and black disease in sheep and blackleg in cattle are often observed in the country. Strains of C. perfringens types B, C and D, C. oedematiens types B and D, C. chauvoei and C. septicum have been isolated from specimens of infected animals at the Razi Serum and Vaccine Research Institute in Iran (4).
Genus Clostridium and Associated Diseases
Clostridium perfringens is a Gram-positive, rod-shaped, anaerobic, spore-forming, and heat-resistant bacterium of the genus Clostridium that is encapsulated, produces heat-resistant spores under unfavorable environmental conditions, and also acts as a secondary pathogen in diseases such as necrotic enteritis (5,6). The genus Clostridium includes 118 species; C. perfringens is classified into five types (A, B, C, D, and E) based on the production of four major toxins: alpha (cpa), beta (cpb), epsilon (etx), and iota (iA) (7,8). Clostridium botulinum, C. difficile, C. perfringens, and C. spiroforme can cause enteric problems in animals as well as in humans. These diseases, which are often fatal, are partly attributed to binary protein toxins that follow a classic AB paradigm. All clostridial binary toxins destroy filamentous actin via mono-ADP-ribosylation of globular actin by a component within the targeted cell (9). The complete genome sequence of C. perfringens was reported in 2002 (10). Table 1 shows the different types of C. perfringens and their major toxins.
Clostridium perfringens type A, which produces alpha toxin (CPA), is the most common type of C. perfringens and is a member of the normal intestinal flora of warm-blooded animals. This microorganism causes gas gangrene, food poisoning, and gastrointestinal illness in humans; necrotic enteritis in chickens; yellow lamb disease in sheep; and enteritis, abomasitis (abomasal bloat), and enterotoxaemia in goats, cattle, pigs, and horses (11,12).
Clostridium perfringens type D is the causal agent of enterotoxaemia of sheep. It was first described by Lucey and Hutchins (13). This organism produces several major and minor toxins. Epsilon toxin is a major lethal toxin produced by C. perfringens types B and D (14,15); it was named epsilon in 1933 (16). It is responsible for a rapidly fatal enterotoxaemia in economically important livestock (17). In the small intestine of infected animals, the toxin is produced as a prototoxin that is activated by proteolytic enzymes; it can also be activated by other proteolytic enzymes in vitro (18,19). Prototoxin activation by trypsin results from the cleavage and removal of a small 14-amino-acid peptide from the amino terminus (20). A tryptophan residue and a histidyl residue in the structure of epsilon toxin are, respectively, important and essential for its lethal activity (21). Recently, the C. perfringens type D epsilon toxin gene was cloned and expressed in E. coli and its immunization response was tested in vivo; the results showed good protection against the native epsilon toxin (22,23).
Enterotoxaemia caused by Clostridium perfringens type B is a major problem in veterinary science. Beta toxin, produced by C. perfringens types B and C, is the major toxin of these types and causes inflammation, bloody necrotic enteritis, and fatal diseases originating in the intestines of humans or livestock (24,25). It is known to aid in the lysis of endothelial cells by forming pores in the cell membrane (26). This function is necessary both for necrotizing enteritis and for the lethal enterotoxaemia caused by type C isolates (27,28). This bacterium produces 17 protein exotoxins, of which four are major (alpha, beta, epsilon, and iota) and the others (sigma, theta, kappa, lambda, mu, nu, neuraminidase, and enterotoxin) are minor toxins (29). In 2012, a vaccine based on a recombinant beta toxoid (rBT) of C. perfringens type C was produced in E. coli in Brazil. The non-toxic rBT was innocuous for mice and induced 14 IU/mL of beta antitoxin in rabbits, which complied with the European Pharmacopoeia and CFR9-USDA guidelines (30).
The enteric toxins of C. perfringens share two general characteristics: first, beta and epsilon toxins are pore-forming toxins; second, iota and TpeL (31) modify an intracellular target. These enteric toxins are involved in the pathogenesis of avian necrotic enteritis (24).
Clostridium septicum is a large, anaerobic, Gram-positive, rod-shaped, fermentative bacterium of the genus Clostridium. A terminal spore gives the bacterium a drumstick-like shape, and peritrichous flagella make it motile. C. septicum is a member of the normal gut flora of humans and other animals and can cause different diseases in both. Clostridium septicum produces and secretes a number of toxic proteins, including alpha, beta, gamma, and delta toxins. Alpha toxin, the lethal cytolysin and major virulence factor, appears to be its immunodominant extracellular antigen (32,33).
Clostridium chauvoei (C. feseri) is an anaerobic, spore-forming, motile, and pleomorphic bacterium whose size varies from 0.5-1 by 3-8 microns; it may be observed as an individual bacterium, in pairs, and rarely in chains (34). Blackleg is a fatal disease of young cattle. It produces an acute local infection, and the resulting blood poisoning leads to rapid death. Clostridium chauvoei and, less frequently, C. septicum are most commonly the responsible organisms. Vaccination is the only effective means of controlling blackleg disease, and several kinds of vaccine are commercially available (4).
Table 1. Different Types of C. perfringens and Their Major Toxins
Type    Alpha Toxin    Beta Toxin    Epsilon Toxin    Iota Toxin
A       +              -             -                -
B       +              +             +                -
C       +              +             -                -
D       +              -             +                -
E       +              -             -                +
Black disease, or infectious necrotic hepatitis, is an acute toxemia of sheep, cattle, and occasionally pigs caused by the alpha toxin of C. oedematiens type B (C. novyi), a Gram-positive, endospore-forming, obligately anaerobic bacterium of the class Clostridia. It is ubiquitous, being found in soil and faeces. This bacterium is pathogenic, causing a wide variety of diseases in man and animals. The C. novyi alpha-toxin belongs to the family of large clostridial cytotoxins, which act on cells through the modification of small GTP-binding proteins. The disease is distributed worldwide and is always fatal in sheep, cattle, and pigs (35).
Clostridium difficile, one of the most important anaerobes of the genus Clostridium, is a prevalent cause of antibiotic-associated diarrhea. This bacterium is the causative agent of pseudomembranous colitis, and its role, together with the overuse of antibiotics, in producing colitis is well established. Toxins A and B are the major virulence factors of this bacterium (36).
Isolation and Specification of Different Species of Genus Clostridium in Iran
In Iran, the clostridial infections among domestic animals were reported first in 1938 during an outbreak of blackleg of cattle.The disease was found in many parts of the country especially on wet bottom lands, hilly and sandy areas.Several sporadic outbreaks so far had been reported from different parts of the country; however, the most severe one occurred in August 1968 among herds of cattle in Southern Iran (34).Fifteen strains of C. chauvoei were isolated from the specimens received for laboratory diagnosis at anaerobic vaccine research and production department.
Further studies proved that clostridial infections were widespread all over the country. The first strain of C. perfringens type D was isolated from cases of enterotoxaemia of lambs and sheep in 1954 (37). Infections caused by the C. perfringens Iranian variant of type B were also identified in 1954 from the intestinal contents of sheep and goats with enterotoxaemia. Three strains of C. welchii type B were isolated that differed from the classical type B strains in producing kappa toxin and not producing lambda toxin and hyaluronidase. Two of the strains were isolated from young goats and the other from an adult sheep (38). Clostridium perfringens type C was isolated in 1971 from cases of necrotic enteritis of piglets (39) and enterotoxaemia of sheep. The first strain of C. septicum from cases of gas gangrene (malignant edema) in cattle was isolated in 1971.
Black disease is an acute and fatal disease of sheep and goats in Iran. Fifty-one strains of C. oedematiens types A, B, and D were isolated and typed from liver lesions received from different parts of the country. Isolation and rapid identification with fluorescent-labeled antibodies, together with typing, sugar fermentation, toxicity, and hemolytic activity tests of the isolated strains, were applied to more than 330 suspected livers received from different parts of the country for the diagnosis of black disease; 187 cases were positive (40). The Clostridium oedematiens type B strain, the causative agent of black disease of sheep, was first isolated in 1969, and C. oedematiens type D was also isolated from cases of liver necrosis in sheep (41).
In 1988, a putrefied carcass of a dairy cow was submitted to the Razi Institute. A necropsy was performed, but the internal organs were decomposed. Smears prepared from liver tissue and Gram staining showed many Gram-positive rod-shaped bacilli, some of them with ovoid or elongated sub-terminal spores (42). In 1997, seventeen C. perfringens strains isolated post mortem from sheep and goats were examined by biochemical tests and enzyme immunoassay (EIA). Seven of these strains belonged to type B, eight were type D, one was type A, and one was untypable. To identify the Iranian subtype of C. perfringens type B, the isolated strains were examined for the minor lambda toxin (a proteinase). The results were compared with the characteristics of C. perfringens reference strains (43).
In 1999 specimens of pathological tissues of cattle were examined to identify the malignant edema causal agents.Nineteen specimens out of thirty-eight were positive and fluorescent antibody analyzing showed three strains of C. septicum, four strains of C. chauvoei, and one strain of C. oedematiens.Fermentation tests showed that eleven of the isolated strains were C. perfringens and they were all type A (44).Several suspected cases of anaerobic diseases from different parts of Iran were studied and intestine contents of fish, cattle and sheep, were examined for different types of C. perfringens.Results showed that 104 isolates out of 110 specimens were identified as C. perfringens type A (45).
Vaccination Against Clostridial Diseases
Vaccination is frequently practiced to protect animals against clostridial diseases and seems to be the most effective way to control C. perfringens diseases; however, the industrial production of clostridial toxins is laborious. A wide variety of vaccines is available in the form of bacterins, toxoids, or bacterin-toxoid mixtures. A single vaccination with most clostridial vaccines does not provide adequate protection, and a booster dose within three to six weeks is needed. Since vaccination of young animals does not confer adequate protective immunity until at least one to two months of age, most vaccination strategies target the pregnant dam so that maximal immunity is transferred to the neonate in colostrum. Clostridial vaccines often cause tissue reactions and swelling and should therefore be administered in the neck, subcutaneously rather than by the intramuscular (IM) route. Most commercial vaccines are inactivated and may contain combinations of two to eight clostridial bacterins/toxoids. These should be optimally timed to provide maximal protection at the most likely age of susceptibility (46).
Clostridial Vaccine Production for Veterinary Use in Razi Institute
In 1976, large-scale production and standardization of the polyvalent C. perfringens vaccine was started in Iran. Based on this report, over 20 million doses of this vaccine have been produced and used in Iran every year. Certain modifications were made to the culture media used in the Anaerobic Bacterial Vaccine Department of the Razi Institute to produce this vaccine at low cost. The prepared vaccine was highly immunogenic, as determined by laboratory quality examinations according to the British Veterinary Codex and field reports (4), and by the European Pharmacopoeia (47).
A report in 1988 revealed that two types of clostridial vaccine had been prepared and tested in rabbits and sheep according to the British Pharmacopoeia (BP) and field reports; one included C. perfringens types B, C, and D and C. oedematiens, and the other included C. chauvoei. Both developed adequate antibody titers in the injected animals (48).
In 1992, attempts were made to produce and formulate the ingredients of a culture medium suitable to obtain a highly immunogenic C. oedematiens type B vaccine to immunize sheep and goats against black disease.Largescale production of C. oedematiens toxin was achieved in a special culture medium.The prepared vaccine was diluted to concentrations of 20%, 40%, 60% and 80% of antigens, and precipitated using adjuvant.The potency test of the prepared vaccine was determined according to the BP.Maximum titer was obtained at 80% with the level of 33 units per milliliter of alpha antitoxin in rabbit pooled serum (RPS).The obtained alpha antitoxin was 20, 16 and 8 units per milliliter for 60%, 40% and 20% of diluted antigen in RPS, respectively.Sheep were vaccinated in the areas affected by black disease.Reports on the field indicated that black disease in sheep had been effectively controlled by this vaccine in Iran (49).
Clostridium chauvoei and, less frequently, C. septicum are most commonly the organisms responsible for blackleg disease. Vaccination is the only effective means of controlling blackleg, and several kinds of vaccine are available commercially. Blackleg vaccine had been produced in the traditional manner at the Razi Institute for four decades until 2005; then, because of the country's increased demand, it was decided to improve the production procedure using a large-scale fermenter, and C. chauvoei was adapted for growth and proliferation in the fermenter to prepare a potent vaccine. Accordingly, attempts were made to prepare and formulate the ingredients to obtain a high yield of C. chauvoei in the fermenter culture medium. The results showed a high yield of C. chauvoei suspension in the fermenter after 10 hours using the enriched culture medium (more than 1,480,000,000 organisms/mL), while no significant differences were obtained between the glass-bottle condition and the fermenter. The safety and potency of the prepared vaccine were satisfactory in sheep and guinea pigs according to the British Pharmacopoeia (Veterinary). This research was successfully conducted at the Razi Institute, and the enriched-medium fermenter is currently used to produce the monovalent inactivated blackleg vaccine to immunize cattle in Iran (50).
Concentrated blackleg vaccine was prepared according to the method described by the Food and Agriculture Organization (FAO). The medium (a modified medium for producing the experimental C. chauvoei vaccine in the fermenter), containing peptone, glucose, sodium chloride, cysteine hydrochloride, and yeast extract, was prepared in the fermenter and inoculated with the C. chauvoei strain to prepare the blackleg vaccine. Aluminum hydroxide gel was used as the adjuvant for the high-yield vaccine, which was also concentrated by a precipitation method. None of the tested animals showed any local or general adverse reactions, and all of the vaccinated guinea pigs resisted challenge with four minimum lethal doses (MLD) of virulent C. chauvoei (51). Table 2 shows the clostridial vaccines manufactured for veterinary use at the Razi Institute.
The effects of the enterotoxaemia vaccine manufactured by the Razi Institute on reducing isolates of intestinal clostridia, specifically C. perfringens, were studied. Sheep dung samples were randomly collected from 10 areas in Kerman, Iran. The samples were processed and cultured, and colonies were identified; C. perfringens was isolated from 27 of 50 (54.0%) unvaccinated sheep and 2 of 90 (2.2%) vaccinated sheep, and the isolates were analyzed by multiplex polymerase chain reaction (PCR). Genotyping of the two strains isolated from the vaccinated sheep indicated that they were type D, while the strains isolated from the unvaccinated sheep were types A, B, C, and D at 14.8%, 22.2%, 40.7%, and 22.2%, respectively. No isolates contained the iota gene (type E). The results showed that vaccination against enterotoxaemia had a significant effect (P < 0.01) on reducing C. perfringens isolates. The occurrence of the disease in the vaccinated and unvaccinated groups was 3.3% and 64.0% (P < 0.01), respectively (52).
Molecular Biology Studies on Clostridium to Produce Vaccine in Iran
A genetic construct containing the C. perfringens epsilon and beta toxin genes was produced in 2013. The epsilon and beta toxin genes were fused using a small linker sequence, and the fusion gene was expressed as a soluble protein in E. coli; its immunogenicity was studied in mice using the recombinant cell lysate. The antigen induced 6 and 10 IU/mL of epsilon and beta antitoxin in rabbits, respectively. In conclusion, E. coli is a suitable expression host for the immunogenic epsilon-beta fusion toxin of C. perfringens (53). Cloning of the C. perfringens type A alpha toxin gene in E. coli as a candidate for recombinant vaccine production was described in 2014. High-molecular-weight genomic DNA of C. perfringens type A was isolated and cpa was amplified using one pair of primers. The 1,094-bp gene was ligated into the 2,974-bp pJET1.2/blunt vector, producing the 4,068-bp recombinant vector pJETAα. After extraction of the pJETAα recombinant cloning vector, the nucleotide sequence and the deduced 364-amino-acid protein sequence were deposited in GenBank. In silico analysis of these sequences showed several putative conserved domains (46).
In 2014, molecular cloning and sequencing of C. septicum vaccine strain alpha toxin gene was studied.After extraction of genomic DNA and amplification of the target gene through polymerase chain reaction by specific primers, pJETαsep cloning vector was produced and E. coli/TOP10 competent cells were transformed.Then pJETαsep recombinant plasmid was purified and sequenced using universal primers.Sequencing and BLAST analysis of csa showed over 99% identity to other previously deposited csa in the GenBank.The csa sequence was deposited into the GenBank under access number JN793989.E. coli/TOP10/pJETαsep as a recombinant bacterium could be used to purify the recombinant csa gene and its expression in the suitable prokaryotic hosts (54).
The effects of C. perfringens type D prototoxin and toxin on mouse body weight were evaluated. After preparation of the filtered and freeze-dried crude prototoxin, three series of experiments were set up. First, the minimum lethal dose per milliliter (MLD/mL) was determined, and from the MLD the 50% end point (LD50) was calculated. Finally, different concentrations of the activated freeze-dried prototoxin were injected into mice weighing 18 to 20 g. A decrease in body weight was observed two days after injection, whereas no decrease was observed in the control group. The results indicated that the activated C. perfringens culture filtrate temporarily inhibits general metabolism in the mouse. Since the secreted prototoxin is activated in the small intestine of infected animals, vaccination of domestic animals at the right time and with the right vaccine could prevent these effects (56).
In 1892, William Welch discovered Bacillus aerogenes capsulatus, Nov. Spec. (a gas-producing bacillus) (57) and described its distribution (58), and in 1928 Morton reported phlegmonous gastritis originating from this bacterium (59). Since then, scientists and researchers have recognized that the organism cannot be eradicated, because it is normally found in the soil and in the intestinal contents of apparently healthy animals. It is therefore necessary to resort to vaccination, which in some cases has given encouraging results.
The Razi Vaccine and Serum Research Institute was founded in 1925 and started vaccine production against rinderpest. In Iran, enterotoxaemia was recognized in sheep in 1938; the disease was first found in imported Merinos but was later found in the native fat-tailed sheep. The first report on an anaerobic clostridial whole-culture formalized vaccine in Iran was published in 1961. Over the last five decades, conventional anaerobic vaccines have been produced to control clostridial diseases; thus, anaerobic diseases have been studied and anaerobic vaccines produced at the Razi Institute for about eight decades, and immunity against Clostridium species has been achieved in Iran. Furthermore, research and development on these vaccines, as well as on the molecular biology of the different Clostridium vaccine strains, continues, and anaerobic toxoid vaccines are produced at the Razi Institute.
Conclusions
At the present time there are no licensed clostridial vaccines which protect against either gas gangrene or epsilon-toxin in humans.However, it seems that the vaccines developed for animals, have the potential to be developed for humans (60).
Tetanus toxoid is commonly used as a single vaccine in horses but often a common combination of tetanus toxoid plus C. perfringens types C and D is used in sheep, goats, and cattle.However, tetanus toxoid is not used for animals in Iran.In cattle, a frequently used combination in feedlots is a 4-way vaccine that consists of killed cultures of C. chauvoei, C. septicum, C. novyi, and C. sordellii to protect against blackleg and malignant edema.In Iran, blackleg vaccine against C. chauvoei is used for cattle.A more complex clostridial vaccine that contains C. perfringens types C and D plus Iranian variant type B (34) in addition to the C. septicum vaccine (braxy vaccine) is used to protect sheep and goats against enterotoxaemia as well.The addition of C. haemolyticum extends the protection to include infectious necrotic hepatitis but it is not used in Iran.
Table 2. The Clostridial Vaccines Manufactured in the Razi Institute for Veterinary Use | 5,280.8 | 2015-11-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
Marginal effects of economical development and university education on China’s regular exercise population
Objective Although the regular exercise population is a key metric for gaging the success of China’s fitness-for-all activities, effective policy approaches to increase mass sports participation remain unclear. Previous research suggests that GDP, educational attainment, sports resources, and meteorological conditions could influence regular exercise participation. Therefore, this study first analyzed the macro-level correlates influencing China’s regular exercise population. Methods We utilize ordinary least squares (OLS) regression and geographical weighted regression (GWR) to theorize the relationship. The analysis encompasses data from the 31 administrative regions of Mainland China, as reported at the end of the 13th Five-Year Plan period. The log–log model enables us to quantify the marginal effect (elasticity) of the explanatory variables. Results The OLS regression showed that regional GDP and the proportion of the population with a university education were significant predictors. In the global model, the marginal effects of regional GDP and university education were 0.048 and 0.173, respectively. Furthermore, the GWR revealed a distinct geographic pattern that corresponds to the classic Hu Line. Conclusion While regional GDP was also a significant correlate in our model, the elasticity demonstrates that university education had an asymmetric effect on China’s regular exercise population. Therefore, this paper sheds light on a policy priority for the upcoming 15th Five-Year Plan, emphasizing the strategic importance of expanding university education to enhance mass sports participation. In turn, a better-educated populace may yield significant secondary effects on public health and contribute to the high-quality development of the Chinese path to modernization.
Introduction
The concept of modernization, emerging in the mid-twentieth century, is generally regarded as "a multifaceted process involving changes in all areas of human thought and activity" (1).As the post-World War II era ushered in a period of peace and the industrial civilization progressed, GDP growth became a universal benchmark for modernization.In the context of China, modernization is seen as a blend of contemporary values with the enduring essence of traditional Chinese culture.A notable theoretical advancement in this concept is Xi Jinping's assertion that "people's health is the primary indicator of modernization." This underscores the importance of the proactive health concept (2), with China's fitness-for-all activities set to play a key role in the Chinese path to modernization (3).Since its inception in 1995, the National Fitness Plan has shown promising results, with recent data revealing that 37.2 percent of China's population regularly do physical exercise (4).However, this figure falls short when compared to the 40-45 percent participation rates observed in developed countries (5), indicating that China still faces significant challenges in promoting its mass sports program.The latest National Fitness Plan (2021-2025) highlights the government's focus on improving the quality of fitness-for-all activities.It points out ongoing challenges, such as the imbalanced regional development of these activities and the insufficient provision of public fitness services.Therefore, addressing the issue of imbalanced regional development is essential in the next phase of the nation's modernization endeavors.
The developmental goals of the National Fitness Plan are articulated through several critical metrics designed to gage its success.These metrics encompass the percentage of the population that regularly do physical exercise (thereafter, regular exercise population), the establishment of 15-min community fitness circles, the number of social sports instructors, and, on a broader scale, the passing rate of the national physical fitness standard.Thus, employing quantitative research focused on these four key metrics can provide policy recommendations for drafting a refined National Fitness Plan.The existing literature has largely focused on regional disparities in passing rates, with limited discussion given to the regular exercise population (6)-arguably the most important metric for assessing the effectiveness of fitness-for-all activities.To date, there has been no quantitative analysis examining the imbalanced regional development concerning the regular exercise population.
Previous research on correlates and determinants influencing physical exercise practices has centered on individual-level factors, such as age, sex, and income (7).While these studies are valuable for influencing individual behavior, they provide limited guidance for developing effective national policies that target the entire population, especially for a diverse and populous nation like China.From a macro perspective, existing theories focus on four key areas: economic fundamentals, health consciousness, resource availability, and climatic factors.First, economic development plays a critical role in facilitating the mass sports program.The level of economic prosperity determines the extent of government support for non-essential social activities, such as fitness-for-all activities.Empirical data from a study conducted in 34 nations revealed a positive correlation between per capita GDP and economic freedom and increased engagement in physical exercise (8).In the meantime, an individual's participation in leisure-time physical exercise is influenced by their economic status.A pooled analysis of health surveys conducted in England in 2008, 2012, and 2016 revealed that individuals from high-income households are more likely to participate in and spend more time engaging in moderate-to-vigorous physical exercise compared to those from lower-income households (9).Similarly, data from Belgium indicated that sports consumption and participation were stronger among higher-income households (10).Essentially, the inequality in sports participation can be attributed to a better economic position, which stimulates proactive health awareness (11).
Second, education not only critically influences employment opportunities but also affects participation in physical exercise at the population level.For example, an analysis of physical exercise patterns across 27 European countries revealed a strong correlation between higher government expenditure on education, as a proportion of GDP, and a greater likelihood of citizens engaging in regular exercise (12).The said study additionally found that government health spending does not directly impact exercise habits, suggesting that health promotion policies may be more effective when focused on education.Similar studies conducted in Brazil, China, Ireland, Sweden, and the United States reinforce education as an important correlate of sports participation (13)(14)(15)(16)(17).The mechanism is suggested to be that better education affects an individual's economic status and health beliefs, which in turn shape their long-term attitude toward physical exercise.
Third, government infrastructure spending on sports venues has an important role in promoting public exercise behavior.In this regard, Dallmeyer and colleagues showed that in Germany, the key factor is not the average fiscal expenditure but rather consistent fiscal support (18).To effectively promote public health policy through sports, it is imperative for governments to consistently provide funding for the development of sports resources to influence citizens' exercise habits.Similarly, data from 21 European countries endorse the notion that an accountable government (e.g., through investing in sports venues) positively impacts the physical exercise levels of its citizens (19).In developing countries, the availability of sports resources has an even greater impact on public interest and participation in sports.In a populous country like China, the per capita availability of sports resources is a limiting factor in organizing mass sports events.Of note, it is important to consider availability in a broader sense, as research from both developing and developed countries indicates that individuals are less inclined to use sports venues that are located far from their homes (20, 21).
Fourth, natural factors can restrict sporting activities to some extent.Obradovich and Fowler reported that recreational physical activity decreases during both cold and extremely hot conditions, as well as on days with precipitation, based on data collected from over 1.9 million survey respondents in the United States (22).Given that global warming appears to be beyond mitigation under current global efforts (23), it is likely that heat will have a greater impact on global sports participation going forward.Additionally, meteorological conditions caused by human activities, such as air pollution, would also have a detrimental effect on sports participation (24).Given China's vast territory, which spans several climatic zones, the impact of meteorological conditions on physical exercise is an aspect that cannot be overlooked.
Based on actual statistics regarding China's regular exercise population (as presented in the following section), it becomes evident that the aforementioned macro factors are intricately interconnected.For instance, Guangdong consistently ranks highest among provinces in terms of GDP output.It boasts remarkable sports resources, particularly in professional soccer and basketball, and benefits from a favorable climate.However, its regular exercise population is among the lowest.In contrast, Liaoning faces freezing temperatures in winter, and its economic development is constrained by its geographical location.Despite these challenges, Liaoning enjoys one of the largest regular exercise populations.Such a phenomenon, in some cases against popular theories (22), not only underscores regional disparities in the regular exercise population but also highlights a discrepancy between the real conditions and the theoretical framework.This disparity underscores the imperative of quantifying the macro factors contributing to these variations.
From a macro perspective, the birth rate in China is concerning, and the Chinese society as a whole is experiencing accelerated aging.By 2050, it is expected that the percentage of individuals aged 55 to 64 in the overall workforce will increase to 26.7% (25).This unfavorable demographic structure underscores the urgency to promote the proactive health concept, particularly through regular exercise participation, to enhance health outcomes and alleviate both personal and national financial burdens associated with healthcare costs.A recent study reinforces this notion.Analysis of credit utilization data from one million Chinese between 2018 and 2021 revealed a significant inverse correlation between individual sports consumption and medical expenses: for every 1 % increase in sports consumption, there was a corresponding decrease of 0.203 percent in personal medical expenditure (26).The benefits of regular exercise participation on health and financial resilience cannot be overstated.
Therefore, the purpose of this paper was to identify macro correlates that can explain the dynamics of China's regular exercise population. Here, we hypothesize that China's regular exercise population is influenced by four macro factors: economic, educational, resource-related, and meteorological. Their relationship can be mathematically represented in Eq. (1) as follows:

ln(REP) = β0 + β1 ln(eco) + β2 ln(edu) + β3 ln(res) + β4 ln(met) + ε,  (1)

where REP denotes the regular exercise population; eco, edu, res, and met denote the economic, educational, resource-related, and meteorological factors, respectively; and ε follows a parametric probability distribution. Moreover, this paper utilized modern econometric and spatial statistical methods to deepen the understanding of how the elasticity of these macro factors varies across regions. To compute the marginal products of these macro factors, we apply the natural logarithm function to the data, as illustrated in Eq. (1). These marginal products are referred to as marginal effects within the context of this paper to better match the specific focus of our research. Essentially, we advocate for a macro-level theory of human behavior that is consistent with observed realities. The insights gained from this analysis are intended to provide policymakers with a solid foundation for crafting upcoming national policies.
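As a brief aside, the following standard derivation (our own illustration, with notation matching the reconstructed Eq. (1) above rather than anything taken verbatim from the paper) shows why the slope coefficients of a log-log model can be read directly as elasticities, i.e., as marginal effects in percentage terms.

```latex
% Why a log-log slope is an elasticity (standard derivation; illustrative notation)
\begin{align}
  \ln y &= \beta_0 + \beta_1 \ln x_1 + \cdots + \varepsilon \\
  \frac{\partial \ln y}{\partial \ln x_1}
        &= \frac{x_1}{y}\,\frac{\partial y}{\partial x_1} = \beta_1
\end{align}
% A 1% increase in x_1 is therefore associated with an approximate
% beta_1 % change in y, holding the other regressors fixed.
```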
Data overview
According to the General Administration of Sport of China, "population regularly do physical exercise" is defined as engaging in physical exercise at least three times a week, with each session lasting 30 min or more, and at a moderate or higher intensity level.The study examined the regular exercise population across 31 administrative regions in Mainland China, using government statistics collected between 2020 and 2021 as the response variable, except for Tibet, which only had data available for 2019.These statistics reflect the overall progress of the fitness-for-all activities by the end of the 13th Five-Year Plan (2016-2020).
The explanatory variables encompass four macro factors discussed previously.First, regional GDP can serve as a composite indicator to approximate the level of fiscal expenditure for sporting causes, as well as its impact on human development (27).It is worth noting that the author (Y.Z.) traditionally does not consider per capita GDP for scientific analysis.Although there may be some controversy surrounding the 2015 income Gini coefficient of 0.62 obtained from the China Household Finance Survey (28), its primary conclusion remains indisputable: wealth inequality in China has always existed and is significantly greater than that observed in OECD countries.According to the Global Wealth Report 2023 (29), the wealth Gini coefficient in Mainland China has increased to 0.707, and the share of the top 1% of Chinese residents in total wealth has risen to 31.1%, indicating a further widening of wealth inequality.Given that Chinese individuals possess the highest savings rate globally, it is reasonable to question the potential distortion in the per capita GDP data.Furthermore, better education is often associated with improved job prospects, making it more appropriate to utilize education as a proxy measure of an individual-level economic standing (30).
Second, despite the popularity of certain leisure physical activities in China, such as fitness walking and square dancing, which do not necessarily depend on traditional sports venues, the per capita sports area remains an important metric for assessing the resources available for fitness and sports activities.Moreover, the accessibility to sports venues significantly affects an individual's likelihood of participating in sports (31).Hence, both the per capita sports area and the number of public buses per 10,000 people were analyzed as correlated indicators of sports resources for physical exercise.
Third, statistics on educational attainment from the 2020 Seventh National Population Census were examined to understand the impact of education on sports participation (32).To explore the sensitivity of various levels of educational attainment, the data were decomposed into five categories: population with a university education, population with a tertiary education, population with at least high school education, population with at least middle school education, and population with at least primary school education.
Fourth, meteorological data from 2016 to 2020 were considered to explore factors relevant to exercise behaviors, including the average daily temperature, the average highest daily temperature, annual precipitation (measured from 0800 to 2000 h), the number of rainy days, haze days, dust and sand storm days per year, and the number of days with extreme highest temperatures (≤ 5°C and ≥ 35°C).Meteorological data sets were collected by over 100 benchmark weather stations in China.
Table 1 presents the primary data used in the statistical modeling, offering insights into the macro factors influencing China's regular exercise population.The full data source is accessible on Figshare. 1
Spatial statistics
The statistical modeling was conducted using ArcGIS Pro version 3.2.1 (Esri Inc., Redlands, CA, United States) and the gamlss package version 5.4-20 in R. For brevity, we have omitted the presentation of commonly calculated equations in computer software and domainspecific statistical theories.Figure 1 illustrates an overview of the steps involved in the statistical modeling.
During the estimation period, we performed preliminary statistical analyses.Our first step involved utilizing the global Moran's I statistic to evaluate spatial autocorrelation of regular sports populations across the 31 administrative regions.Following this, we converted the raw data into natural logarithms.In instances where the data frame contained zero or negative values, we applied a log-modulus transformation, which shares similar characteristics with a traditional log transformation.Then, we assessed whether the log-transformed response variable adhered to a normal distribution.We also carried out a Pearson correlation analysis to identify significant correlations between the response variable and potential explanatory variables.Only those variables that demonstrated significant correlations were selected for inclusion in the subsequent regression analysis.
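The preliminary steps just described can be sketched in R, the language the paper cites for the gamlss package. The data frame dat, its column names, and the nearest-neighbour spatial weights built with the spdep package are all our own assumptions for illustration; the authors performed parts of this workflow in ArcGIS Pro, so this is a sketch of the logic rather than their actual script.

```r
# Sketch of the preliminary analyses, assuming a data frame `dat` with columns
# rep (regular exercise population, %), eco (regional GDP), edu1 (% with a
# university education) and lon/lat (region centroids). Illustrative only.
library(spdep)

# Spatial weights from nearest-neighbour centroids, then global Moran's I
coords <- cbind(dat$lon, dat$lat)
nb     <- knn2nb(knearneigh(coords, k = 4))   # k = 4 is an arbitrary choice here
lw     <- nb2listw(nb, style = "W")
moran.test(dat$rep, lw)                       # tests for spatial autocorrelation

# Log-modulus transform: behaves like log() but also handles zeros and negatives
log_modulus <- function(x) sign(x) * log1p(abs(x))

dat$l_rep  <- log(dat$rep)    # the response contains no non-positive values
dat$l_eco  <- log(dat$eco)
dat$l_edu1 <- log(dat$edu1)

shapiro.test(dat$l_rep)       # normality check of the log-transformed response

# Screen candidate predictors by Pearson correlation with the response
cor.test(dat$l_rep, dat$l_eco)
cor.test(dat$l_rep, dat$l_edu1)
```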
In the search for sensible model terms for the ordinary least squares (OLS) regression, we adopted a forward selection approach, guided by statistical significance and the Akaike Information Criterion (AIC).Briefly, one explanatory variable was incrementally included in the OLS model until all terms were shown to be significant predictors (Chi-square p < 0.05) of the response variable, and no more improvement in AIC was possible.Of note, the statistical significance was determined by the generalized likelihood ratio test statistic, which, in general, is preferable to the Wald test statistic.Additionally, we experimented with fitting model terms using the Generalized Additive Model for Location, Scale, and Shape (GAMLSS).This method involved the use of a non-parametric smoother within a normal distribution family, aiming for a more flexible model adaptation to the data.
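A minimal sketch of the forward selection and the experimental GAMLSS re-fit follows, continuing the assumed dat data frame from the previous sketch; the model objects and variable names are ours, and the gamlss calls illustrate the general approach rather than reproduce the authors' code.

```r
# Forward term selection guided by AIC and a generalized likelihood-ratio test
library(gamlss)

m1 <- gamlss(l_rep ~ l_eco,          family = NO(), data = dat)
m2 <- gamlss(l_rep ~ l_eco + l_edu1, family = NO(), data = dat)

AIC(m1, m2)       # retain the added term only if the AIC improves
LR.test(m1, m2)   # generalized likelihood-ratio test for the nested models

# Experimental re-fit of the selected model with a cubic-spline smoother
g2 <- gamlss(l_rep ~ l_eco + cs(l_edu1), family = NO(), data = dat)
AIC(m2, g2)
```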
The model terms identified as significant in the global model were subsequently incorporated into a geographical weighted regression (GWR), applying a Bisquare weighting scheme to account for geographical variations.Finally, the adequacy of the GWR model specification was assessed by analyzing the residual spatial autocorrelation, ensuring that our model accurately reflected the underlying spatial patterns in the data.
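For completeness, a bisquare-kernel GWR and the residual spatial-randomness check could be sketched in R with the spgwr package; the authors fitted their GWR in ArcGIS Pro, so the package choice, bandwidth-selection call, and column names below are assumptions made purely for illustration.

```r
# Sketch of a geographically weighted regression with a bisquare weighting scheme
library(spgwr)
library(spdep)

coords <- cbind(dat$lon, dat$lat)

# Calibrate the bandwidth, then fit local models around each region
bw  <- gwr.sel(l_rep ~ l_eco + l_edu1, data = dat, coords = coords,
               gweight = gwr.bisquare)
fit <- gwr(l_rep ~ l_eco + l_edu1, data = dat, coords = coords,
           bandwidth = bw, gweight = gwr.bisquare, hatmatrix = TRUE)

# Check that the local-model residuals show no remaining spatial autocorrelation
nb <- knn2nb(knearneigh(coords, k = 4))
lw <- nb2listw(nb, style = "W")
moran.test(fit$SDF$gwr.e, lw)   # gwr.e: residuals column returned when hatmatrix = TRUE
```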
Preliminary analyses
The global Moran's I statistic, with a z-score of 3.51 (p < 0.001), rejects the hypothesis of complete spatial randomness in regular sports populations across the 31 administrative regions.Furthermore, the Shapiro-Wilk normality test for the log-transformed regular sports populations yielded a p-value of 0.565.These preliminary statistics justify both the global and local linear modeling.
Pearson correlation analysis revealed moderate yet significant correlations between regular sports populations and several explanatory variables: regional GDP (r = 0.40, p < 0.05), educational attainment (r = 0.53-0.62,with all p-values below 0.01), and the number of public buses per 10,000 people (r = 0.6, p < 0.001).Accordingly, per capita sports area and all meteorological variables were deemed statistically insignificant for inclusion in further modeling.
OLS model
Table 2 presents the stepwise term selection procedure.We began with a model containing only regional GDP.Given its statistical significance, this term was considered for retention in the final model.Subsequently, we incorporated each of the incremental models representing different levels of educational attainment.Upon including the population with a university education (Chi-square p < 0.001), the AIC of model 2 decreased to −52.8, indicating a substantial improvement in model performance.This process was repeated for subsequent models.However, as we moved toward lower levels of educational attainment, the AIC values for models 3-6 gradually increased, and the associated terms became statistically insignificant.Consequently, the population with a university education was retained at this stage.Further, we introduced the number of public buses in model 7.Although the inclusion of this term slightly increased the AIC compared to model 2, the added term did not significantly contribute.While the R 2 -value suggested an improved model fit, this could potentially be attributed to a coincidental correlation with the response variable, thus, we opted for model parsimony and dropped the number of public buses from the model terms.
Additionally, we re-fitted model 2 with a non-parametric smoother in the GAMLSS model 8. Through experimentation, penalized cubic splines marginally decreased the AIC and substantially enhanced the model fit compared to the parametric linear model 2. It should be noted that this model is entirely experimental; it is discussed further in a later section.
Finally, we examined the residuals of model 2. The Jarque-Bera statistic yielded a p-value of 0.896, suggesting the absence of model bias and confirming that the model is properly specified. Hence, based on considerations of statistical significance, AIC, and R²-value, model 2 was determined to be the most suitable assessment of China's regular exercise population.
Figure 1. Flowchart of the study. AIC, Akaike information criterion; bus, public buses per 10,000 people; CS, cubic splines; eco., regional GDP; edu., educational attainment. #, *, **, and *** denote Chi-square p-values equal to or less than 0.1, 0.05, 0.01, and 0.001, respectively. edu1, population with a university education; edu2, population with a tertiary education; edu3, population with at least high school education; edu4, population with at least middle school education; edu5, population with at least primary school education.
ln(REP) = β0 + 0.048 ln(eco) + 0.173 ln(edu1),  (2)

where β0 is the estimated intercept and robust standard errors are reported in brackets [].
GWR model
With a distance band of 3,553,403 meters, the AIC (small sample adjusted) of the GWR model was −50.11.While this value increased slightly compared to the OLS model 2, their difference was less than 3.Meanwhile, the model fit improved to 0.508 from 0.467 of model 2, highlighting the advantages of transitioning from a global model to a local model.Additionally, the z-score of the regression residuals from the spatial autocorrelation was −0.135 (p = 0.893), indicating that the GWR model was spatially random and correctly specified.
The GWR model generated local models for each administrative region, as summarized in Table 3.While the local marginal effects fluctuate across regions, the main point remains evident: university education had a much more pronounced marginal effect than regional GDP (mean ratio of university education to regional GDP = 4.43).It is worth noting that the local model fit decreased in Fujian, Guangdong, Guangxi, and Hainan.
We visualize the local marginal effects across the 31 administrative regions in Figure 2. The local marginal effects of educational attainment exhibited a notable disparity between the west and east sides of the Hu Line (33).In other words, a one-unit increase in the proportion of the population with a university education in Western China could lead to a greater increase in the regular exercise population compared to Eastern China.Similarly, the local marginal effects of regional GDP displayed a comparable pattern, as depicted in the scaled map.
Discussion
The National Fitness Plan, alongside existing theories, suggests that the regular exercise habits of the population may be influenced by factors related to economics, education, resources, and meteorological conditions.In this paper, we estimated Eq. ( 1) through the use of both global and local regressions.Effectively, the work presented is consistent with a methodology to let the actual data inform the theoretical relationship.Our findings reveal that China's regular exercise population did not significantly react to the emphasis on sports resources outlined in the National Fitness Plan or to meteorological conditions.As demonstrated in Eq. ( 2), our log-log model leads to the identification of elasticity for the two significant macro factors, with a particular emphasis on university education.The significance of this paper extends beyond being merely the first quantitative analysis of the key metric defined in the National Fitness Plan.The insights provided offer guidance on more effective resource allocation strategies for the Chinese path to modernization and could influence the formulation of China's 15th Five-Year Plan.
The first obvious question is whether the observed statistical correlation, especially between educational attainment and participation in sports, reflects a genuine causal relationship.In our view, educational attainment plays a critical role in determining exercise habits.This view is supported by a large body of empirical research.Here, we provide just a handful of examples from studies conducted in China.Hui and colleagues found that diabetic adults with a university education not only had a better understanding of physical activity but also participated in it more frequently (34).Similarly, Li and colleagues conducted a 4-year cohort study to investigate the risk factors for physical inactivity.Their research found that, compared to those with a university education, the population with only a primary school education were 2.36 times more likely to lead a physically inactive lifestyle, while those with a middle/high school education were 2.13 times more likely to be inactive (35).
Moreover, an analysis of four national surveys on fitness-for-all activities revealed a decline in regular exercise among individuals with lower levels of education (i.e., primary and middle school).In contrast, the most significant increase in regular exercise was seen among individuals holding a master's degree or higher (36).
Regarding the impact of the economy on the demographic dynamics of sports participation, established theories provide insights (8). Two principal theories explain the role of education in this context. The first theory suggests that education leads to a shift in health perception, thereby influencing an individual's lifelong attitude toward well-being (37). We refer to it as the education-health conscientiousness pathway. The second theory emphasizes the link between an individual's economic status and their education. According to this view, educational attainment directly affects employment prospects, which in turn shape the individual's economic base (38). This relationship is termed the education-employment pathway here. Additionally, a third perspective integrates both of the aforementioned theories. We propose that university education not only fosters a heightened awareness of health during one's academic years but also paves the way for wealth accumulation in the future. This holistic view underscores the multifaceted impact of education on both health consciousness and economic security, suggesting a synergistic effect on sports participation.
Two important aspects need emphasis.Our combination model demonstrates that educational attainment, particularly at the university level, significantly impacts regular exercise participation compared to tertiary education.This implies that obtaining a university degree increases the likelihood of engaging in physical exercise throughout one's life.Tertiary education encompasses formal university academic programs and vocational programs preparing individuals for the workforce.In China, vocational education lasts 5 years, including 3 years of high school education, while university education spans 4 years.Though vocationally educated individuals may have less exposure to non-vocationally related health education programs due to a two-year education gap, we believe that education's influence primarily stems from the education-employment pathway mentioned earlier.Evidence suggests concerning education quality among vocational school students, with their career prospects not matching those of university graduates (39).Consequently, changes brought about by vocational education may only meet basic living needs, limiting an individual's overall future development.Further investigation is necessary to validate our hypothesis.
Furthermore, priority for future intervention should be given to the western part of China, specifically the area west of the Hu Line. The Hu Line, first delineated by Prof. Hu Huanyong in 1935 (33), serves as the demarcation line for China's population distribution. Despite facing nearly a century of scrutiny, it continues to be supported by a wealth of economic development data. Economic growth east of the Hu Line is significantly more diverse compared to the western region due to differences in environmental conditions and population distribution.
Figure 2. Spatial distribution of GWR model coefficients. The main map highlights the marginal effect of university education on China's regular sports population, while the scaled map illustrates the marginal effect of regional GDP. The magnitude of the coefficients is shown by the gradient green color: the darker the shade, the greater the coefficient. The Hu Line is represented by the blue line.
Our findings indicate that sports development also aligns with the Hu Line.According to our model, the western part of China lags behind other regions in terms of both economic development and educational attainment.From a policy-making perspective, our findings suggest that relying solely on sports policy will not immediately address the disparity between the east and west.Instead, it is crucial to prioritize national-level economic development, particularly the promotion of science education, to bring about meaningful change.While our analysis does not demonstrate a significant relationship between sports resources and the regular exercise population, it is still valuable to discuss the underlying causes.We argue that two factors contribute to this occurrence.Firstly, sports resources are primarily allocated to urban areas (40).According to the findings of the Seventh National Population Census, more than one-third of China's population resides in rural areas.Therefore, the per capita sports area does not accurately reflect the quantity of sports resources available to the rural population.Additionally, while the availability of sports venues is essential for promoting fitness-for-all activities, the economic foundation, particularly time resources, also plays a crucial role in influencing individuals' inclination to use sports venues.The consumption of non-essential sports is influenced by income (41), meaning that having more sports resources may not necessarily lead to increased use of sports venues in areas with significant wealth gaps.Moreover, research indicates that exercise participation declines significantly when sports venues are located more than 10-15 min away from one's residence (31).Thus, the availability of sports resources does not effectively contribute to the growth of China's regular exercise population at the current stage.Chinese authorities are aware of this issue, and the current National Fitness Plan prioritizes the establishment of 15-min community fitness circles.The penetration rate of community gyms in Changsha, as demonstrated by our HEHA CAT Fitness model, is five times greater than that of traditional gyms (42).Future research could potentially validate the correlation between the regular exercise population and the number/ length of greenway trails since fitness walking is the most common physical exercise in China.
It is also useful to discuss the goodness of fit of the model.This paper aims to present an explanatory model, making the R 2 -value irrelevant for this purpose.Through GAMLSS analysis, we showcase the ability to overcome the limitations of standard OLS fitting using a modern regression approach, which is valuable for predictive models.However, it is worth noting that in Southern China (i.e., Guangdong, Guangxi, and Hainan), the R 2 -value was notably lower than the national average.In Hainan, it was more than two standard deviations below the average.Notwithstanding that additional factors may affect exercise participation in these regions, an inherent limitation of the methodology used in this study may prevent us from reaching an accurate conclusion, at least within a few local extreme contexts.Specifically, even though we believe the observed relationships, such as the effect of educational attainment, may indicate a genuine cause-andeffect relationship, our theoretical model can only establish correlations with exercise behaviors in strict statistical terms.Put simply, neither the local GDP nor university education can be considered determinants of China's regular exercise population.Similarly, we cannot rule out the existence of a true causal relationship between meteorological conditions and regular exercise participation.Using the aforementioned R 2 -value outliers as an example, we hypothesize that high temperatures may be a potential cause of this occurrence locally.In Chinese culture, individuals often avoid exercising under direct sunlight.Given that these three regions experience the highest number of sunshine hours in China, they are likely affected by intense sunlight.This suggests that the impact of climate on exercise habits may vary (22), and extremely high temperatures influence the level of physical exercise most among the Chinese population.Further empirical research is necessary to delve deeper into our hypothesis.
Moreover, although this study analyzed a large amount of high-quality meteorological data, the regular exercise population data were based on government surveys rather than empirical measurement, so there could be discrepancies compared with results from controlled experimental studies. Indeed, although this study did not find any influence of haze days or dust and sand storm days on regular exercise participation, other empirical studies suggest that both of these meteorological conditions negatively impact individual-level exercise participation (43). Nevertheless, although research into sports participation has burgeoned in China in the past decade, this is the first study that attempts to explain the macro factors influencing regular exercise participation. Despite the methodological limitations and concerns about data quality, the main conclusions are in line with existing theories observed in other countries (8,15) and contribute theoretically to the literature from the world's second most populous developing country.
Policy outlook
In developing the National Fitness Plan, our model not only offers a reasonable explanation but also provides the elasticity of the variables. The global model indicates that the economy has a marginal effect of 0.048, whereas university education has a marginal effect of 0.173. Assuming China's economy continues to grow at an annual rate of 5%, the regular exercise population can increase by 0.24% annually, all else being equal. Based on data from two national population censuses, the proportion of university students relative to the total population increased from 3.71% in 2010 to 7.43% in 2020 (32), representing an annual growth rate of approximately 7.19%. If this growth rate remains constant, university education will lead to a 1.24% annual increase in the regular exercise population, all else being equal. It is evident that the development of university education is more effective in increasing the regular exercise population. Regarding sports policy, there are two options contingent upon these distinct mechanisms. If the education-health conscientiousness pathway is in play, the National Fitness Plan should prioritize promoting the proactive health concept, particularly by expanding the number of social sports instructors. We propose utilizing government-funded social financing to provide each community with a dedicated social sports instructor (3). However, in the second scenario where the education-employment pathway is in play, there are currently no viable tools for sports policy to influence university education's impact.
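The elasticity arithmetic above can be reproduced directly from the log-log coefficients. The short sketch below illustrates the calculation; the coefficient values are those reported for our global model, while the growth-rate figures (5% GDP growth, constant growth of the university-educated share) are scenario assumptions rather than observed data.

```python
# Back-of-the-envelope projection from log-log elasticities:
# %-change in the regular exercise population ~ elasticity * %-change in the predictor.

gdp_elasticity = 0.048   # marginal effect of regional GDP (global model)
edu_elasticity = 0.173   # marginal effect of the university-educated share

gdp_growth = 0.05        # assumed annual GDP growth (scenario input)
# Annualized growth of the university-educated share, 3.71% (2010) -> 7.43% (2020)
edu_growth = (7.43 / 3.71) ** (1 / 10) - 1

print(f"university-share growth: {edu_growth:.2%}")                   # ~7.19% per year
print(f"exercise gain from GDP:  {gdp_elasticity * gdp_growth:.2%}")  # ~0.24% per year
print(f"exercise gain from edu:  {edu_elasticity * edu_growth:.2%}")  # ~1.24% per year
```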
This paper, while focused on sports, underscores the significance of university education in enhancing population quality and boosting China's global competitiveness. The United States Census Bureau noted in 2021 that 37.9% of Americans aged 25 and older had earned a university degree (44). In contrast, data from the Seventh Population Census show that only 10.4% of Chinese aged 25 and above hold a university degree (32). As China navigates its economic transition, it appears to have reached the point of diminishing returns with its early WTO-era (2001-2021) economic model, which was fueled by globalization and currency arbitrage. The economic slowdown observed since 2022 suggests that China is still in the process of adapting to new macro-political environments. In an era where efficiency and productivity improvements are increasingly driven by automation, China must shift away from low-quality economic growth to focus on efficiency. This transition is expected to lead to higher and more predictable margins, enhancing China's economic position globally. Therefore, incorporating the expansion of university education into the 15th Five-Year Plan could provide China with a robust intellectual foundation and support Xi Jinping's vision of creating new quality productive forces. This move would strengthen the sustainability of its economic development in the post-WTO era, setting the stage for a future where China's economy is more innovation-driven.
Conclusion
Using OLS and GWR, we are the first to present evidence that economic development and educational attainment probably drive China's regular exercise population. We show that the inequalities in the development of the National Fitness Plan also align with the classic Hu Line, with West China generally lagging in regular exercise participation due to these two factors. Considering that higher education influences personal economic prosperity and regional economic development, it can be said that disparities in China's regular exercise population mirror differences in educational attainment. More importantly, our log-log model for the first time identifies the elasticities of regional GDP and the proportion of the population with a university education, indicating that the marginal effect of university education is more pronounced in the Chinese context. These results hold significant practical value for drafting effective resource allocation policies.
Laozi's wisdom, "In the pursuit of learning, every day something is acquired," underscores the concept of lifelong education. We contend that access to university education is a fundamental right that should be extended to all young people, beyond the narrow focus on employment outcomes. Such a focus promises to generate a cascading effect, progressively benefiting all facets of Chinese society by fostering a more educated and healthy populace capable of driving innovation, economic resilience, and social well-being.
TABLE 1
Descriptive data of main interests.
TABLE 2
Model significance, performance, and fit from OLS and GAMLSS models.
TABLE 3
Estimates of coefficients from the GWR model.
"Economics",
"Education"
] |
Effect of Die Shape and Size on Performance of III-Nitride Micro-LEDs: A Modeling Study
Flip-chip truncated-pyramid-shaped blue micro-light-emitting diodes (μ-LEDs), with different inclinations of the mesa facets to the epitaxial layer plane, are studied by simulations, implementing experimental information on temperature-dependent parameters and characteristics of large-size devices. Strong non-monotonous dependence of light extraction efficiency (LEE) on the inclination angle is revealed, affecting, remarkably, the overall emission efficiency. Without texturing of emitting surfaces, LEE to air up to 54.4% is predicted for optimized shape of the μ-LED dice, which is higher than that of conventional large-size LEDs. The major factors limiting the μ-LED performance are identified, among which, the most critical are the optical losses originated from incomplete light reflection from metallic electrodes and the high p-contact resistance caused by its small area. Optimization of the p-electrode dimensions enables further improvement of high-current wall-plug efficiency of the devices. The roles of surface recombination, device self-heating, current crowding, and efficiency droop at high current densities, in limitation of the μ-LED efficiency, are assessed. A novel approach implementing the characterization data of large-size LED as the input information for simulations is tested successfully.
Introduction
Micro-LEDs are light sources operating at very high current densities [1], where the device self-heating, efficiency droop caused by Auger recombination, and surface recombination, become the major factors limiting the device performance.In particular, surface recombination results in a shift of the peak µ-LED efficiency towards higher current density, and in a decrease of the peak efficiency value when the device dimensions are reduced [2,3].Earlier studies of µ-LEDs were focused primarily on their current-modulation characteristics [4][5][6].Only recently, the efficiency improvement had become a hot topic in the research and development of µ-LEDs.Typically, maximum values of external quantum efficiency (EQE) of µ-LEDs did not exceed 10% [7,8], which could be attributed to non-optimal light extraction from the LED dice.A record EQE of 42% at the current density of 50 A/cm 2 was recently demonstrated for 10 × 10 µm 2 devices and light extraction to silicone with the refractive index of 1.41 [3].Those µ-LEDs utilized profiled sapphire as the substrate for LED structures, and minimized the area of metallic electrodes on the emitting surface of the devices, in order to improve their LEE (see Figure 1a for schematic design of the µ-LED die).
Being borrowed from the fabrication technology of large-size devices, the above approach seems, however, not quite suitable for µ-LEDs, which are frequently components of high-density multipixel arrays. First, the feature dimensions of the profiled sapphire and the periods of their sequence lie on the scale of a few microns. In this case, only a small number of the features fall into the µ-LED area.
Therefore, LEE of such devices becomes dependent on both the number of the features and their particular arrangement inside the device area, which may vary randomly from sample to sample. In our opinion, just this factor was responsible, in particular, for the scatter and some irregular behavior of EQE as a function of µ-LED area reported in [3]. Second, the manner of light extraction from the µ-LEDs utilized in [3] implies double-passing of emitted photons through the sapphire substrate and their outgoing from the top emitting surface of the LED dice (see Figure 1a). Here, the effective area of light emission becomes remarkably larger than the particular area of the µ-LEDs, which is clearly seen in the micrographs of the emission patterns reported in [3]. This effect is undesirable for µ-LEDs operating in multipixel arrays.
Third, μ-LEDs normally operating at high current densities require an efficient heat removal from the active region.From this point of view, the chip design shown in Figure 1a is not optimal, as the heat sinking occurs through a thick sapphire substrate having a rather low heat conductivity.
An alternative approach, suggested as a design unit of large-size AlGaInP red LEDs [9], rather than for single μ-LEDs, was based on flip-chip device mounting on a heat sink.Here, inclined walls of the mesa, formed by etching after growing the LED structure, served as micro-reflectors for emitted photons (see Figure 1b).After flip-chip mounting of the wafer on a carrier-substrate, the original substrate was removed, and the back surface of the n-contact layer was textured to increase LEE [9].The use of such micro-reflector seems to be quite promising for InGaN-based μ-LEDs, provided that geometry of the LED dice is carefully optimized with account of particular properties of nitride semiconductors and other materials employed.
To optimize a μ-LED design, modeling and simulation of its operation is a powerful approach.Since μ-LEDs operate frequently at extremely high current densities, electrical, thermal, and optical phenomena become strongly coupled with each other, generally requiring joint 3D simulations [10].A specific problem of such simulations is accurate accounting for temperature-dependent recombination coefficients related to non-radiative Shockley-Read-Hall (SRH), radiative, and Auger recombination, which is necessary for adequate prediction of thermal droop of the emission efficiency.As the recombination processes in InGaN quantum wells (QWs), serving as active regions in III-nitride LEDs, are interfered with by carrier localization due to composition fluctuations in InGaN alloys [11,12] and a strong electric field induced in polar InGaN QWs [13], no consensus has currently been reached regarding temperature dependence of the recombination coefficients.Experimentally, there is a limited number of reports [7,14,15] providing highly scattered and contradictory information on the dependence of recombination coefficients on temperature.Hence, this problem should be resolved in order to get predictive simulations of μ-LEDs.
In this paper, we will suggest a novel simulation approach using characterization data of conventional large-size LEDs as input information, making up for the lack of data on temperature-dependent recombination coefficients.Using this approach, an effective way for the efficiency improvement will be demonstrated based on shaping the µ-LED dice.Finally, the main characteristics of µ-LEDs with optimized design will be calculated and compared with those of large-size devices.
Simulation Approach and µ-LED Design
Simulation of µ-LED operation is carried out with the SimuLED package [16] implementing a hybrid approach [17], accounting self-consistently for current spreading, heat transfer, and light extraction in/from an LED die.Within the approach, the active region is simulated via (i) a relationship between the local density of current j crossing the active region, and p-n junction bias U, which is the electric potential drop between the lower and upper boundaries of the region, (ii) a dependence of the local internal quantum efficiency η i (IQE) on the current density j, and (iii) a current density dependence of sheet carrier concentration n 2D injected into the active region assumed to be nearly the same for electrons and holes.In order to obtain the LED structure characteristics j(U), η i (j), and n 2D (j), direct simulations can be applied [10], provided that temperature-dependent recombination coefficients are known with sufficient accuracy.Below, we will show how the above dependences can be extracted from characterization data of large-size LEDs, where the impact of surface recombination on LED characteristics is considered as negligible.
Approximation of LED Structure Characteristics
The dependence of the p-n junction bias U on the current density j can be extracted from the current-voltage characteristic of an LED measured at a certain temperature T. For this purpose, the characteristic should be fitted by the modified Shockley equation

j = j_S(T) { exp[ q(V_f − ρ_S j) / (m k T) ] − 1 },      (1)

where V_f is the forward voltage, j is the current density, i.e., the ratio of the LED operating current to the area of the active region, ρ_S is the specific series resistance, m is the ideality factor, k is the Boltzmann constant, q is the elementary charge, and j_S(T) = j_0 exp(−E_a/mkT) is the saturation current density with temperature-independent specific current density j_0 and activation energy E_a. The p-n junction bias entering the simulations is then U = V_f − ρ_S j.
In order to estimate necessary parameters, we have used the data on temperature-dependent current-voltage characteristics of a blue (453 nm) 1 × 1 mm 2 MQW LED reported in [18].Figure 2a demonstrates excellent fitting of the experimental characteristic corresponding to 300 K by Equation ( 1) and shows the parameters j S , ρ S , and m extracted from the fitting.Processing of the data reported in [18] for temperatures varied from 200 K to 440 K, provides the ideality factor m = 1.8 ± 0.1, which is practically constant in the above temperature range.Also, the saturation current density j S varies exponentially with the prefactor j 0 = 5 kA/cm 2 and activation energy E a = 3.06 eV.The activation energy is higher than the energy gap of InGaN QW in the LED active region, but it is lower than the bandgap of GaN barriers cladding the QWs.
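As a practical illustration, the fit of Equation (1) can be performed with a standard nonlinear least-squares routine. The sketch below fits the explicit form V_f(j) = (mkT/q) ln(j/j_S + 1) + ρ_S j to a few synthetic I-V points; the data array is invented for illustration, and only the fitting procedure mirrors the one described above.

```python
import numpy as np
from scipy.optimize import curve_fit

KT_Q = 0.02585  # thermal voltage kT/q at 300 K, in volts

def forward_voltage(j, log10_js, rho_s, m):
    """Equation (1) solved for the forward voltage:
    V_f = (m*k*T/q) * ln(j/j_S + 1) + rho_S * j, with j_S parameterized as 10**log10_js."""
    j_s = 10.0 ** log10_js
    return m * KT_Q * np.log(j / j_s + 1.0) + rho_s * j

# Hypothetical (j, V_f) points in A/cm^2 and V; real data would come from the measured
# I-V characteristic of a large-size LED such as the one in Ref. [18].
j_data = np.array([0.1, 1.0, 5.0, 20.0, 50.0, 100.0])
v_data = np.array([2.55, 2.66, 2.74, 2.82, 2.88, 2.95])

popt, _ = curve_fit(forward_voltage, j_data, v_data,
                    p0=[-22.0, 1e-3, 2.0],                  # log10(j_S), rho_S, m
                    bounds=([-30.0, 0.0, 1.0], [-10.0, 1.0, 5.0]))
log10_js, rho_s, m = popt
print(f"j_S = {10**log10_js:.2e} A/cm^2, rho_S = {rho_s:.2e} Ohm*cm^2, m = {m:.2f}")
```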
The dependence of the internal quantum efficiency η_i on the current density j is derived by using the ABC recombination model considering three main recombination channels: SRH non-radiative recombination with the coefficient A, radiative recombination with the coefficient B, and Auger recombination with the coefficient C. The model neglects the electron leakage from the LED active region to the p-layers of the LED structure, and assumes the concentrations of electrons and holes in the active region to be nearly equal to each other. Within the ABC-model, the current density and IQE of an LED can be calculated by the expressions [19,20]:

j = j_m √p (1 + Q√p + p) / (Q + 2),   η_i = Q√p / (1 + Q√p + p).      (2)

Here, the normalized output power p = P_out/P_m is the ratio of the LED output power P_out at a certain current to the power P_m at the EQE peak, j_m is the current density corresponding to the EQE peak, and Q = B/√(AC) is the dimensionless combination of the recombination constants A, B, and C. Using conventional characterization data, i.e., the dependence of EQE on the LED operating current, one can directly obtain P_m and j_m, and extract the Q-factor by a special procedure suggested in [18]. The data of [18] provide the following approximations for the blue MQW LED: Q(T) = Q_0 exp(−T/T_Q), where Q_0 = 110 and T_Q = 134 K, and P_m(T) = P_0 exp(T/T_P), where P_0 = 0.17 mW and T_P = 67 K. Using them, it is possible to estimate the current density j_m by the expression j_m(T) = Q⁻¹(Q + 2)P_m/(η_ext E_ph S), where S = 0.01 cm² is the active region area, E_ph = 2.74 eV is the mean energy of emitted photons, and η_ext is LEE, linearly decreasing from 71% to 68% as the temperature varies from 200 K to 440 K [18]. Finally, the IQE dependence on j corresponding to a particular temperature can be obtained from Equation (2) by varying the normalized power p in an appropriate range. It has been shown in [18] that Equation (2) fits, very accurately, the experimental EQE dependence on operating current/output power of the LED in a wide range of its variation.
The sheet concentration of carriers n_2D injected in the LED active region can also be obtained within the ABC-model. In terms of the variables discussed above,

n_2D = j_m √p / [q A (Q + 2)].      (3)

In contrast to IQE, evaluation of the sheet carrier concentration requires knowledge of the SRH recombination coefficient A in addition to the parameters j_m and Q. The temperature dependence of the A-coefficient has been reported recently for blue and green LEDs [15]; both data sets can be well approximated by the expression A(T) = A_0 exp(T/T_A), where A_0 = 8000 s⁻¹ and T_A = 65 K. Combination of Equations (2) and (3) provides a parametric dependence of n_2D on the current density j.
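For illustration, the parametric construction of η_i(j) and n_2D(j) from Equations (2) and (3) is sketched below for a single temperature. The parameter values follow the approximations quoted above for the blue MQW LED of Ref. [18]; treat the snippet as a sketch of the procedure rather than a validated reproduction of the published curves.

```python
import numpy as np

Q_EL = 1.602e-19  # elementary charge, C

def abc_curves(T, p):
    """Parametric eta_i(j) and n_2D(j) from the ABC-model, Equations (2)-(3).
    T in K, p is an array of normalized output powers P_out/P_m."""
    Q = 110.0 * np.exp(-T / 134.0)                        # Q(T) for the blue MQW LED [18]
    P_m = 0.17e-3 * np.exp(T / 67.0)                      # W, output power at the EQE peak
    eta_ext = np.interp(T, [200.0, 440.0], [0.71, 0.68])  # LEE of the reference LED [18]
    S, E_ph_eV = 0.01, 2.74                               # cm^2, eV
    A = 8000.0 * np.exp(T / 65.0)                         # 1/s, SRH coefficient A(T) [15]

    j_m = (Q + 2.0) / Q * P_m / (eta_ext * E_ph_eV * S)   # A/cm^2
    sp = np.sqrt(p)
    j = j_m * sp * (1.0 + Q * sp + p) / (Q + 2.0)         # Equation (2), current density
    eta_i = Q * sp / (1.0 + Q * sp + p)                   # Equation (2), IQE
    n_2d = j_m * sp / (Q_EL * A * (Q + 2.0))              # Equation (3), cm^-2
    return j, eta_i, n_2d

p = np.logspace(-3, 2, 200)  # normalized power spanning the droop region
j, eta_i, n_2d = abc_curves(300.0, p)
print(f"peak IQE at 300 K: {eta_i.max():.2f} at j ~ {j[np.argmax(eta_i)]:.1f} A/cm^2")
```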
Evaluation of Surface Recombination Velocity
Surface recombination is accounted for in our simulations via boundary conditions for the 2D carrier transport equations in the LED active region [21]. As was mentioned previously [21], the surface recombination velocity V_S is known for InGaN materials with insufficient accuracy. In order to evaluate its value more accurately, we used the recent data [22] on the size-dependent effective SRH recombination coefficient A', which is related to the perimeter P and area Σ of the LED active region via an approximate equation valid for sufficiently small devices [10,23]:

A' = A + V_S P / Σ.      (4)

Approximating the data of [22] by the above expression (see Figure 3), we have obtained V_S = 7.5 × 10³ cm/s. This value is 7.5 times higher than that assumed in our previous simulations [10]. Due to the lack of data on the temperature dependence of surface recombination velocity, we used the above value in the whole range of temperature variation.
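The extraction of V_S from size-dependent A' data reduces to a linear fit of Equation (4) against the perimeter-to-area ratio (4/d for a square die of side d). The sketch below shows the procedure on invented data points chosen to be consistent with the V_S value quoted above; the measured values of Ref. [22] would replace them in practice.

```python
import numpy as np

# Hypothetical effective SRH coefficients A' for square micro-LEDs of side d;
# actual values should be taken from Ref. [22].
d_um = np.array([5.0, 10.0, 20.0, 40.0, 80.0])            # die side, micrometers
a_eff = np.array([6.1e7, 3.1e7, 1.6e7, 8.5e6, 4.75e6])    # effective A', 1/s

p_over_sigma = 4.0 / (d_um * 1e-4)   # perimeter/area of a square die, in 1/cm

# Equation (4): A' = A + V_S * (P/Sigma)  ->  slope = V_S, intercept = bulk A
v_s, a_bulk = np.polyfit(p_over_sigma, a_eff, 1)
print(f"V_S ~ {v_s:.2e} cm/s, bulk A ~ {a_bulk:.2e} 1/s")
```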
Design of μ-LED Dice and Material Parameters
Design of a flip-chip blue µ-LED die with a truncated-pyramid shape and various inclinations of the side mesa facets, considered in our study, is shown schematically in Figure 4. Every die had a lateral dimension of the base d_b, a lateral size of the InGaN-based active region d_AR, a lateral size of the p-electrode d_p, a mesa depth M, and a width of the n-contact w_n. Variation of the mesa facet inclination angle θ, used in our simulations, did not change the dimensions of the active region (d_AR) or of the µ-LED die (d_b, d_p, and w_n). The thicknesses of the n-GaN contact layer, h_n = 5 µm, and of the p-GaN contact layer, h_p = 0.2 µm, were also fixed in the simulations. Besides, in the case of small µ-LEDs, we considered two kinds of mesas: a shallow mesa with M = 1.0 µm and a deep mesa with M = 2.4 µm. The size of the p-electrode was intentionally chosen much smaller than d_AR, in order to prevent carriers injected into the LED active region from surface recombination at its edges.
The n-GaN contact layer was assumed to be doped with Si up to the donor concentration N D = 1×10 19 cm −3 (ionization energy of donors is 13 meV), and to have a temperature-independent mobility of 100 cm 2 /V•s.The p-GaN contact layer was assumed to be doped with Mg up to the acceptor concentration N A = 3×10 19 cm −3 (ionization energy of acceptors is 170 meV), and to have the hole mobility decreasing with temperature from 10 cm 2 /V•s at 300 K to 6 cm 2 /V•s at 600 K. Temperature dependence of both electron and hole concentrations in the contact layers was accounted for in our simulations.The specific resistances of n-and p-contacts were chosen to be 10 −5 Ω•cm 2 and 10 −4 Ω•cm 2 , respectively.
Reflectivities of the n- and p-electrodes were simulated using the optical constants of silver and gold, respectively [24], these being the basic materials for the contacts. The refractive indices of 2.48 and 1.465 were chosen for GaN and the SiO2 insulating film, respectively, corresponding to the emission wavelength of 453 nm. The main substrate was assumed to be removed after growth of the LED structure. No intentional surface texturing was assumed on free semiconductor surfaces, including the back surface of the n-GaN contact layer, so that the surfaces provided Fresnel reflection of light dependent on its polarization.
The heat was assumed to be released through the top surfaces of the µ-LED dice. The heat sinking was accounted for via a heat transfer coefficient of 1 × 10⁵ W/(K·m²). A temperature-independent heat conductivity of 120 W/(K·m) was chosen for all GaN-based contact layers.
Results
Simulations of µ-LEDs shown schematically in Figure 4 were carried out in two stages.At the first stage, we optimized the shape of the µ-LEDs, including inclination angle of mesa facets and die dimensions.Using the optimized shape of the µ-LED dice, the output characteristics of the devices were calculated at the second stage, in order to compare them with those of large-size LEDs.The results of the simulations are summarized below.
Optimization of Insulating Layer Thickness
One can see in Figure 4 that the µ-LED die contacts mostly with the highly reflective metal through the insulating film made of SiO 2 .This film influences, remarkably, the reflectivity of light emitted from the active region.Figure 5 shows the reflection coefficients of photons with the wavelength of 453 nm versus their incident angle calculated for SiO 2 films of various thicknesses placed on bulk Ag, which is the basic material of n-electrode.The calculations were carried out for two polarizations of the incident light: transverse-electric (TE) and transverse-magnetic (TM), accounting for interference of photons reflected from the SiO 2 /GaN and Ag/SiO 2 interfaces.At small SiO 2 thickness (d SiO2 = 50 nm), the reflectivity of TE-polarized light is higher than 90% in the whole range of the incident angles.However, there is a remarkable dip in the reflectivity of TM-polarized light at the angles close to the Brewster angle of silver.As photons traveling inside the µ-LED die change their incidence angle and polarization from one reflection to another, the above dip results in higher optical losses caused by incomplete reflection and, eventually, in a lower LEE.
When the thickness of the SiO2 film is increased up to a value comparable with the light wavelength in SiO2, i.e., about 305 nm (see Figure 5), the dip magnitude is reduced considerably at the expense of a slight reduction in the reflectivity of TE-polarized light. This occurs due to constructive interference of TM-polarized photons reflected from the SiO2/GaN and Ag/SiO2 interfaces. As a result, LEE of the µ-LED dice with a thicker SiO2 film tends to rise. Increasing the thickness of the film above 250 nm no longer leads to further LEE improvement. Therefore, d_SiO2 = 250 nm has been regarded as the optimal value permanently used in subsequent simulations.
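The angle-dependent reflectivity of the SiO2/Ag mirror seen from inside the GaN can be reproduced with a standard single-film (three-medium) Fresnel calculation. The sketch below assumes n_GaN = 2.48, n_SiO2 = 1.465, and an approximate complex index of silver near 453 nm (n ≈ 0.05 + 2.5i, an assumed literature-typical value rather than the tabulated data of Ref. [24]); it is meant to illustrate the interference origin of the TM-reflectivity dip, not to reproduce Figure 5 exactly.

```python
import numpy as np

WL = 453e-9                              # vacuum wavelength, m
N1, N2, N3 = 2.48, 1.465, 0.05 + 2.5j    # GaN, SiO2, Ag (Ag index is an assumed value)

def ncos(n, n1, theta1):
    """n*cos(theta) in medium n for light incident from n1 at angle theta1 (Snell's law);
    the branch with non-negative imaginary part corresponds to decaying waves."""
    q = np.sqrt(n**2 - (n1 * np.sin(theta1))**2 + 0j)
    return np.where(q.imag < 0, -q, q)

def reflectivity(theta1, d_film, pol):
    """Power reflectivity of the GaN / SiO2(d_film) / Ag stack for 'TE' or 'TM' light."""
    q1 = ncos(N1, N1, theta1)
    q2 = ncos(N2, N1, theta1)
    q3 = ncos(N3, N1, theta1)
    if pol == "TE":
        r12 = (q1 - q2) / (q1 + q2)
        r23 = (q2 - q3) / (q2 + q3)
    else:  # TM
        r12 = (N2**2 * q1 - N1**2 * q2) / (N2**2 * q1 + N1**2 * q2)
        r23 = (N3**2 * q2 - N2**2 * q3) / (N3**2 * q2 + N2**2 * q3)
    beta = 2.0 * np.pi * q2 * d_film / WL          # phase thickness of the SiO2 film
    r = (r12 + r23 * np.exp(2j * beta)) / (1.0 + r12 * r23 * np.exp(2j * beta))
    return np.abs(r) ** 2

angles = np.radians(np.linspace(0.0, 89.0, 180))
for d in (50e-9, 250e-9):
    print(f"d_SiO2 = {d*1e9:.0f} nm: min TM reflectivity = "
          f"{reflectivity(angles, d, 'TM').min():.2f}")
```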
Optimization of µ-LED Die Shape
Shape optimization for the µ-LED dice was carried out by SimuLED package [16] using LEE to the bottom hemisphere of the die (see Figure 4) as the optimization target.LEE was determined by 3D ray tracing assuming light extraction to air and accounting for polarization of light.Uniform distribution of the emission intensity over the µ-LED active region is assumed in the simulations.Two million rays were found to be sufficient for predicting LEE with an accuracy of 0.1%.We have considered both small and large µ-LEDs (see Section 2.3); in the former case, we have performed the simulations for a shallow (M = 1.0 µm) and a deep (M = 2.4 µm) mesa.Variation of the mesa-facet inclination angle was made in a wide range only limited by a particular geometry of the µ-LED dice.
Figure 6 demonstrates a non-monotonous dependence of LEE to the bottom hemisphere on the mesa-facet inclination angle, computed for the above µ-LED designs.In all three cases considered, there are two distinct maxima of LEE with a minimum between them.The strongest LEE maximum is observed at glancing angles of 8-13 • .This effect can be interpreted in terms of light propagation in a laterally tapered waveguide formed by the bottom surface of the µ-LED die, and inclined facets of the mesa.The taper turns directions of photons propagating inside the µ-LED die at every reflection from the n-electrode providing, eventually, effective light extraction through the bottom surface of the die (see inset in Figure 6).The magnitude of the second LEE maximum is lower than that of the first one by a factor of about 1.5.It exists, tentatively, due to micro-reflectors formed by the inclined mesa facets, largely focusing the emitted light in the direction normal to the bottom surface of the µ-LED die.A minimum of LEE, observed in Figure 6 between 35 • and 45 • , corresponds to non-optimal facet inclinations producing high optical losses of light captured by the µ-LED die.
Breakdown of the optical losses (not shown here) has revealed two principal loss mechanisms. The strongest one is incomplete light reflection from the metallic n-electrode. The losses relevant to this mechanism consume from 25% (at the optimal shape of the die) to 55% of emitted photons, with the strong maximum corresponding to non-optimal facet inclinations, i.e., at inclination angles between 35° and 45°. The latter means that non-optimal inclination of the mesa facets provides a larger number of light reflections from the n-electrode than in the case of the optimized µ-LED shape.
The second most important mechanism of optical losses is free-carrier absorption in the thick, heavily doped n-GaN contact layer. The donor concentration of 1 × 10¹⁹ cm⁻³ provides, here, a room-temperature free-electron absorption coefficient of 10.7 cm⁻¹. The losses by this mechanism consume from 10% (at the optimal shape of the die) to 25% of emitted photons, depending on the particular µ-LED design, with a rather weak dependence on the inclination angle.
Comparison of LEEs of large and small µ-LEDs with different mesa depths enables the following conclusions. First, at the same mesa depth and thicknesses of the n- and p-contact layers, the large µ-LED has a remarkably lower LEE. Second, a deeper mesa in the small µ-LED provides a higher LEE. These conclusions are important, pointing out the fact that maximum LEE can be achieved if (i) the length of the inclined facets exceeds, remarkably, the lateral size of the active region, and (ii) the mesa depth is comparable with the total thickness of the LED structure.
Current Density-Dependent Light Extraction Efficiency
The above simulations, aimed at optimizing the shape of the µ-LED dice, were carried out by assuming the emission intensity to be uniformly distributed over the active region.This assumption is, however, invalid at high-current operation of µ-LEDs because of current crowding.In order to take this effect into account, we have carried out self-consistent electrical-thermal-optical simulations of µ-LEDs, with optimized shapes of the dice: d AR = 5 µm, M = 2.4 µm, and θ = 13 • for small, and d AR = 20 µm, M = 2.4 µm, and θ = 8 • for large devices.
Figure 7 shows LEEs to the bottom hemisphere of large and small µ-LEDs as a function of the mean current density in the active region, which is the ratio of the operating current to the active region area. One can see that, in both types of µ-LED, LEE starts to decrease towards higher current densities. The reason for the LEE reduction is localization of the non-equilibrium carriers under the poorly reflective Au-based p-electrode. This is clearly seen in the 2D-distributions of sheet carrier concentration in the active region displayed in the insets of Figure 7. At a low current density, the carriers are more or less uniformly distributed, producing LEE close to that predicted for uniform emission intensity in the active region. At a high current density, the carriers are largely localized under the p-electrode, exhibiting higher optical losses caused by incomplete light reflection from the metal. The higher the current density, the stronger the current crowding, and the lower LEE becomes, as is shown in Figure 7. The predicted evolution of LEE (η_ext) with the mean current density j can be well approximated by the function

η_ext(j) = η_0 − ∆η / [1 + (j_c/j)^γ],

where η_0 is the low-current LEE, ∆η is the magnitude of the LEE reduction at high currents, j_c is the current density at which current crowding reduces LEE by half of ∆η, and γ is the specific exponent. We have found η_0 = 54.4%, ∆η = 4.3%, j_c = 230 A/cm², and γ = 1.5 for small µ-LEDs, and η_0 = 42.6%, ∆η = 8.3%, j_c = 26 A/cm², and γ = 0.9 for large µ-LEDs. Comparison of the parameters shows that current crowding starts to affect LEE at lower current densities, and produces a higher LEE reduction, in large devices. This conclusion is in line with those made in our previous study on scaling µ-LED dimensions [10].
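A minimal sketch of this approximation is given below, using the fitted parameter sets quoted above and assuming the Hill-type form written above (reconstructed here from the description of its parameters):

```python
import numpy as np

def lee(j, eta0, d_eta, j_c, gamma):
    """Current-dependent LEE: eta_ext(j) = eta0 - d_eta / (1 + (j_c/j)**gamma)."""
    j = np.asarray(j, dtype=float)
    return eta0 - d_eta / (1.0 + (j_c / j) ** gamma)

# Parameter sets fitted to the simulated LEE(j) curves of Figure 7.
SMALL = dict(eta0=0.544, d_eta=0.043, j_c=230.0, gamma=1.5)   # 5 x 5 um^2 active region
LARGE = dict(eta0=0.426, d_eta=0.083, j_c=26.0,  gamma=0.9)   # 20 x 20 um^2 active region

for j in (10.0, 100.0, 1000.0):   # mean current density, A/cm^2
    print(f"j = {j:7.1f} A/cm^2:  LEE(small) = {lee(j, **SMALL):.3f},"
          f"  LEE(large) = {lee(j, **LARGE):.3f}")
```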
Output Characteristics of µ-LEDs
The output characteristics of µ-LEDs have been simulated for small and large devices, having optimized mesa facet inclinations, with account of current-dependent LEE. Figure 8a demonstrates that current density-voltage characteristics of both µ-LEDs are close to each other with the slope, corresponding to specific series resistance of 2.71 ± 0.05 mΩ•cm 2 .Analysis of the electric potential distributions in the µ-LED dice has shown that the major contribution to the specific series resistance of the devices comes from the p-contact.Indeed, because of the small area of p-electrode (see Section 2.3 for description of µ-LED design), the current density at the p-electrode is 25 times higher than the mean current density in the µ-LED active region.Therefore, at the specific contact resistance of 10 −4 Ω•cm 2 assumed in our simulations, this provides the µ-LED-specific series resistance of 2.5 mΩ•cm 2 , which is slightly less than the value estimated directly from the current density-voltage characteristics shown in Figure 8a.
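The contribution of the p-contact to the series resistance follows from simple area scaling. A short check of the numbers quoted above (the 5 µm active-region and 1 µm p-electrode sizes are those of the small µ-LED design described in Section 2.3):

```python
# Specific series resistance contributed by the p-contact, referred to the active-region area.
d_ar = 5.0e-4      # lateral size of the active region, cm (5 um)
d_p = 1.0e-4       # lateral size of the p-electrode, cm (1 um)
rho_c_p = 1.0e-4   # specific p-contact resistance, Ohm*cm^2

area_ratio = (d_ar / d_p) ** 2        # current-density enhancement at the p-contact
rho_series = rho_c_p * area_ratio     # contribution referred to the active-region area
print(f"current-density enhancement: {area_ratio:.0f}x")               # 25x
print(f"p-contact series resistance: {rho_series*1e3:.1f} mOhm*cm^2")  # 2.5 mOhm*cm^2
```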
Figure 8b displays the dependence of the average temperature in the active region on the mean current density. One can see that, at the same current density, the large µ-LED exhibits stronger self-heating compared to the small device. The reason for the effect is stronger current crowding in the large µ-LED, as was discussed in detail earlier [10]. A considerable overheating of the LED active region, by more than 50 K, is predicted for current densities greater than 1.5-2.0 kA/cm². Below 1 kA/cm², the thermal effects, including the thermal efficiency droop, may be regarded as negligible.
Simulated EQEs of the µ-LEDs are plotted versus the mean current density in Figure 8c. Here, the total LEE to air, and its dependence on the current density, has been taken into account, instead of LEE to the bottom hemisphere. First, the current density corresponding to the efficiency peak is shifted to higher values in the small µ-LED. Being in agreement with a general trend of scaling the µ-LED dimensions, this effect originates from surface recombination at the active region edges, which is especially critical in small-size devices [10]. The relative difference between the peak EQEs of large and small µ-LEDs, i.e., 31.7% and 29.7%, is smaller than the difference in their IQEs, i.e., 68% and 51%, respectively, which is due to the higher LEE of the small µ-LED, as demonstrated in Section 3.2. The relative difference between the peak wall-plug efficiencies (WPEs) of large and small µ-LEDs, i.e., 30.8% and 27.0%, respectively, is higher than the difference between the EQEs. This is attributed to the fact that the peak EQE of the small µ-LED corresponds to a higher current density, producing a higher operating voltage, thus reducing WPE.
The carrier losses by surface recombination at the current densities of the EQE peak are found to be nearly 15% and 20% for large and small devices, respectively. Such relatively small losses are due to the small dimensions of the p-electrode, localizing the injected carriers far from the active region edges.
Discussion
The most important conclusion following from our simulations is that optimizing the shape of a µ-LED die enables considerable LEE improvement, as compared to large-size devices. Since no profiled sapphire substrate or intentional texturing of emitting surfaces was assumed in the simulations, encapsulation of the µ-LEDs should give an additional rise to LEE. Indeed, LEE to the bottom hemisphere into air predicted for the small µ-LED with optimized inclination of the mesa facets approaches 54.4%, whereas the total LEE is as high as 58.8%. If light is extracted to silicone with the refractive index of 1.41, LEE to the bottom hemisphere and total LEE become equal to 66.9% and 79.8%, respectively. The latter value is higher than the experimental LEE reported in [25] for conventional large-size blue flip-chip LEDs with highly reflective electrodes made of Al or Ag, which do not utilize patterned sapphire substrates. The shape effect is most pronounced if the mesa depth and active region lateral dimensions are comparable by the order of magnitude with the total thickness of the LED structure. The use of either a shallower mesa or remarkably larger active region dimensions results immediately in a lower LEE. Further LEE improvement is expected to be gained by texturing the back surface of the n-GaN contact layer with nanoscale structures, like moth-eye [26], and by lowering the donor concentration in the n-GaN contact layer to reduce free-carrier absorption of emitted light.
Peak EQEs predicted for large and small µ-LEDs with light extraction to air are 31.7% (achieved at the mean current density of 4.5 A/cm²) and 29.8% (achieved at the current density of 50.2 A/cm²), respectively. In the case of light extraction to silicone, the peak EQE values become equal to 43.1% for large, and 40.4% for small, µ-LEDs. These values are close to each other due to the fact that the remarkable difference in IQEs of large and small µ-LEDs (see Section 3.4) is compensated by the opposite difference in their LEEs. It is important that the peak EQE of the small µ-LED is achieved at a current density an order of magnitude higher than in the large device, which is advantageous for some µ-LED applications. Also, the EQE predicted for small µ-LEDs encapsulated with silicone is comparable with that reported in [3] for 10 × 10 µm² devices. This means that controlling the shape of µ-LED dice, as an approach to EQE improvement, is at least as effective as that based on the use of patterned sapphire substrates.
Maximum WPEs simulated for large and small µ-LEDs with light extraction to air are 30.8% and 27.0%. In the devices encapsulated with silicone, they rise up to 41.8% and 36.6%, respectively. Here, the difference in WPEs of small and large devices is larger than the difference in their EQEs. This is because the WPE peak in the small µ-LED is achieved at higher current densities, producing a higher operating voltage and reducing WPE.
The µ-LED chip design considered in our study utilizes a small (d p = 1.0 µm) p-electrode aimed at suppression of surface recombination by localization of current far from the active region edges.However, this results in a high specific series resistance of the µ-LED leading, eventually, to a lower WPE of the device.In order to understand whether it is possible to optimize the p-electrode dimensions, we have simulated characteristics of µ-LEDs with d p = 2.0 µm and 3.2 µm.The most important results of the simulations are shown in Figure 9.
Current density-voltage characteristics of the µ-LEDs with different p-electrode dimensions are compared in Figure 9a.Increasing the lateral size of the p-electrode, from 1.0 µm to 3.2 µm, results in a dramatic improvement of the series resistance, from 2.71 mΩ•cm 2 to 0.33 mΩ•cm 2 .Comparison of WPEs of the µ-LEDs, given in Figure 9b, shows the following trends.First, the current density corresponding to WPE peak is shifted to higher values, and the peak WPE lowers at larger d p , which is evidence for higher carrier losses for surface recombination.On the other hand, high-current WPE grows when d p is enlarged.The latter effect can be explained by reduction of the µ-LED series resistance at larger d p , thus lowering the operating voltage of the device.Hence, the optimal p-electrode dimensions depend on desirable µ-LED operation conditions.If the device is assigned for operation at peak WPE, then the use of small p-electrode is preferable.By contrast, high-current operation of the µ-LED requires utilizing larger p-electrode sizes.In particular, µ-LEDs with d p = 3.2 µm are predicted to have WPE higher than in the case of d p = 1.0 µm, by a factor of two, at 1 kA/cm 2 (see Figure 9b).
Let us now discuss the roles of current crowding and surface recombination in the operation of µ-LEDs. There is a common opinion that current crowding is not important in such small devices as µ-LEDs. Our simulations, comparing µ-LEDs with 5 × 5 µm² and 20 × 20 µm² active regions, show that this is not the case. This can be clearly seen, in particular, from Figure 7, demonstrating essentially different LEE dependences on the current density for small and large µ-LEDs. Since the dependence originates from localization of the injected carriers under the poorly reflective p-electrode, enhanced with current, this effect can be unambiguously attributed to different current crowding in small and large devices. Another signature of the substantial current crowding is the different self-heating of small and large µ-LEDs at high current densities, producing a corresponding thermal droop of the emission efficiency (see Figure 8b). Hence, current crowding still remains an important factor, which should be accounted for in further developments of efficient µ-LEDs.
Our simulations were carried out with the surface recombination velocity V S = 7.5 × 10 3 cm/s, obtained in [22] for vertical side walls of the mesa likely formed by non-polar facets of wurtzite crystal.On the other hand, the use of small angles, θ, of the facet inclination, which is beneficial for high LEE, may alter the surface recombination velocity.Indirect evidence for this is a large scatter in the data on V S in InGaN and GaN, reported for various crystal orientations (see [21] for a more detailed literature review on this issue).Therefore, experimental investigations into the crystal orientation dependence of surface recombination velocity are quite desirable for future developments of µ-LEDs.If the dependence of V S on the orientation of the active region facets is found to be strong enough, the proper choice of the mesa-facet orientation/inclination may become an additional degree of freedom for optimization of µ-LEDs with embedded micro-reflectors.
In this study, we have tested, successfully, a novel approach to µ-LED simulation, utilizing the temperature-dependent characterization of large-size devices as the input data.The approach uses quite limited information on the operation of LED structure (see Section 2.1).Due to the current lack of reliable experimental data and theoretical results on temperature-dependent recombination coefficients, the above approach becomes advantageous over the detailed simulations of LED structures from the predictability point of view.Operating with a limited number of integral parameters, it is not even required to know a particular LED structure.On the other hand, there are some limitations on the use of the approach.Based on the ABC-model, the approach fails in prediction of IQE of red AlInGaP LEDs, as the high-current efficiency droop in those devices originates from electron leakage to p-side of the LED structure, rather than from Auger recombination assumed in the ABC-model.In the case of true-green InGaN-based LEDs, a more complex model should be applied, in order to approximate, properly, the IQE dependence on the current density and temperature [27].
Figure 1. (a) Schematic µ-LED design from Ref. [3], where triangles indicate the features of the profiled sapphire substrate and the insulating film serves as an omnidirectional reflector; (b) Schematic design of a flip-chip µ-LED with an internal micro-reflector and removed substrate. Blue arrows show selected pathways of emitted photons.
Figure 2. (a) Current-voltage characteristic of a blue LED at 300 K reported in [18] (symbols); the solid line presents its fitting by Equation (1); (b) dependence of the saturation current density j S on inverse temperature (symbols) and its approximation by an Arrhenius curve (line).
Figure 3. Dependence of the effective Shockley-Read-Hall (SRH) recombination coefficient A' on the size of square-shaped blue µ-LEDs. Symbols are experimental points borrowed from Ref. [22]; the line is their approximation by Equation (4).
Figure 4. Schematic design of µ-LED chips considered in this study, with the most important geometrical parameters: lateral size of the active region d AR , lateral dimension of the base d b , size of the p-electrode d p , mesa depth M, width of the n-contact w n , thickness of the n-GaN contact layer h n , and mesa facet inclination angle θ.
Every die had a lateral dimension of the base d b , a lateral size of the InGaN-based active region d AR , a lateral size of the p-electrode d p , a mesa depth M, and a width of the n-contact w n . Variation of the mesa facet inclination angle θ, used in our simulations, did not change the dimensions of the active region (d AR ) or the µ-LED die (d b , d p , and w n ). The thicknesses of the n-GaN contact layer, h n = 5 μm, and the p-GaN contact layer, h p = 0.2 μm, were also fixed in the simulations.
Figure 5. Reflectivity of the SiO2/Ag stack as a function of photon incidence angle, calculated for various thicknesses of the SiO2 film and different light polarizations: TE (dash-dotted lines) and TM (solid lines). The wavelength of light is 453 nm.
Figure 6. Light extraction efficiency (LEE) to the bottom hemisphere into air computed for small and large µ-LEDs with different mesa depths. Insets show, schematically, the shapes of the µ-LED dice and selected photon pathways corresponding to the first and second maxima of LEE.
Figure 7. LEE to the bottom hemisphere into air as a function of mean current density computed for small and large µ-LEDs, with M = 2.4 µm and optimized facet inclinations; symbols are simulation results, lines are approximations by Equation (5). Insets show 2D distributions of the sheet carrier concentration in the active region of the large µ-LED corresponding to the low and high current densities indicated by arrows; inner squares correspond to the p-electrode positions. The color scale spreads from zero to the maximum concentration at every particular current density.
Figure 8. (a) Current density-voltage characteristic; (b) average temperature of the active region; (c) EQE; (d) wall-plug efficiency (WPE) as a function of mean current density simulated for small (solid lines) and large (dash-dotted lines) µ-LEDs of optimized shapes.
Figure 9. (a) Current density-voltage characteristics; (b) WPE dependence on the current density simulated for small µ-LEDs with p-electrodes of various lateral sizes. | 13,129.2 | 2018-10-27T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Photochemistry with laser radiation in condensed phase using miniaturized photoreactors
Miniaturized microreactors enable photochemistry with laser irradiation in flow mode to convert azidobiphenyl into carbazole with high efficiency.
Introduction
Classical combinatorial chemistry [1,2] approaches usually aim at the synthesis of multi-milligram amounts of new compounds to extend screening decks used in multiple screening campaigns [3]. An alternative method enabled by the maturing microreaction technology and the use of flow chemistry [4][5][6] is the integration of synthesis and screening in one integrated lab-on-a-chip approach [7].
Using this methodology we have integrated photochemistry in a miniaturized reaction setup to enable combinatorial flow chemistry in lab-on-a-chip applications.
Scheme 1: Synthesis of carbazole (2) by photolysis.
The influence of photons, which are delivered via a suitable light-transparent window, on the processes running in miniaturized photoreactors is investigated with a focus on increasing the yield and selectivity as well as decreasing the reaction time. Photochemistry with laser radiation is a promising tool to broaden the application spectrum of miniaturized systems by facilitating a powerful activation step due to a wide range of available wavelengths and energy ranges [29,30]. Moreover, the optical systems can be designed in a way that the reaction initiation by photons and an additional online analysis of the running reaction are feasible.
Design and fabrication
In order to realize photochemical synthesis, several reactors and small reactor arrays with reaction volumes of approximately 1 mL down to 35 µL were developed. These reactors were especially designed for the stimulation of photochemical reactions (UV-vis radiation) as well as for demanding reaction conditions, such as the rapid elevation of temperature (with pulsed IR-laser radiation) or pressure pulses (due to the evaporation of the solvent upon the introduction of energy).
Several microstructured reactor types were designed and produced for reactions in the liquid phase. They are equipped with quartz-glass cover plates, transparent to the laser radiation, pressed onto an appropriate sealing material. Moreover, channels suitable for the mixing and reaction of two or more isopycnic solutions were built into a polymer block by mechanical treatment [31]. The provision of bubble-free fluid is ensured in this case by microchannels in at least two levels, which are built from correspondingly structured layers. These reactors were made of polyether ether ketone (PEEK) and polytetrafluoroethylene (PTFE) [32] to study the influence of side reactions with the reactor material, which could reduce the yield of the desired reaction product. The multilayer system is placed in a stainless-steel frame.
With this type of reactor, it is possible to realize a series of reactions in parallel by arranging the reactor chambers in an (n × m) matrix. The microreactors applied for this study have four reaction chambers with varying volumes of the chambers due to increasing depth, and different connections for the reagent entrance ( Figure 1).
During the past few years, we successfully employed triazene resins, such as 4, which are readily available from aniline in the synthesis of a library of aromatic derivatives [45][46][47][48]. Moreover, triazene-resins are perfectly suitable for the synthesis of arylazides 5 (Scheme 2) [49]. The photochemical decomposition of arylazides into carbazoles is appropriate for application in miniaturized photoreactors, since significant results can be observed by an online analysis through HPLC and GC [50]. Because of the miniaturization, online analysis is especially suitable for our setup.
We therefore investigated whether the photoreaction can be realized in miniaturized photoreactors and to what extent the use of a laser as a photon source is advantageous. The irradiation of 2-azidobiphenyl (1) in methanol with a conventional xenon lamp (400 W, λ > 345 nm) required 18 h for 50% yield (95% selectivity) in a 10 mm cell with an 8 mm light-exposure diameter ( Figure 2). Frequency-tripled Nd:YAG laser radiation (λ = 355 nm, 8 kHz pulse frequency; pulse duration 26 ns) was chosen because the wavelength is close to that of the applied UV-lamp, 355 nm is usually within the absorption area of azides, and this laser type is commonly used in most laser labs. We applied a single-pulse power of 0.16 to 3 W resulting in pulse energies between 4 and 87 nJ and energy densities of approximately 0.02 to 0.17 µJ/cm 2 within a defocused laser spot of 0.2 to 0.5 cm 2 , to carry out the same reaction (Scheme 1), but carbazole was obtained much faster from 2-azidobiphenyl (1). Compared to conventional UV sources, the use of laser irradiation clearly accelerated the reaction: from 18 h (Xe lamp, Figure 2) to 30 s (Nd:YAG laser) for 50% yield and 95% selectivity, calculated from the data presented in Figure 3. This reaction was successfully carried out in a miniaturized photoreactor (Figures 3-6).
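The per-pulse energy densities quoted above follow directly from the pulse energies and the defocused spot area; the short calculation below merely reproduces this arithmetic, assuming the smallest pulse energy pairs with the smallest spot and the largest with the largest.

# Per-pulse energy density (fluence) from the quoted pulse energies and defocused
# spot areas; the pairing of energies with spot sizes is an assumption about how
# the quoted ranges correspond.
for E_pulse_nJ, A_spot_cm2 in ((4.0, 0.2), (87.0, 0.5)):
    fluence_uJ_cm2 = E_pulse_nJ * 1e-3 / A_spot_cm2   # nJ -> uJ, per cm^2
    print(f"{E_pulse_nJ:5.1f} nJ over {A_spot_cm2:.1f} cm^2 -> {fluence_uJ_cm2:.3f} uJ/cm^2")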
The monomolecular reaction can be realized by using laser radiation of 355 nm wavelength as a photon source, in a clean way, avoiding almost completely the formation of the undesired diazo derivatives 3. The side reaction is supposedly reduced due to a lesser effect of heating owing to the small bandwidth irradiation and minimized exposure time through the miniaturized flow setup.
During these tests, it was shown that a largely better selectivity can be achieved, compared to the one obtained in a standard UV irradiation setup (Figure 2). Experiments were performed in batch as well as flow-injection configuration. The continuous process used allowed us to vary the residence time in the reactor by regulating the flow speed of the reactant solution, with the help of a syringe pump ( Figure 5).
For this study, reactors made of PEEK, as well as PTFE reactors were used, leading to similar yields of carbazole ( Figure 3 and Figure 4), showing no major influence of the reactor material on the reaction.
The deviations from linearity in the low power area of Figure 4 can be attributed to fluctuations of the laser power. For the high-power area, a correlation between yield, power and reaction time, which can be explained by kinetics, is observed.
Conclusion
The preparation and application of polymeric, miniaturized photoreactors, equipped for the effective use of photons in the reaction chamber, provided by frequency-converted laser sources, was successfully shown.
With these reactors or reactor components, the photonic influence on reactions in miniaturized photoreactors was proven to be useful in parameter studies in which laser power and flow rate were varied.
The advantages of laser chemistry in the condensed phase compared to standard photochemical approaches have been shown in this preliminary study, proving the suitability of laser photochemistry for organic synthesis. Thanks to the further miniaturization and the availability of new moderately priced laser systems even better suited beam sources can be provided for photochemistry.
In the described experiments, laser radiation of 355 nm wavelength (frequency-tripled Nd:YAG) was used. Since the spectral range of interest for most photoreactions extends from the ultraviolet to the visible region, tunable laser systems (optical parametric oscillators) feature promising properties for use in photochemical experiments. Thus, the irradiation wavelength can be adapted to the needs of the reaction (e.g., to a shifted absorption maximum of the reactant due to substitution), facilitating a large range of applications of this technique. Furthermore, IR laser sources (diode laser, Nd:YAG laser, CO2 laser) could be applied for pulsed temperature and pressure elevation in microreactors, as well as microwave stimulation, to accelerate reactions.
Experimental
All starting materials and products were characterized by standard techniques (1H NMR, 13C NMR, and elemental analysis) and compared with authentic samples. The products were analyzed by GC-MS (internal standard: dodecane) and/or HPLC.
A solution of 2-azidobiphenyl (1) was continuously added to a miniaturized reactor of type II (see Figure 1, dimension of the reactor chamber 3 × 5 mm, 35 μL volume) with a syringe pump. The chamber was continuously irradiated with a Nd:YAG laser (355 nm). At a constant flow of 26 mL/h, the laser-pulse power was varied from 0.16 to 1.28 W. Furthermore, at a constant intermediate power of 0.92 W the flow rate (10 to 100 mL/h) and therefore the dwell time (exposure time) in the reactor was varied. The yield was determined by HPLC.
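The dwell (exposure) time corresponding to each flow rate follows from the chamber volume; a minimal calculation for the 35 µL chamber and the flow rates used here is given below.

# Residence (dwell) time in the 35 uL reactor chamber as a function of the flow rate.
V_chamber_mL = 0.035                     # 35 uL chamber volume
for Q_mL_per_h in (10, 26, 50, 100):     # flow rates used in the experiments
    t_res_s = V_chamber_mL / Q_mL_per_h * 3600.0
    print(f"{Q_mL_per_h:3d} mL/h -> residence time {t_res_s:5.2f} s")

At the constant flow of 26 mL/h used for the power series, the dwell time in the chamber is roughly 5 s; varying the flow from 10 to 100 mL/h spans roughly 12.6 s down to 1.3 s.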
Supporting Information
Supporting Information File 1 Description of the flow reactor setup, kinetics, experimental procedures and spectroscopic data of all compounds. | 1,904 | 2012-07-31T00:00:00.000 | [
"Chemistry"
] |
Effect of Receiver's Tilted Angle on the Capacity for Underwater Wireless Optical Communication
This paper focuses on the effect of the receiver's tilted angle on the capacity of a clear ocean water Underwater Wireless Optical Communication (UWOC) system. To achieve this goal, the relationship between the channel capacity and the receiver's tilted angle is investigated. First, we propose a double-exponential fading model with pointing error, which can more accurately depict the channel of clear ocean water UWOC than the traditional Beer's law model. Based on this channel model, we present the closed-form expression of the capacity bounds of the UWOC system. Then, an optimization problem is formulated to improve the capacity by tilting the receiving plane. Both theoretical analyses and simulation results verify that the capacity bounds of UWOC can be enhanced dramatically by tilting the receiver plane at an optimal angle. Thus, in practice, we can provide an effective design strategy for a UWOC system.
Introduction
Underwater wireless optical communication (UWOC) has attracted considerable attention due to its high bandwidth and large data rates compared with conventional acoustic communications [1][2][3]. However, in UWOC, the received signal suffers severe attenuation caused by the optical properties of the water channel, namely absorption and scattering, which is defined as channel loss in [4,5] and degrades the system performance. Besides this, for successful wireless optical communication, the optical beam needs to be highly directive. Practically, however, due to the misalignment between the transmitter and receiver, so-called pointing loss is incurred [5,6]. Obviously, the pointing loss effect will impair the system performance dramatically.
So far, many studies have been conducted on the effect of misalignment of the transmitter and receiver on the received intensity [4][5][6][7][8]. However, in the above literature, the receiver plane was fixed and could not be tilted, which greatly limited the performance of the UWOC system. Although [9,10] allowed for the tilting or movement of the receiver plane, no further consideration was given to theoretically optimize the performance of UWOC by tilting the receiver plane to overcome the pointing loss.
Channel capacity is an important indicator for evaluating the system performance of a communication link. For a visible light communication (VLC) system, closed-form expressions of tight bounds on the capacity are presented in [11,12]. However, for UWOC, analyzing the capacity is still a work in progress [13,14] because the underwater channel model is difficult to depict with a closed-form expression. Generally, Beer's law, adopted in [7][8][9], is used to describe the underwater channel loss and to evaluate the system performance; however, it overlooks the indirect path. Compared with traditional Beer's law, a double-exponential model originally used in [15,16] can more accurately depict the UWOC channel loss, but neither of them considered the pointing loss between the transmitter and receiver due to receiver misalignment.
Based on the research results on the channel loss in [8,15] and the tight bounds on the capacity in [11], and considering the pointing loss, in this paper, we propose a new double-exponential channel model with a pointing error angle existing in a practical environment to thoroughly study the capacity of a UWOC system. Moreover, we consider the optimizing method of the capacity performance of the UWOC system by tilting the receiver plane. Numerical and simulation results both verify the effects of tilting the receiver plane on the capacity enhancement.
The remainder of the paper is organized as follows. The double-exponential with pointing error angle UWOC model is presented in Section 2. In Section 3, we derive the theoretical expressions of the capacity bounds for the UWOC channel. In Section 4, the optimization problem is formulated to improve the capacity by tilting the receiver plane; Section 5 presents numerical and simulation results of the above-mentioned optimization problems; and finally, we conclude the paper in Section 6.
System Model
The UWOC system model is illustrated in Figure 1. Point light source O is assumed to be the origin of the 3D coordinates; the receiver moves within a quarter circle plane. (For simplicity of analysis, only the 1/4 circle plane is considered, and the quadrant where the receiver is located is set as the first quadrant.) Let the coordinates of the light source and receiver be [0,0,0] and [x 0 , y 0 , z 0 ], respectively. The field of view (FOV) of the receiver is 180°, and d is the distance between the light source and the receiver.
We also assume that the beam of the light source is always aimed at the receiver. For the receiver, the vector from the receiver to the light source is defined as V or ; V n is the unit normal vector perpendicular to the plane of the receiver. β is the incidence angle, which is also the pointing error angle in [5]. θ is the receiver's tilted angle. To further simplify the problem analysis, we set the Z axis and the vectors V or and V n as coplanar. In this way, the relationship between them is shown in the dashed box in the upper-right corner of Figure 1.
Assuming that the transceiver is located in a clear ocean water situation, and considering the channel fading of the UWOC and the noise term of the receiver, the received current signal is a combination of the transmitted signal and the path loss in the link; referring to Cox's link equations considering the path loss [8], the received signal has the following expression:

y = τ L P s x + n, (1)

where τ = ηe/hv is the photoelectric transformation coefficient, in which η is the quantum efficiency, h is Planck's constant, v is the frequency of light waves in seawater, and e is the electron charge. P s is the transmitting power; x ∈ {0, 1} is the transmitting bit of the on-off keying (OOK) intensity modulation.
n is the background noise of the receiver and can be simulated as Gaussian white noise with a mean value of zero and variance of σ². In (1), the path loss L can be described as in Equation (2), where τ channel is the channel loss, which results from absorption and scattering. Compared with conventional Beer's law, the double-exponential channel loss model can more accurately depict the channel fading in a clear ocean water type [15,16]. Therefore, it can be modified and depicted as in Equation (3), where C 1 , C 2 , C 3 , and C 4 are the fitting coefficients obtained by Monte Carlo simulations. However, when misalignment of the receiver and transmitter occurs, the pointing loss must be considered. As shown in [6], it can be expressed as in Equation (4), where β is the pointing error angle.

Inserting (3) and (4) into (2), the double-exponential channel loss model with pointing error is written as Equation (5). According to spatial analytic geometry, the pointing loss cos β can be expressed in the form of Equation (6), where the vector V n = [cos ϕ sin θ, sin ϕ sin θ, cos θ] and V or = [−x 0 , −y 0 , −z 0 ]; substituting both of them into (6), it simplifies to Equation (7), where ϕ is the azimuth angle formed by the positive direction of the X axis and the projection of V n on the horizontal plane. In fact, since V n , V or and the Z axis are coplanar, ϕ is totally determined by the coordinates of the receiver, as shown in Figure 2. As can be seen from Figure 2, the azimuth angle ϕ can be expressed as in Equation (8).

As shown in (7), the pointing loss is a function of the tilted angle θ at a fixed distance. If we tilt the receiver plane, the pointing loss will change accordingly.
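A minimal numerical sketch of this geometric dependence is given below. It considers the coplanar case described above, with the receiver normal tilted by θ toward the source; the fitting coefficients C1-C4 are placeholders, since the actual values come from Monte Carlo fits for clear ocean water.

import numpy as np

# Source at the origin; the receiver sits a horizontal distance r from the optical
# axis and a vertical distance z below the source.  The receiver normal is tilted by
# theta from the vertical within the vertical plane containing source and receiver
# (the coplanar case assumed above), so beta = |beta0 - theta|, with beta0 the
# incidence angle of the untilted receiver.
def cos_beta(r, z, theta):
    beta0 = np.arctan2(r, z)
    return np.cos(beta0 - theta)

def channel_loss(d, C=(0.1, 0.03, 0.9, 0.15)):
    # Double-exponential channel loss; C1..C4 are placeholder fitting coefficients.
    C1, C2, C3, C4 = C
    return C1 * np.exp(-C2 * d) + C3 * np.exp(-C4 * d)

r, z = 8.0, 19.25                        # receiver position, m
d = np.hypot(r, z)                       # source-receiver distance
for theta_deg in (0.0, 10.0, 22.6, 30.0):
    cb = cos_beta(r, z, np.radians(theta_deg))
    L = channel_loss(d) * cb             # path loss with pointing error, as in Eq. (5)
    print(f"theta = {theta_deg:5.1f} deg, cos(beta) = {cb:.3f}, L = {L:.4f}")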
Capacity Bounds for UWOC
According to the work of Wang et al. [11], combined with the path loss in UWOC mentioned in (2), the lower and upper bounds on the channel capacity for the UWOC are expressed as Equations (9)-(11), where µ* ∈ [0, 1] is the solution to an auxiliary equation. Based on (5), (7), and (9) mentioned above, it can be seen that when the distance d is fixed, the lower bound on the channel capacity is a function of the receiver's tilted angle θ alone. Changing the receiver's tilted angle θ therefore makes it possible to obtain the optimal lower bound on the capacity of UWOC at a given distance.
Optimization Problem: Formulation and Solution
The above description thus turns into a mathematical optimization problem. To this end, the capacity optimization problem is first formulated, then it is shown to be a simple convex optimization problem, and lastly the theoretical expression of the optimal tilted angle is obtained.
Description of the Capacity Optimization Problem
Taking the maximum lower bound of UWOC capacity as the optimization target and considering the limit of the receiver's tilted angle, the optimization problem can be formulated as

max θ C Low   s.t. 0 ≤ θ ≤ π/2   (12)
Solution of the Optimization Problem
Based on (9), the channel capacity of UWOC is an increasing function of the path loss L with pointing error. Therefore, (12) is equivalent to

max θ L   s.t. 0 ≤ θ ≤ π/2   (13)

Substituting (7) into (5) gives the expression of the path loss L as a function of the receiver's tilted angle θ (Equation (14)). Taking the first and second derivatives of L with respect to θ (Equations (15) and (16)), and noting that the channel fading L is nonnegative, the second derivative of L with respect to θ is less than or equal to 0, indicating that the objective function L is concave in θ. In other words, there is a value of θ that maximizes L and thus maximizes the capacity. Since the objective in (13) is concave and the constraint set is convex, (13) is a convex optimization problem. Setting the first derivative of L with respect to θ to zero yields the optimal tilted angle θ 0 at which the capacity reaches its maximum (Equation (17)), where d is the distance between the receiver and the projection point of the light source on the X-Y circular plane. Combining Figure 2 and (17), it is easy to see that the pointing error angle β is equal to 0° when the system capacity reaches the maximum; that is, the optimal tilted angle θ 0 corresponds to the case where V n and the incident light are fully aligned.
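This alignment result can be checked numerically. The sketch below sweeps the tilted angle for a receiver at the boundary of the quarter-circle plane used in Section 5 (plane 19.25 m below the source, 12.6 m from the axis) and recovers an optimum of about 33.2°, consistent with the maximum optimal tilted angle reported there.

import numpy as np

# Grid search over the tilted angle for a receiver at the edge of the quarter-circle
# plane (19.25 m below the source, radius 12.6 m).  In the coplanar case the optimum
# coincides with arctan(r/z), i.e. the tilt that aligns the receiver normal with the
# incident ray (beta = 0).
r, z = 12.6, 19.25
thetas = np.radians(np.linspace(0.0, 90.0, 9001))
cos_beta = np.cos(np.arctan2(r, z) - thetas)
theta_opt = np.degrees(thetas[np.argmax(cos_beta)])
print(f"grid-search optimum: {theta_opt:.1f} deg; arctan(r/z) = {np.degrees(np.arctan2(r, z)):.1f} deg")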
Numerical Simulations and Analyses
In this section, we investigate how the distance d between the light source and receiver and the receiver's tilted angle θ affect the capacity bounds. We verify that the pointing error can be eliminated by tilting the receiver plane, hence improving the capacity of UWOC. The simulation parameters are listed in Table 1. Figure 3 depicts the capacity bounds of UWOC against the tilted angle θ when the distance d between the light source and receiver is set to 19.25 m, 20.5 m, 21.75 m, and 23 m. As shown in Figure 3, the curves of the capacity bounds follow similar trends with respect to the tilted angle θ. In particular, the capacity bounds are monotonically decreasing functions at the distance of 19.25 m, while the other capacity bounds first increase and then decrease with increasing tilted angle. For a given distance, there exists an optimal tilted angle for each curve. For example, at the distance of 21.75 m, when the tilted angle is set to 25 degrees, the maximum of the lower bound on capacity is achieved. As a whole, as the distance increases, the optimal tilted angle becomes larger.
We also compared the capacity bounds at different distances for a given tilted angle. When the tilted angle is small (below 40 degrees), the capacity bounds decrease dramatically as the distance increases. However, as the tilted angle becomes large, the capacity bounds decrease only slightly as the distance increases. As a whole, when the tilted angle is small, distance is the major factor influencing the capacity bounds; when the tilted angle increases, the tilted angle rather than distance becomes the major factor.
We also notice that at a short distance, the capacity bounds vary more evidently than at a long distance. This can be explained as follows: when the distance is short, tilting the receiver plane slightly can almost make up for the pointing error loss; thus, the optimal capacity can be achieved. With the increase in distance, the corresponding optimal tilted angle gradually increases, which means that when the receiver is farther away from the light source, the receiver plane needs to be deflected at a larger angle to overcome the adverse effect of the pointing error. Figure 4 shows the curved surface distribution of the UWOC channel capacity's lower bound over the horizontal circular surface shown in Figure 1 under the two situations of untilted and optimally tilted receiver planes. In the simulation, the point light source O is assumed to be at the origin of the optical axis Z. The distance between the light source and the receiver plane is set to 19.25 m. The receiver moves on a quarter circle plane with a radius of 12.6 m. As can be seen from Figure 4a,b, irrespective of whether the receiver plane is tilted, the capacities at the boundaries of the circles are the worst, and the optimal capacity is reached when the receiver is located directly below the light source. This can be explained by the fact that as the distance increases, the path loss gradually increases, which degrades the capacity performance.
In addition, the capacities are the same on the circumference of any circle in the X-Y plane centered on the optical axis Z. This is because the points on such a circle are the same distance from the light source. Comparing Figure 4b with Figure 4a, we find that after tilting the receiver plane by the optimal angle, the lower bound of capacity increases over the whole X-Y plane.
For example, when d is set to 23 m, the lower bound of capacity C Low is 0.593 bit/s when the receiver plane is not tilted, while C Low reaches 0.7479 bit/s after tilting with the optimal angle. This shows that by tilting the receiver plane, the receiver can overcome the adverse effect of the pointing error, further enhancing the capacity performance. Figure 5 shows the distribution of the optimal tilted angle when the receiver is located at different positions of the X-Y circular plane.
The light source is set at the origin of the optical axis. As can be seen from Figure 5, when the receiver is located directly below the source, it is obvious that the optimal tilted angle is 0°. When the receiver moves in the X-Y plane, the optimal tilted angle becomes gradually larger as the distance between the light source and receiver increases. When the receiver is positioned at the boundary, the optimal tilted angle reaches the maximum of 33.2°. This means that we need to tilt the receiver plane by a larger angle to overcome the adverse effect of the pointing error. In addition, similar to the results of Figure 4b, the optimal tilted angles are the same on the circumference of any circle in the X-Y plane.
Figure 4. (a) Lower bound of the capacity with respect to the coordinates of the receiver without tilting the receiver plane; (b) lower bound of the capacity with respect to the coordinates of the receiver with an optimally tilted receiver plane.
Conclusions
This paper focuses on the effect of a receiver's tilted angle on the channel capacity of a clear ocean water UWOC system. We propose a new double-exponential channel model with a pointing error angle, as exists in a practical environment, to optimize the capacity of the UWOC system. Then, we derive the closed-form expressions of the capacity bounds for the UWOC system. Based on the results, the optimal receiver's tilted angle to maximize the lower bound of the capacity is obtained. The simulation results suggest that, in practice, in order to achieve the optimal performance of a UWOC system, we can tilt the receiver plane at an optimal angle at a given communication distance or configure the optimized communication distance under a fixed tilted angle.
| 6,209 | 2020-12-04T00:00:00.000 | [
"Computer Science"
] |
A Coordinated DC Power Support Strategy for Multi-Infeed HVDC Systems
A DC power support strategy utilizes the flexibility of a High-voltage direct-current (HVDC) system in power modulation to optimize the operating point or compensate the power imbalance caused by a disturbance. The major impediment to the strategy is the difficulty in maintaining DC voltage values at converter stations during the process of DC power support. To overcome the difficulty, a coordinated DC power support strategy for multi-infeed HVDC systems is proposed in this paper. Synchronous condensers are employed to provide dynamic reactive power compensation in sustaining DC voltage values at converter stations. Models are built for the optimal leading phase operation and adjusting excitation voltage reference value of synchronous condensers. Multiple HVDC links are coordinated to participate by using the DC power support factor to rank and select the links. Optimal DC power support values of the participating HVDC links are obtained with a comprehensive stability margin index that accounts for transient stability of the sending-end systems and frequency security of the receiving-end systems. An optimal load shedding model is used to ensure the frequency security of receiving-end systems. Case study results of a provincial power system in China demonstrate the effectiveness and performance of the proposed DC power support strategy.
Introduction
High-voltage direct-current (HVDC) technologies have been extensively applied to long-distance large-scale power transfer worldwide [1].HVDC systems are of fast power modulation capability that can be applied to steady-state power flow control or emergency control after a disturbance.The term of DC power support is used to quantify the flexibility of an HVDC system in power modulation, which can optimize the operating point or compensate the power imbalance caused by a disturbance [2].The DC power support value an HVDC link provides is the difference between the steady-state value of DC power after modulation and the rated value before modulation, which is usually in the 10-50% range of the rated DC power.Works on designing and improving a DC power support strategy have been conducted in the field [2][3][4].
The major impediment to the performance of a DC power support strategy is the difficulty in maintaining DC voltage values at the rectifier and inverter stations of the HVDC system during the process of DC power support. A line commutated converter based HVDC (LCC-HVDC) system usually absorbs a certain quantity of reactive power under steady-state operating conditions, which increases proportionally with the power transferred on the HVDC links. Therefore, a significant drop in DC voltage values can be observed if the extra reactive power requirement cannot be satisfied, which will discount the effect of DC power support. To address the issue, an indirect matrix converter-based topology is proposed in [5], which enhances the input reactive power capability of the HVDC links. The ratio between the reactive power consumption and active power transmission of an HVDC link is derived in [6]. Synchronous condensers (SCs) are much more robust to transient overload and low voltage conditions compared with other dynamic reactive power compensation devices [7][8][9]. They can provide the extra quantity of reactive power required during the process of DC power support, add rotating inertia, and enhance system short-circuit strength.
Design and control technologies of SCs have advanced significantly.Excitation control system is the core of SCs, which is responsible for the adjustment of their working modes and operating points.The advantage of leading phase operation for synchronous generators is discussed in [10,11], which can be readily extended to SCs.The impact of the excitation control system on the dynamic performance of an HVDC system is analyzed in [12,13].Excitation voltage reference values of SCs installed close to the converter stations of HVDC links are adjusted to increase the reactive power output, which is an effective measure for maintaining DC voltage values at the converter stations [14].Nevertheless, inappropriate adjustment of excitation voltage reference values will deteriorate the performance.
Security and stability of the HVDC system should be carefully considered when implementing the DC power support strategy.For each HVDC link, the sending-end system is more prone to the transient stability issue while frequency security is the major concern of the receiving-end system.Conventionally, transient stability analysis methods can be categorized into two major classes, which are time-domain simulation-based method [15] and direct method [16].The former method lacks the ability to quantify the stability margin while the latter one is not adaptable to any model conditions.The extended equal area criterion (EEAC)-based method can overcome the disadvantages of the above two methods to some degree [17,18].An assessment approach for frequency security is proposed, which can account for the cumulative effect and is suitable for quantifying frequency security of the system [19,20].
The DC power support strategy for a multi-infeed HVDC system should rely upon the coordination of various HVDC links.In [21], comprehensive support factors are defined and used to arrange HVDC links participating in DC power support following a disturbance, but the solutions are not optimal.A DC power support strategy is proposed in [22], which is verified to be effective for the large-scale power system by simulation results.However, frequency security of the receiving-end system is neglected in the strategy.A DC power support framework is presented in [23], which comprehensively accounts for the impact of activating time, power changing rate and load characteristics on DC power support.Nevertheless, it lacks coordination among various HVDC links.
A coordinated DC power support strategy for multi-infeed HVDC systems is proposed in this paper.SCs are employed to provide dynamic reactive power compensation in sustaining DC voltage values at the converter stations of the HVDC system during the process of DC power support.Models are built for appropriately adjusting the working modes and operating points of SCs.Multiple HVDC links are coordinated to participate in DC power support through ranking and selecting the links in accordance with the DC power support factor.A comprehensive stability margin index is defined and used to obtain the optimal DC power support values of the participating HVDC links, which accounts for transient stability of the sending-end systems and frequency security of the receiving-end systems.An optimal load shedding model is used to ensure the frequency security of the receiving-end systems when necessary.The main contributions of this paper are summarized as follows: (1) Steady-state operating points of SCs are adjusted through the leading phase operation to increase the control margin during the process of DC power support.
(2) Multiple HVDC links are ranked and selected to participate in the DC power support strategy, for which the feasible range of DC power support values are guaranteed by properly adjusting excitation voltage reference values of SCs.(3) A comprehensive stability index is proposed to quantify the impacts of the DC power support on the sending-end and receiving-end systems, which accounts simultaneously for the transient stability and frequency security issues.
The remainder of this paper is organized as follows.Section 2 introduces the main framework of the proposed DC power support strategy.Section 3 builds models for the optimal leading phase operation and adjusting excitation voltage reference value of SCs, respectively.Section 4 defines the comprehensive stability margin index accounting for transient stability and frequency security and then proposes the optimal load shedding model.Section 5 presents the flowchart and detailed steps of the proposed DC power support strategy.Section 6 conducts case studies.Conclusions are drawn in Section 7.
Main Operating Principles of the Proposed DC Power Support Strategy
SCs play a vital role in the proposed DC power support strategy.Steady-state operating points of SCs are adjusted to increase the control margin during the process of DC power support.It is through the leading phase operation, i.e., SCs absorbing a proper quantity of reactive power, to achieve the goal.The terminal voltage values of SCs are to be moderately decreased in the leading phase operation mode, which will keep relatively low values when the DC power support is actuated.Therefore, the maximum values of reactive power outputs of SCs can be decreased with the control margin increased during the process of DC power support.Various HVDC links are to be selected to participate in the DC power support strategy.The HVDC links with better performance in mitigating the impact of the disturbance are of high priority to be selected.The DC power support factor is defined and used to rank the candidate links, which concurrently accounts for the electrical distance to the disturbance and the support provided by the AC system.The feasible range of DC power support values for each participating HVDC link will be determined and ensured by properly adjusting excitation voltage reference values of SCs.
The optimal DC power support values for the participating HVDC links are to be obtained by evaluating the impacts of the DC power support provided by various HVDC links to the power system.The sending-end and receiving-end systems for each HVDC link are of distinct characteristics in responding to the impact of the DC power support.A comprehensive stability index is formed to quantify the impacts of the DC power support, which accounts simultaneously for the transient stability and frequency security issues.A quantity of load shedding in the receiving-end system will be required if the DC power support provided by the participating HVDC links cannot totally compensate for the power imbalance.The degree of power imbalance can be measured by the frequency security margin, which is used to build an optimal load shedding model.
To provide a complete picture of the DC power support strategy, a multi-infeed HVDC system is exemplified as shown in Figure 1.The system comprises one AC tie-line and four HVDC links that connect the receiving-end system with two sending-end systems.Assume that HVDC Link 1 is blocking, which causes large power loss for the receiving-end system.To compensate the power loss, the other three HVDC links are coordinated to participate in DC power support by increasing the DC power transferred on them.SCs installed close to the rectifier and inverter stations are appropriately adjusted to maintain DC voltage values at the rectifier and Inverter Buses 1-4 during the process of DC power support.To ensure the system stability, the DC power support values for the three participating HVDC links are obtained by carefully assessing the impacts to the system.The impacts are quantified by the comprehensive stability index that accounts simultaneously for the transient stability of the two sending-end systems and frequency security of the receiving-end system.If the DC power support provided by the three participating HVDC links cannot totally compensate the power loss, the minimum quantity of load shedding in the receiving-end system will be determined.
Optimal Leading Phase Operation for SCs
Leading phase operation for SCs can decrease the maximum reactive power output values of SCs during DC power support.However, voltages of AC buses at converter stations and DC power support values will also decrease.Therefore, it is vital to obtain the relationships between variations of reactive power values absorbed by SCs, voltage variations of AC buses at converter stations and variations of DC power support values.
Quasi-steady-state model of an HVDC link can be expressed as [24] where P dr is DC power sending from rectifier stations; V dr and V di are DC voltages of rectifier and inverter stations, respectively; I d is DC current; R dc is resistance of HVDC transmission line; α is firing angle of rectifier station; γ is extinction angle of inverter station; V dor and V doi are open-circuit DC voltages of rectifier and inverter stations, respectively; X cr and X ci are commutation reactance of rectifier and inverter stations, respectively; and n r and n i are ratios of transformers at rectifier and inverter stations, respectively.
DC power value P k dr0 at a given DC power support value in percentage for the kth HVDC link without leading phase operation for SCs can be obtained by time-domain simulations [25].Without loss of generality, constant current control and constant extinction angle control are assumed to be applied to rectifier and inverter stations of the kth (k = 1, 2, . . ., N dc ) HVDC link.Therefore, the steady-state values of DC current I k d1 and extinction angle γ k i1 after modulation are approximately equal to the set values I k d0 and γ k i0 .Then, the variation of DC power support value (∆P k dr0 = P k dr1 − P k dr0 ) can be expressed as where n k i0 represents the ratio of transformer at inverter station of the kth HVDC link and is invariant within a short time.
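For illustration, the quasi-steady-state relations can be evaluated directly under the control modes assumed above (constant current at the rectifier, constant extinction angle at the inverter). The sketch below uses the standard six-pulse-bridge expressions, which may differ in detail from the formulation of [24], and all parameter values are illustrative placeholders rather than data of the study system.

import numpy as np

# Quasi-steady-state operating point of an LCC-HVDC link under constant-current
# control at the rectifier and constant-extinction-angle control at the inverter.
# Standard six-pulse-bridge relations; every numerical value is a placeholder.
def hvdc_operating_point(V_dor, V_doi, I_d, gamma, X_cr, X_ci, R_dc):
    V_di = V_doi * np.cos(gamma) - 3.0 / np.pi * X_ci * I_d       # inverter DC voltage
    V_dr = V_di + R_dc * I_d                                      # rectifier DC voltage
    alpha = np.arccos((V_dr + 3.0 / np.pi * X_cr * I_d) / V_dor)  # required firing angle
    P_dr = V_dr * I_d                                             # DC power from the rectifier
    return V_dr, V_di, np.degrees(alpha), P_dr

V_dr, V_di, alpha_deg, P_dr = hvdc_operating_point(
    V_dor=560e3, V_doi=545e3, I_d=2.0e3, gamma=np.radians(17.0),
    X_cr=12.0, X_ci=12.0, R_dc=5.0)
print(f"V_dr = {V_dr / 1e3:.1f} kV, V_di = {V_di / 1e3:.1f} kV, "
      f"alpha = {alpha_deg:.1f} deg, P_dr = {P_dr / 1e6:.0f} MW")

Repeating such an evaluation before and after a change of the set values gives the variation of the DC power support value discussed in the text.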
Since the relationship between ∆V k i0 (the voltage variation of AC bus at the inverter station) and ∆Q k i0 (reactive power absorbed by SCs installed close to the inverter station) of the kth HVDC link is approximately linear, ∆P k dr0 can be further approximated as where a k i represents the sensitivity coefficient between ∆V k i0 and ∆Q k i0 .According to Equation (3), the relationship between ∆P k dr0 and ∆Q k i0 and the relationship between ∆P k dr0 and ∆V k i0 are approximately linear in a small range, which can be formulated by using the sensitivity information as where b k Qi and b k Vi represent the variation of DC power support value towards the unit reactive power variation absorbed by SCs and the unit voltage variation of AC bus at the inverter station of the kth HVDC link, respectively.
The linear relationship between ∆P k dr0 and ∆Q k r0 (∆V k r0 ) can be formulated in a similar way as expressed in Equation ( 3) depicting the relationship between ∆P k dr0 and ∆Q k i0 (∆V k i0 ).∆Q k r0 and ∆V k r0 represent the variation of reactive power values absorbed by SCs installed close to the rectifier station and voltage variation of AC bus of the kth HVDC link, respectively.The linear relationship is viable assuming the variation of firing angle caused by the leading phase operation for SCs is not significant.With SCs absorbing an amount of reactive power, there is a decrease of DC power support values for each HVDC link.It is necessary to limit the negative variation of DC power support values for each HVDC link during the leading phase operation of SCs.A factor of q k sc is defined as follows, which reflects the impact of the leading phase operation for SCs on the variation of DC power support values for the kth HVDC link, The factor accounts simultaneously for the impacts of SCs installed close to the rectifier station and inverter station.The lower is the value of q k sc , the lesser is the negative impact of the leading phase operation of SCs on the variation of DC power support values.Then, [∆Q k r0 , ∆Q k i0 ] for the kth HVDC link corresponding to the minimum q k sc should be obtained.However, the reactive power values absorbed by SCs solved according to q k sc are lack of coordination and difference among variations of DC power support values may be large.To address this issue, an optimal leading phase operation (OLPO) model is proposed, which attempts to limit the variations of DC power support values for all HVDC links in a relatively balanced way.
The optimization model is formulated in Equation (6); its components are described below.
• The objective function in Equation (6) minimizes the differences in the variations of DC power support values among all HVDC links. The variation of the DC power support value for the kth HVDC link can be expressed as ∆Q^k_i/∆Q^k_i0 × ∆P^k_dr0 when the reactive power absorbed by the SCs installed close to the inverter station is ∆Q^k_i, by using the linear relationship between ∆P^k_dr0 and ∆Q^k_i0. The difference in the variations of DC power support values between the kth and mth HVDC links is formulated accordingly. The inequality constraints in Equation (6) ensure that the voltages (and their variations) of buses and the reactive power absorbed by SCs in both the sending-end and receiving-end systems are within their limits.
• The equation ∆Q_r = w∆Q_i relates the variations of reactive power absorbed by SCs in the sending-end and receiving-end systems. The kth element of w is set equal to the ratio of ∆Q^k_r0 to ∆Q^k_i0 corresponding to the minimum q^k_sc.
• The remaining equations describe the linear relationships between the voltage variations of buses and the variations of reactive power absorbed by SCs in the sending-end and receiving-end systems, where the sensitivity matrices S_r and S_i are presented in Appendix A.1.
• Detailed explanations of the symbols in Equation (6) are presented in Appendix A.2.
The reactive power values absorbed by SCs installed close to the converter stations of all HVDC links can be obtained by solving the OLPO model in Equation (6).
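Since Equation (6) is not reproduced in this excerpt, the following hypothetical cvxpy sketch only mirrors the bullet descriptions above: it balances the variations of DC power support values across links under linear voltage/reactive-power constraints. All numeric values, the placeholder sensitivity matrices and the assumed total-absorption requirement are illustrative and not taken from the paper.

```python
# Hypothetical OLPO-style model with cvxpy; names and data are placeholders.
import cvxpy as cp
import numpy as np

N = 3                                          # number of HVDC links (assumed)
dP_dr0 = np.array([-120.0, -150.0, -90.0])     # DC power variation at reference absorption (MW), assumed
dQ_i0  = np.array([2.0, 2.5, 1.8])             # reference reactive power absorbed at inverter side (p.u.), assumed
w      = np.array([1.2, 1.0, 0.9])             # ratio dQ_r/dQ_i from the minimum q_sc search, assumed
S_i    = np.full((5, N), 0.005)                # bus-voltage sensitivity to inverter-side dQ (placeholder)
S_r    = np.full((4, N), 0.004)                # bus-voltage sensitivity to rectifier-side dQ (placeholder)
dV_i_max, dV_r_max = 0.05, 0.05                # voltage-variation limits (p.u.), assumed
dQ_i_max = np.array([3.0, 3.5, 2.5])           # reactive power limits (p.u.), assumed
Q_req = 4.0                                    # assumed total reactive power the SCs must absorb (p.u.)

dQ_i = cp.Variable(N, nonneg=True)
# Variation of the DC power support value of each link under the linearised relationship
dP = cp.multiply(dP_dr0 / dQ_i0, dQ_i)
# Objective: minimise the largest pairwise difference among the dP variations
pairwise = [cp.abs(dP[k] - dP[m]) for k in range(N) for m in range(k + 1, N)]
objective = cp.Minimize(cp.max(cp.hstack(pairwise)))
constraints = [
    dQ_i <= dQ_i_max,
    cp.sum(dQ_i) >= Q_req,                               # assumed absorption requirement
    cp.abs(S_i @ dQ_i) <= dV_i_max,                      # inverter-side voltage variations within limits
    cp.abs(S_r @ cp.multiply(w, dQ_i)) <= dV_r_max,      # rectifier-side, with dQ_r = w * dQ_i
]
cp.Problem(objective, constraints).solve()
print("dQ_i =", dQ_i.value, "dQ_r =", w * dQ_i.value)
```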
Adjustment of Excitation Voltage Reference Values for SCs
Inappropriate adjustments of the excitation voltage reference values (V_ref) may decrease the DC power support values. Therefore, an adjustment of V_ref (AVREF) model is proposed to coordinate the adjustments of V_ref for SCs installed close to the converter stations of the HVDC links participating in DC power support.
To formulate the AVREF model, sensitivity information is used to represent the linear relationships between the voltage variations of buses at the converter stations and the variations of DC power support values described in Section 3.1. Since the interactions among HVDC links are complicated, the effect of AVREF on DC power support is significant. Assuming that N^p_dc HVDC links participate in DC power support, the AVREF model can be expressed as Equation (7), where m and n denote the mth and nth HVDC links, respectively; ∆V^n_ei is the adjustment percentage of V_ref for SCs installed close to the inverter station of the nth HVDC link; and c_mn represents the variation of the DC power support value ∆P^m_dr0 of the mth HVDC link when ∆V^n_ei is 1% of the steady-state value of V_ref for SCs installed close to the inverter station of the nth HVDC link.
According to the AVREF model defined in Equation (7), the estimated adjustment vector ∆V_ei_es of V_ref for SCs installed close to the inverter stations of the N^p_dc HVDC links can be expressed as Equation (8), where P_S is the target DC power vector of the N^p_dc HVDC links during DC power support; P_dr0 is the DC power vector without adjustments of V_ref for SCs during DC power support; ∆P_dr1 is the difference vector between P_S and P_dr0; c is the sensitivity matrix of dimension N^p_dc × N^p_dc; and c_mn is the element in the mth row and nth column of c.
The SCs installed close to the inverter stations must be coordinated with the SCs installed close to the rectifier stations during the adjustment. The voltages V^k_i and V^k_r of the AC buses at the inverter and rectifier stations of the kth HVDC link should remain sufficiently high to ensure that the DC power of the link can always reach the preset value. This is achieved by coordinately adjusting V_ref of the SCs installed close to the inverter and rectifier stations so that V^k_i and V^k_r are maintained simultaneously. Since ∆V_ei_es solved from Equation (8) is only an estimated adjustment vector of V_ref for the SCs, fine adjustments of ∆V_ei_es are needed. For instance, the search range of the adjustment vector of V_ref for SCs installed close to the rectifier stations, ∆V_er, is set to [T_r1 × ∆V_ei_es, T_r2 × ∆V_ei_es], and the search range of the adjustment vector of V_ref for SCs installed close to the inverter stations, ∆V_ei, is set to [T_i1 × ∆V_ei_es, T_i2 × ∆V_ei_es]. The search step vector is set to T × ∆V_ei_es. The adjustment percentages of V_ref for the SCs installed close to the rectifier and inverter stations of all HVDC links participating in DC power support can then be obtained by time-domain simulations.
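A minimal numerical sketch of the AVREF estimate and of the fine-adjustment search ranges is given below. The sensitivity matrix c, the power vectors and the scaling factors T_r1, T_r2, T_i1, T_i2 and T are illustrative placeholders, not values from the paper.

```python
# Hypothetical sketch of the AVREF estimate (Equation (8)-style) and search ranges.
import numpy as np

c = np.array([[400.0,  60.0],
              [ 50.0, 500.0]])          # MW change per 1% V_ref adjustment (assumed values)
P_S   = np.array([6000.0, 13000.0])     # target DC power during support (MW), assumed
P_dr0 = np.array([5500.0, 12000.0])     # DC power without V_ref adjustment (MW), assumed

# Estimated adjustment percentages of V_ref at the inverter side: c @ dV = (P_S - P_dr0)
dV_ei_es = np.linalg.solve(c, P_S - P_dr0)

# Fine adjustment: search ranges around the estimate for rectifier- and inverter-side SCs
T_r1, T_r2, T_i1, T_i2, T = 0.5, 1.5, 0.8, 1.2, 0.1     # assumed scaling factors
rng_er = np.stack([T_r1 * dV_ei_es, T_r2 * dV_ei_es])   # [lower, upper] for rectifier side
rng_ei = np.stack([T_i1 * dV_ei_es, T_i2 * dV_ei_es])   # [lower, upper] for inverter side
step   = T * dV_ei_es                                    # search step per link

print("estimated dV_ei (%):", dV_ei_es)
print("inverter-side search range (%):", rng_ei)
```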
Selection of HVDC Links Participating in DC Power Support
To select the HVDC links participating in DC power support, two principles are proposed:
1. Give priority to HVDC links that have a smaller electrical distance to the fault AC tie-lines or HVDC links.
2. Give priority to HVDC links that have stronger support from the AC systems.
Indices that capture the above two principles are used to quantify and rank the HVDC links, and certain HVDC links are then selected to participate in DC power support according to the resulting rank order.
CIGRE Working Group B4 proposed the multi-infeed interaction factor (MIIF) to measure the interaction of voltage variations of AC buses among converter stations due to variations of reactive power and to reflect the degree of coupling of multiple HVDC links. It can be defined as in [26], where ∆V_i is the voltage variation of the AC bus at converter station i, equal to 1% of the steady-state value, and ∆V_j is the resulting voltage variation of the AC bus at converter station j.
To account for the interactions among HVDC links and AC tie-lines, the extended AC/DC interaction factor (ADIF) is applied, which reflects the degree of coupling among voltage variations of AC buses at converter stations or substations. ADIF can be defined analogously, where ∆V_i is the voltage variation of the AC bus at converter station i, equal to 1% of the steady-state value, and ∆V_j is the resulting voltage variation of the AC bus at converter station or substation j. The short circuit ratio is generally used to evaluate the interactions between AC and DC systems. The multi-infeed effective short circuit ratio (MESCR) was proposed to reflect the impact of multiple HVDC links on AC systems [27]. The MESCR of the ith HVDC link is defined in Equation (11), where S_i is the three-phase short-circuit capacity of the AC bus at converter station i; Q_fi is the reactive power supported by filters or capacitors at converter station i; and P_di and P_dj are the DC power of the ith and jth HVDC links, respectively.
Since the ADIF index is an extended modification of MIIF, the DC power support factor can be expressed as the product of ADIF and MESCR, where K_ji represents the support effectiveness of the ith HVDC link for the jth fault AC tie-line or HVDC link. The larger K_ji is, the better the support effectiveness. The DC power support factors of the non-fault HVDC links in the sending-end and receiving-end systems are denoted K_r and K_i, respectively. The elements of K_r and K_i are used to rank the non-fault HVDC links: the larger the value of an element, the higher the rank of the corresponding HVDC link. For the elements of K_r, if the ith non-fault HVDC link and the jth fault HVDC link or AC tie-line belong to different sending-end systems that are asynchronously interconnected, K_rji is set to 0. K_r has higher priority than K_i in the ranking; whenever some elements of K_r have the same value, K_i is employed to further rank the corresponding HVDC links.
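The following Python sketch illustrates the ranking of non-fault links by the factor K = ADIF × MESCR. The MESCR expression follows the common multi-infeed definition (S − Q_f)/(P_d + Σ_j MIIF_ji · P_dj), which is an assumption since Equation (11) is not reproduced here, and all numeric values are illustrative.

```python
# Hypothetical ranking of non-fault HVDC links by K = ADIF * MESCR; data are placeholders.
import numpy as np

# ADIF[j, i]: voltage variation (%) at bus j when bus i is perturbed by 1% (from simulations)
ADIF = np.array([[1.00, 0.43, 0.41],
                 [0.22, 1.00, 0.00],
                 [0.18, 0.00, 1.00]])
S   = np.array([25000.0, 60000.0, 55000.0])   # three-phase short-circuit capacity (MVA), assumed
Q_f = np.array([ 2000.0,  5000.0,  5000.0])   # filter/capacitor reactive support (Mvar), assumed
P_d = np.array([ 4000.0, 10000.0, 10000.0])   # rated DC power of the links (MW)

def mescr(i):
    # Assumed multi-infeed effective short circuit ratio of link i
    coupling = sum(ADIF[j, i] * P_d[j] for j in range(len(P_d)) if j != i)
    return (S[i] - Q_f[i]) / (P_d[i] + coupling)

def support_factor(i, j_fault):
    """K_ji: support effectiveness of link i for fault element j."""
    return ADIF[j_fault, i] * mescr(i)

fault = 1                                        # e.g., HVDC Link 2 is bipolar blocked
K = {i: support_factor(i, fault) for i in range(3) if i != fault}
ranked = sorted(K, key=K.get, reverse=True)      # larger K -> higher priority
print("rank of non-fault links:", ranked, K)
```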
The non-fault HVDC links are selected to participate in DC power support and to compensate the active power loss of the fault HVDC link or AC tie-line. If the sum of the maximum DC power support that can be provided by all the non-fault HVDC links is larger than the active power loss, the links whose cumulative DC power support approaches the active power loss are selected in accordance with the rank order. Otherwise, if the sum is smaller than the active power loss, all the non-fault HVDC links are selected.
Transient Stability of Sending-End Systems
According to EEAC theory, the generators in a sending-end system can be divided into two subsets after a fault: the cluster of critical machines (Cluster S) and the cluster of remaining machines (Cluster A). These two subsets can be transformed into two equivalent machines, and the equivalent two-machine system can be further transformed into a one-machine-infinite-bus (OMIB) system. The dynamic equation of the OMIB system is expressed as [17]

M d²δ/dt² = P_m − P_e, (13)

where M, δ, P_m and P_e are the inertia coefficient, rotor angle, mechanical power and electromagnetic power of the equivalent machine, respectively. An evaluation index (EI) set proposed in [28] for seriously disturbed generators is adopted in this paper, where t_c is the fault clearing time; ∆P_Gi(t) (∆ω_i(t)) is the difference between the real-time active power output (rotor speed) of generator i at time t and the steady-state active power output (rotor speed) of generator i after clearing the fault; EI_ij represents the transient potential energy of generators i and j; and N_G is the number of generators in the sending-end system.
The severity of the disturbance to generator i (i = 1, 2, ..., N_G) can be reflected by the sum of the transient potential energies between generator i and the other generators, which is defined as D_i. The larger D_i is, the more seriously generator i is disturbed. By evaluating D_i over a short time window after clearing the fault, e.g., 0.2 s, Cluster A and Cluster S can be obtained with the K-means algorithm [29].
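The sketch below illustrates this clustering step. The pairwise transient potential energies EI_ij are assumed to be precomputed (their exact expression from [28] is not reproduced here), and the choice of labelling the cluster with the larger mean severity as Cluster S is an assumption.

```python
# Hypothetical sketch: split generators into the critical cluster S and remaining cluster A
# from pairwise transient potential energies EI_ij over a short post-fault window.
import numpy as np
from sklearn.cluster import KMeans

EI = np.array([[0.0, 0.8, 0.7, 0.1],
               [0.8, 0.0, 0.6, 0.1],
               [0.7, 0.6, 0.0, 0.2],
               [0.1, 0.1, 0.2, 0.0]])    # EI_ij for 4 generators (illustrative values)

# Severity of disturbance of generator i: sum of its transient potential energies
D = EI.sum(axis=1)

# Two clusters; the cluster with the larger mean severity is taken as the critical cluster S
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(D.reshape(-1, 1))
means = [D[labels == c].mean() for c in (0, 1)]
critical = int(np.argmax(means))
cluster_S = np.where(labels == critical)[0]
cluster_A = np.where(labels != critical)[0]
print("Cluster S (critical machines):", cluster_S, "Cluster A:", cluster_A)
```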
Assuming that the mechanical power remains constant during the transient period, the transient stability margin index of a sending-end system can be defined as in Equation (16), where δ_DSP is the rotor angle of the dynamic saddle point; δ_0 is the steady-state rotor angle before the fault; and δ_tc is the rotor angle at the fault clearing instant. The denominator and numerator of Equation (16) represent the acceleration area and the difference between the deceleration area and the acceleration area of the equivalent machine, respectively. The larger the value of η_t, the larger the transient stability margin.
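A numerical illustration of this area-based margin is sketched below. The power-angle curves, angles and the sinusoidal form of P_e are placeholders; only the structure η_t = (deceleration area − acceleration area)/acceleration area follows the description of Equation (16).

```python
# Hypothetical sketch of the transient stability margin of the OMIB equivalent.
import numpy as np

P_m = 0.9                                   # mechanical power of the equivalent machine (p.u.), assumed
P_e_fault = lambda d: 0.4 * np.sin(d)       # electromagnetic power during the fault (assumed)
P_e_post  = lambda d: 1.6 * np.sin(d)       # electromagnetic power after fault clearing (assumed)

delta_0, delta_tc, delta_DSP = 0.6, 1.1, 2.3    # rad: pre-fault, clearing and dynamic saddle point angles

d1 = np.linspace(delta_0, delta_tc, 200)
A_acc = np.trapz(P_m - P_e_fault(d1), d1)       # acceleration area during the fault

d2 = np.linspace(delta_tc, delta_DSP, 200)
A_dec = np.trapz(P_e_post(d2) - P_m, d2)        # deceleration area up to the dynamic saddle point

eta_t = (A_dec - A_acc) / A_acc                 # larger eta_t -> larger transient stability margin
print("acceleration area:", A_acc, "deceleration area:", A_dec, "eta_t:", eta_t)
```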
Frequency Security of Receiving-End Systems
The frequency security index proposed in [19] has good monotonicity and low computational cost, and it can be used to quantify the frequency security margin of a receiving-end system.
The frequency security margin index is defined in terms of f_cr, the critical value of frequency; t_s and t_cr, the starting instant and duration of a time window, respectively; and f_N, the rated frequency. The binary table [f_cr, t_cr] contains the key parameters of the index. The larger the value of η_f, the larger the frequency security margin.
Comprehensive Stability Margin Index
A comprehensive stability margin index accounting for both the transient stability margin index and the frequency security margin index is proposed, where η_t and η_f are the transient stability and frequency security margin indices of the sending-end and receiving-end systems, respectively; h_t and h_f are positive weight coefficients of the transient stability and frequency security margin indices, respectively; there are N_t sending-end systems and N_f receiving-end systems; η^k_t and η^k_f are the transient stability and frequency security margin indices of the kth sending-end and receiving-end system, respectively; and z^k_t and z^k_f are the corresponding positive weight coefficients of the kth sending-end and receiving-end system, respectively.
The value of the comprehensive stability margin index η consists of two parts: the sum of the transient stability margin indices of the sending-end systems and the sum of the frequency security margin indices of the receiving-end systems. Once the search space of DC power support values is determined and the values of η are obtained, the optimal DC power support values of the HVDC links participating in DC power support, corresponding to the largest value of η, can be derived.
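A small sketch of this evaluation step is given below. Since Equation (18) is not reproduced here, the exact way the global weights (h_t, h_f) and per-system weights (z_t, z_f) combine is an assumption, and the per-combination margin values are illustrative.

```python
# Hypothetical sketch: pick the combination of DC power support values with the largest eta.
import numpy as np

def normalize_cols(x):
    # Min-max normalization of each column into [0, 1]
    x = np.asarray(x, dtype=float)
    span = x.max(axis=0) - x.min(axis=0)
    span[span == 0] = 1.0
    return (x - x.min(axis=0)) / span

# Margins for each candidate combination (rows = combinations), illustrative values
eta_t_send = np.array([[0.30, 0.25], [0.42, 0.31], [0.38, 0.29]])   # N_t = 2 sending-end systems
eta_f_recv = np.array([[0.10], [0.18], [0.22]])                     # N_f = 1 receiving-end system

h_t = h_f = 0.5
z_t = np.array([0.5, 0.5])
z_f = np.array([1.0])

eta = (h_t * (normalize_cols(eta_t_send) @ z_t)
       + h_f * (normalize_cols(eta_f_recv) @ z_f))
best = int(np.argmax(eta))
print("comprehensive margin per combination:", eta, "-> best combination index:", best)
```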
Optimal Load Shedding Model
There are scenarios in which DC power support cannot compensate the active power shortage in the receiving-end systems. In such scenarios, load shedding should be employed in the receiving-end systems, and an optimal load shedding model is used to address the issue with the lowest control cost.
The frequency-load sensitivity, which relates the load shedding value to the value of the frequency security margin index, is defined such that ∆P_LSi is the variation of the load shedding value in the ith load shedding area; ∆η_f is the variation of the frequency security margin index; and N_LS is the number of load shedding areas.
The optimal load shedding (OLS) model is formulated as in [30]. The objective function minimizes the sum of the load shedding values assigned to the load shedding areas; a sketch of the resulting linear program is given after the constraint list below.
• The first constraint requires the frequency security margin η_f to reach the requirement through load shedding, in which η_f0 is the initial value of η_f, A and P_LS are the N_LS-dimensional vectors of frequency-load sensitivities and load shedding values, and ε is the requirement for the frequency security margin.
• The second constraint sets the limits on the load shedding values, in which 0 and P_LSmax are the lower and upper limits, respectively.
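Under the constraint descriptions above, the OLS model is a linear program, which can be sketched as follows. The sensitivities, initial margin and limits are illustrative placeholders.

```python
# Hypothetical sketch of the OLS model: minimise total shed load such that the
# frequency security margin reaches the requirement.
import numpy as np
from scipy.optimize import linprog

A        = np.array([2.0e-4, 1.5e-4, 1.0e-4])   # frequency-load sensitivity per area (1/MW), assumed
eta_f0   = -2.0e-1                               # initial frequency security margin, assumed
eps      = 1.0e-3                                # required margin
P_LS_max = np.array([1500.0, 2500.0, 3000.0])    # upper limits of load shedding per area (MW), assumed

# minimise sum(P_LS)  s.t.  eta_f0 + A @ P_LS >= eps  and  0 <= P_LS <= P_LS_max
res = linprog(c=np.ones(3),
              A_ub=-A.reshape(1, -1), b_ub=np.array([eta_f0 - eps]),
              bounds=[(0.0, float(u)) for u in P_LS_max],
              method="highs")
print("load shedding per area (MW):", res.x, "total:", res.x.sum())
```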
Detailed Steps of DC Power Support Strategy
The main steps of the DC power support strategy are summarized as follows (Figure 2):
1. Solve the OLPO model and obtain the reactive power values ∆Q_r and ∆Q_i absorbed by SCs installed close to the rectifier and inverter stations of all HVDC links.
2. Select the HVDC links participating in DC power support by ranking the non-fault HVDC links according to the DC power support factors K_r and K_i in the sending-end and receiving-end systems.
3. Determine the search space of DC power support values for the HVDC links participating in DC power support, where the AVREF model is used to coordinately adjust V_ref for SCs installed close to the rectifier and inverter stations and to ensure that the DC power achieves the target values.
4. Obtain the optimal DC power support values corresponding to the largest value of the comprehensive stability margin index η.
5. If the initial value of the frequency security margin index η_f is smaller than the requirement ε, solve the OLS model and obtain the minimum sum of load shedding values P_L_sum such that η_f reaches ε.
Test Power System
A provincial power system in China is simulated. It is a typical receiving-end system with multiple HVDC links and AC transmission lines integrated into it, as shown in Figure 3. There are three HVDC links, of which Links 2 and 3 are in hierarchical connection mode, i.e., their inverter stations are connected to buses at two voltage levels. The rated power values of HVDC Links 1, 2 and 3 are 4000 MW, 10,000 MW and 10,000 MW, respectively. In addition, five double-circuit 500 kV and 1000 kV AC transmission lines from external power systems are connected. The reference value of the system apparent power is set to 100 MVA. In practice, the DC power support values of HVDC systems are usually set at discrete levels. For instance, the DC power support values of the Tian-Guang HVDC project in the China Southern Power Grid are set at five levels of 10%, 20%, 30%, 40% and 50% of the rated power [31]. The proposed strategy follows this practice by selecting a 10% step in the search space. The maximum value in percentage is generally limited to 50% for the reliability of the HVDC link. A quantity of reactive power must be absorbed for the secure operation of an HVDC link, usually 40-60% of the DC power value. Regular reactive power compensation devices equipped at the rectifier and inverter stations (e.g., capacitors) satisfy this requirement under normal operating states. Nevertheless, the provision of DC power support by an HVDC link poses extra reactive power compensation requirements.
To enhance the reactive power support for the HVDC links, numerous SCs have been or will be installed at the rectifier stations in the sending-end systems and at the inverter stations in the test power system (the receiving-end system). The number of SCs and their optimal locations should be planned by comprehensively accounting for technical and economic factors [32]. The rated apparent power of an SC is 500 MVA. There are 6, 10 and 9 SCs installed close to Rectifier Stations 1, 2 and 3, respectively; Inverter Station 1 has 6 SCs; and each of Inverter Stations 2-5 has 5 SCs. The maximum adjustment percentage of the excitation voltage reference value V_ref of each SC is set to 20%. Accounting for the reactive power support by the SCs, the maximum DC power support values in percentage for HVDC Links 1, 2 and 3 are 50%, 40% and 30%, respectively.
OLPO for SCs at Converter Station of Single HVDC Link
The leading phase operation of the SCs is simulated to verify its effectiveness in supplementing an HVDC link that provides DC power support. The DC power support value of HVDC Link 1 is set to 30% of the rated value, i.e., 1200 MW.
The reactive power value ∆Q^DC1_r0 absorbed by SCs installed close to Rectifier Station 1 is set to 3 p.u., and the search range of the reactive power value ∆Q^DC1_i0 absorbed by SCs installed close to Inverter Station 1 is set to 1.5-3 p.u. The resulting variations of the DC power support value ∆P^DC1_dr0 and the values of the influence factor q^DC1_sc are listed in Table 1. As highlighted in boldface in Table 1, the values of [∆Q^DC1_r0, ∆Q^DC1_i0] corresponding to the minimum value of q^DC1_sc are obtained; these are the reactive power values to be absorbed by the SCs installed close to the rectifier and inverter stations of HVDC Link 1. Note that although these variations are only a small fraction of the total installed reactive power of the SCs, their marginal effect is significant.
OLPO for SCs at Converter Stations of Three HVDC Links
The OLPO model is solved for the SCs installed close to the converter stations of the three HVDC links in the test system. The model is built and solved with MATLAB/YALMIP and Gurobi.
Selection of HVDC Links Participating in DC Power Support
The HVDC links participating in DC power support are selected by using the DC power support factors K_r for the three rectifier stations in the two sending-end systems and K_i for the five inverter stations in the receiving-end system. The values of K_r and K_i are listed in Tables A1 and A2, respectively.
To compute the values of K_r and K_i, the MIIF and MESCR indices are required. First, the values of the MIIF index are obtained according to Equation (9) by using the perturbation approach, where MIIF_rDC3-DC1, MIIF_rDC3-DC2, MIIF_rDC1-DC3 and MIIF_rDC2-DC3 are 0 due to the large electrical distances between Rectifier Stations 1 and 3 and between Rectifier Stations 2 and 3. Then, the values of the MESCR index are calculated according to Equation (11).
For any disturbance scenario, the HVDC links participating in DC power support can then be determined. For instance, one disturbance scenario is the bipolar blocking of HVDC Link 2, termed Scenario 1. Under Scenario 1, the values of K_rDC2-DC1 and K_rDC2-DC3 in the sending-end systems are 0.2199 and 0, respectively; therefore, the rank order of the non-fault HVDC links is HVDC Link 1 followed by Link 3. For Scenario 2, the bipolar blocking of HVDC Link 3, the rank order of the non-fault HVDC links cannot be determined from K_r in the sending-end systems, since K_rDC3-DC1 and K_rDC3-DC2 are both 0; the values of K_i in the receiving-end system are used instead. The value of K_iDC3-DC1 is 0.8368 (=0.4288 + 0.4080) and K_iDC3-DC2 is 1.6512 (=0.2894 + 0.3567 + 0.3750 + 0.6301). Thus, the rank order of the non-fault HVDC links is HVDC Link 2 followed by Link 1. Since the maximum sum values of DC power support for Scenarios 1 and 2 are 5000 MW and 6000 MW, respectively, both smaller than the 10,000 MW power loss of the two scenarios, all the non-fault HVDC links are selected to participate in DC power support for both scenarios. To reduce the computational burden for the test system, the search space of DC power support values is expressed as combinations of DC power support values in percentage for the selected HVDC links, accounting for the levels and limits of DC power support. The combinations of DC power support values in percentage for Scenarios 1 and 2 are listed in Table A3. The combinations that prevent the test system from losing stability are highlighted in Table A3 in boldface. The criteria for the system losing stability are: differences among generator rotor angles larger than 500, bus voltages below 0.7 p.u. for more than one second, or frequency below 45 Hz for more than one second.
The AVREF model is used to coordinately adjust V_ref for SCs installed close to the rectifier and inverter stations and to ensure that the DC power achieves the target values. For example, under Scenario 1, the combination [50, 30] (%) of DC power support values should be achieved by adjusting V_ref for the SCs, in which the target DC power support values for HVDC Links 1 and 3 are 2000 (=4000 × 50%) MW and 3000 (=10,000 × 30%) MW, respectively. As shown in Figure 5a, with V_ref adjusted, the DC power support value for HVDC Link 1 does achieve 2000 MW (the DC power is increased from 4000 MW to 6000 MW), whereas it can only approach 1500 MW without adjusting V_ref.
As shown in Figure 5b, the DC power support value for HVDC Link 3 achieves 3000 MW when V_ref is adjusted, and it can only approach 2000 MW without adjusting V_ref.
Optimal Combination of DC Power Support Values
The optimal combination of DC power support values for any disturbance scenario is obtained by employing the comprehensive stability margin index η. First, the combinations that prevent the system from losing stability are extracted. Then, the values of the transient stability margin index η_t and the frequency security margin index η_f are computed for each of these combinations. To derive the comprehensive stability margin index η, a normalization process is applied so that the values of η_t and η_f fall into the range between 0 and 1. The weight coefficients in Equation (18), i.e., h_t, h_f, z^1_t, z^2_t and z_f, are set to 0.5, and the binary tables for Scenarios 1 and 2 are [49.8 Hz, 3 s] and [49.8 Hz, 5 s], respectively. As shown in Table 2, the value of η corresponding to each combination of DC power support values is the sum of the normalized values of η_t and η_f.
The optimal combination of DC power support values for each scenario is the one corresponding to the largest value of the comprehensive stability margin index η, as highlighted in Table 2 in boldface. The optimal combination is [50, 30] (%) for Scenario 1 and [10, 30] (%) for Scenario 2. For Scenario 1, the DC power support values provided by HVDC Links 1 and 3 are 2000 MW and 3000 MW, respectively, which effectively compensate the power loss caused by the bipolar blocking of HVDC Link 2. The DC power support provided by HVDC Links 1 and 2 also contributes greatly to balancing the power loss in Scenario 2.
The transient stability margin index η_t is a sound indicator for quantifying the impact of DC power support on the sending-end systems. For Scenario 1, increasing the DC power support from HVDC Link 1 is beneficial to transient stability, since HVDC Link 1 is in the same sending-end system as HVDC Link 2 and can mitigate the negative effect caused by the bipolar blocking of HVDC Link 2. As shown in Table 2, the value of η_t for the combination [50, 30] (%) is larger than that for [40, 30] (%).
The maximum load shedding value in each area is set to 15% of the load level in the area, and ε is set to 1 × 10^−3. Load shedding is actuated 0.2 s after HVDC Link 2 or 3 is bipolar blocked.
The optimal solutions for the two scenarios are 6415 MW and 6038 MW, respectively. Dynamic simulations are conducted to verify the efficacy of the load shedding step. As shown in Figure 6, the frequency security of the test system is enhanced dramatically by load shedding, and the frequency stays within the security range of [49.8 Hz, 50.2 Hz] for the two scenarios. In contrast, the frequency security of the test system is significantly degraded when the load shedding step is not actuated. The results demonstrate the supplementary effect of load shedding on DC power support.
Conclusions
A coordinated DC power support strategy for multi-infeed HVDC systems is proposed in this paper. The DC power of the HVDC links can be effectively modulated to achieve the set values while ensuring power system stability. The optimal leading phase operation and the adjustment of the excitation voltage reference values of the SCs are solved by the proposed OLPO and AVREF models, which provide the required dynamic reactive power. The HVDC links participating in DC power support are ranked and selected according to the DC power support factor. The comprehensive stability margin index is used to obtain the optimal DC power support values of the participating links. Optimal load shedding can supplement DC power support to ensure the frequency security of the receiving-end system. Simulation results of a provincial power system in China demonstrate the effectiveness and performance of the proposed DC power support strategy.
Figure 2. Flowchart of the DC power support strategy.
Figure 3. Topological structure diagram of the provincial power system.
Figure 4. Effectiveness of the leading phase operation for SCs installed close to the converter stations of HVDC Link 1.
Figure 5. Effectiveness of adjusting V_ref during DC power support. (a) DC power curves of HVDC Link 1 for Scenario 1; and (b) DC power curves of HVDC Link 3 for Scenario 1.
Figure 6. Frequency curves for Scenarios 1 and 2.
Table 1. Reactive power values absorbed by SCs and variations of DC power support values for HVDC Link 1.
The reactive power values for the SCs can be decreased: 1.4 p.u. (decreased from 4.5 p.u.) at the rectifier station and 8.9 p.u. (decreased from 10.6 p.u.) at the inverter station. The control margins of the SCs installed close to the rectifier and inverter stations are increased by 12% and 9%, respectively.
Table A1. K_r for rectifier stations of non-fault HVDC links in the sending-end systems.
Table A2. K_i for inverter stations of non-fault HVDC links in the receiving-end systems.
Table A3. Combinations of DC power support values when HVDC Link 2 or 3 is bipolar blocked.
A Deep Learning Method for Lightweight and Cross-Device IoT Botnet Detection †
Ensuring the security of Internet of Things (IoT) devices in the face of threats and attacks is a primary concern. IoT plays an increasingly key role in cyber-physical systems. Many existing intrusion detection system (IDS) proposals for the IoT leverage complex machine learning architectures, which often provide one separate model per device or per attack. These solutions are not suited to the scale and dynamism of modern IoT networks. This paper proposes a novel IoT-driven cross-device method, which allows learning a single IDS model instead of many separate models atop the traffic of different IoT devices. A semi-supervised approach is adopted due to its wider applicability to unanticipated attacks. The solution is based on an all-in-one deep autoencoder, which consists of training a single deep neural network with the normal traffic from different IoT devices. Extensive experimentation performed with a widely used benchmarking dataset indicates that the all-in-one approach achieves 0.9994–0.9997 recall, 0.9999–1.0 precision, 0.0–0.0071 false positive rate and 0.9996–0.9998 F1 score, depending on the device. The results obtained demonstrate the validity of the proposal, which represents a lightweight and device-independent solution with considerable advantages in terms of transferability and adaptability.
Introduction
The Internet of Things (IoT) is intertwined with many critical assets of our daily lives, and it plays an increasingly key role in cyber-physical systems (CPSs). In fact, CPSs are becoming more and more advanced, integrating industrial IoT, Edge and Cloud computing, although their protection mainly relies on physical protection and isolation, which makes security an open topic [1]. Assuring the security of IoT devices in the face of threats and attacks is a primary concern. To this aim, intrusion detection systems (IDS) are a key component of IoT security, as they support online detection of and response to incidents. The body of scientific literature on IDS for IoT is huge and ever-increasing. IDS for IoT is often addressed through machine learning and (deep) neural networks, e.g., [2,3]. This trend is pushed by: (i) the availability of commercial and open-source products to transform raw network packets into ready-to-use records suited for machine learning, (ii) the large number of public IoT datasets, such as MedBIoT (https://cs.taltech.ee/research/data/medbiot/, accessed on 3 January 2023), N-BaIoT (http://archive.ics.uci.edu/ml/datasets/detection_of_IoT_botnet_attacks_N_BaIoT, accessed on 3 January 2023) and IoTID20 (https://sites.google.com/view/iot-network-intrusion-dataset/home, accessed on 3 January 2023), just to mention a few, and (iii) specialized hardware and deep learning frameworks (e.g., Keras, TensorFlow and PyTorch). It is a fact that IoT network traffic, transformed into fixed-length records of features, can be successfully leveraged to recognize potential attacks, which is the primary aim of an IDS. As a result, a wide community of academics and practitioners has the chance to conduct measurement studies at the intersection of machine learning and intrusion detection for IoT.
In spite of the large availability of scientific proposals, there is a gap between "lab-made" machine learning and real-life operations. For example, highly complex deep networks proposed so far for intrusion detection, such as convolutional neural networks (CNN), long short-term memory (LSTM) networks and cascades/ensembles of autoencoders (AE), might find no or limited adoption in production IoT environments. Another point is that recent contributions in the area use the training data to learn a separate IDS model per IoT device [2] or per attack [3]. Different from several related proposals, the aim of this work is not the mere application of increasingly complex deep learning models for intrusion detection. Rather, the novelty of our proposal is the application of well-founded principles to a cross-device method, which allows us to learn a single IDS model, as opposed to many separate models, atop the traffic of different IoT devices. The method stems from the following principles. A "usable" IDS should opt for unsupervised and semi-supervised approaches over supervised ones: it is unlikely that attacks are known beforehand, so unsupervised and semi-supervised approaches are more widely applicable. As for IoT-specific constraints, intrusion detection should pursue simplicity over complexity (e.g., small-footprint neural networks) in order to assure low detection latency, portability and energy efficiency. More importantly, given the ever-growing number and the dynamicity and complexity of devices in an IoT network, IDS models should be scalable and maintainable: learning separate models per device clashes with the ever-increasing scale of current IoT networks.
This study instantiates the cross-device method in the context of the detection of IoT botnets by deep autoencoders (AE). The use of deep learning and autoencoders is motivated by several reasons. First, while understanding the "power of depth" in deep neural networks is an ongoing challenge in learning theory [4], deep networks perform better than traditional shallow neural networks in many practical applications (e.g., [5,6]). Furthermore, multiple AEs, possibly complemented by sophisticated feature selection methods, are often used in complex cascades/ensembles for IoT intrusion detection; as such, they are suitable to explore whether the complexity of related deep learning IDS proposals is actually justified. More importantly, AEs can be conveniently trained only by means of normal network traffic in order to learn a semi-supervised IDS model, i.e., one of the principles driving our proposal. The study is based on the widely used N-BaIoT dataset, which provides normal and attack traffic data collected with 9 IoT devices ranging from a thermostat to webcams and security cameras, arranged into separate datasets, 1 per device, in the form of records of 115 features. Our study is twofold. First, we conduct a fine-grain experiment that pursues a "conventional" approach by training a distinct AE per IoT device, i.e., separate autoencoding. Second, we train a single AE with the normal traffic of all the IoT devices, i.e., all-in-one autoencoding, which provides a single cross-device IDS model for botnet detection. Both separate and all-in-one models are tested by means of the typical metrics of recall (R), precision (P), false positive rate (FPR) and F1 score, computed through a test set of normal and attack traffic held out from training. These metrics, extensively described in Section 5, provide insights into the effectiveness of the classification: values of R, P and F1 score close to one and FPR close to zero indicate that the IDS is effective.
The results indicate that it is relatively easy to achieve impressive detection figures by separately training-testing an autoencoder on top of each individual device. In fact, a "minimal" AE with three hidden layers could be successfully applied, although after retraining from device to device, to seven out of nine devices with no changes to the implementation; moreover, the same AE could be extended to all the devices by means of a minor addition of neurons. Overall, separate autoencoding achieves 0.9995–1.0 recall, 0.9997–0.9999 precision, 0.0002–0.0417 FPR and 0.9997–0.9999 F1 score, depending on the device. Although remarkable, the separate approach entails maintaining one model per device, which poses major scalability and maintainability issues in large-scale IoT networks. As for the all-in-one autoencoding, we observe that an AE of 5 hidden layers is enough to obtain a single cross-device IDS model, which achieves 0.9994–0.9997 recall, 0.9999–1.0 precision, 0.0–0.0071 FPR and 0.9996–0.9998 F1 score, depending on the device. The results indicate that it is possible to train a single model with normal traffic collected from different devices, which is strongly beneficial to the FPR. The cross-device learning method paves the way for more scalable intrusion detection solutions in the context of the IoT. As for the Cloud-Edge-IoT paradigm, Edge nodes are suited to host our all-in-one IDS, which means IoT devices are not subjected to extra computational and energy burden. Moreover, the proposed approach does not interfere with IoT operations: as only passive tracing of network traffic, also known as sniffing, is required, the approach is inherently nonintrusive.
In a previous paper [7], we documented a preliminary experimentation of the "all-in-one" notion. The novelty of this study with respect to the previous paper is a better exploration of the design space of the autoencoder, a more comprehensive set of experiments, leading to refinements and improvements of the original results, and additional findings along different directions on the subject. For example, here we address a larger number of IoT devices and an improved data partitioning scheme that aims to preserve sequences of related records. The findings of this paper should be contextualized with respect to the attacks and data available in the N-BaIoT dataset. Moreover, it is worth noting that privacy issues due to the adoption of the cross-device method are not in the scope of this paper. Our long-term objective is to capitalize on federated learning [8] and to leverage the decentralized data concept to cope with privacy facets. The rest of the paper is organized as follows. Section 2 discusses related work in the area. Section 3 provides the background on deep autoencoders and our semi-supervised intrusion detection approach. Section 4 addresses the IoT devices and datasets and presents the data partitioning, training and implementation of the autoencoders. Section 5 presents the results of our study. Section 6 discusses the limitations and threats to validity of our study and how they have been mitigated, while Section 7 concludes the paper.
Related Work
This section presents related studies, surveying the state of the art of intrusion detection in IoT environments, with emphasis on the methods based on machine learning techniques.
Nowadays, the Internet of Things (IoT) has spawned a new ecosystem of connected devices, and an increasing number of organizations are using it to improve their performance, e.g., to operate more efficiently or to improve decision making. However, while the IoT has gained popularity, security challenges pose a significant barrier to the widespread adoption and deployment of these devices. The security vulnerabilities introduced by the complexity and interconnectivity of IoT devices and applications pave the way for the development of increasingly sophisticated anomaly detectors. Over the last few years, the use of machine learning to aid security and anomaly detection in IoT environments has become extremely important to face their security issues [9] and to subsequently develop appropriate lines of defense [10]. Al-Fuqaha et al. [11] surveyed some challenges and issues for the design and deployment of IoT applications.
Since intrusion detection in the IoT domain is increasingly addressed through machine learning and (deep) neural networks, many ready-to-use public intrusion detection IoT datasets have been produced to support the testing of new designs. Most of these datasets are collected in synthetic environments, across various IoT domains, under normative conditions and multiple intrusion scenarios. They attempt to emulate real network traffic, and they do not contain any confidential data. Popular public IoT intrusion detection datasets are MedBIoT [12], N-BaIoT [2] and IoTID20 [13].
IDS with IoT, Machine and Deep Learning
Lopez-Martin et al. [14] propose a novel network intrusion detection method specifically developed for IoT networks. The approach is based on a conditional variational autoencoder (CVAE) that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other methods based on a variational autoencoder, and it provides better classification results than other familiar classifiers. The authors of [15], instead, propose a network intrusion detection system design for the IoT based on a deep learning model comprising a customized feed-forward neural network. They tested the efficacy of the models for binary and multi-class classification, and the results show the efficacy of the proposed technique: the performance of the binary classifier was found to be close to 99.99%, while a detection accuracy of approximately 99.79% was achieved for multi-class classification.
Albulayhi et al. [16] proposed and implemented a novel extraction approach and feature selection (FS) for anomaly-based IDS in the IoT domain. The method starts by utilizing two entropy-based concepts (gain ratio (GR) and information gain (IG)) to extract and select appropriate characteristics in different ratios. A comparison of various deep learning algorithms, such as convolutional neural networks (CNN) and recurrent neural networks (RNNs) like long short-term memory (LSTM) and gated recurrent unit (GRU) networks, is instead proposed by Ahmad et al. [17] and used to find zero-day anomalies within an IoT network with a false alarm rate (FAR) ranging from 0.23% to 7.98%. The authors of [18] propose a semi-supervised learning method for detecting intrusions, which is very similar to our approach; however, their experimentation is not conducted in an IoT context. In particular, an autoencoder and a variational autoencoder are used to extract flow-based features from the network traffic data of the CICIDS2017 dataset.
The Fog computing domain, along with the elasticity of the cloud and auto-scaling techniques, is also an active research topic [19]. Almieani et al. [20] present a model that uses multi-layered recurrent neural networks designed to be deployed for Fog computing security, very close to the end-users and IoT devices. However, the authors show the validity of the proposal by using a balanced version of the NSL-KDD dataset, which is an obsolete dataset not specifically conceived for IoT applications. As highlighted in [21], this issue might also lead to the lack of transferability of the impressive results obtained on reference datasets (possibly outdated and not free from statistical biasing) to even slightly different data collection settings.
In the last few years, federated learning (FL) has gained importance in the field of cybersecurity, with several works already using this paradigm for IoT security. The works presented in [22,23], for example, are specifically conceived for industrial IoT devices, and they analyze application samples and sensor readings, respectively, rather than network data. In [8], FL was studied through the use case of intrusion detection systems. This work also includes blockchain technology to mitigate the problems faced in adversarial FL. However, it concentrates on the early steps of intrusion detection rather than on detecting already running malware, and it does not focus specifically on IoT devices.
Work on N-BaIoT Dataset
In [3], an IoT micro-security add-on is presented. The model comprises two key security mechanisms working cooperatively, namely: (i) a distributed convolutional neural network (DCNN) model for detecting phishing and application layer DDoS attacks, and (ii) a cloud-based temporal long short-term memory (LSTM) network model for detecting botnet attacks and ingesting CNN embeddings to detect distributed phishing attacks across multiple IoT devices. The N-BaIoT dataset is used for training the backend LSTM model.
The work closest to our proposal is Kitsune [24], an unsupervised learning approach to detect attacks online. Kitsune's core algorithm is KitNet, which uses a collection of autoencoder neural networks to distinguish between normative and abnormal traffic; the approach integrates multiple autoencoders into a classifier. The experimental results show that Kitsune is effective against different attacks, and its performance is as outstanding as that of offline detectors. Similarly, the authors of [2] propose a network-based anomaly detection method that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic emanating from compromised IoT devices. It is worth pointing out that the aforementioned methods, differently from our approach, create individual models per IoT device [2] or per attack [3], which are not suited to ever-evolving IoT environments and security threats. Moreover, the authors of [25] show that a single AE can obtain classification accuracy comparable to the ones published in the research literature for supervised networks and for more complex designs built around one or several AEs.
In [26], the authors propose an ML-based method for efficient botnet detection in IoT networks. The approach uses a hybrid model that pipelines a trained variational autoencoder (VAE) for meaningful latent feature extraction and a one-class classifier (OCC) to classify network traffic from the IoT devices. The experimental results on the N-BaIoT dataset show that the latent representations generated by the VAE help the OCC to perform better for botnet detection in terms of the AUC metric, with an acceptable detection time. Reference [27] proposes a federated approach that uses a deep autoencoder to detect botnet attacks using on-device decentralized traffic data. The proposed model, evaluated by means of the N-BaIoT dataset, differentiates benign patterns of behavior from malicious activities by means of decentralized on-device data at the edge layer.
In [28], the authors present a novel deep ensemble learning framework called DeL-IoT for IoT anomaly detection. They use deep and stacked autoencoders to extract features for stacking into an ensemble of probabilistic neural networks (PNNs), improving performance while addressing the data imbalance problem. The N-BaIoT dataset is used as a benchmark, and the approach can detect anomalies with a 0.99 detection rate on it. In [29], the authors propose nested Log-Poly, a communicationally efficient model for distributed density estimation in naive Bayes classification, which is also evaluated on the N-BaIoT dataset.
Al Shorman et al. present in [30] a botnet detection mechanism based on a one-class support vector machine (OCSVM). The approach incorporates an unsupervised evolutionary IoT botnet detection method using a grey wolf optimization (GWO) algorithm to optimize the hyperparameters of the OCSVM and simultaneously identify the features that best describe the IoT botnet problem. The system is tested using the N-BaIoT dataset and shows the best recall for the Samsung SNH 1011 N webcam device. Finally, Kan et al. [31] propose an adaptive particle swarm optimization convolutional neural network (APSO-CNN) to detect intrusions in IoT networks. The model achieves the best accuracy on the N-BaIoT dataset when compared with three popular algorithms, namely SVM, FNN and R-CNN.
Although deep learning approaches are extensively used for intrusion detection in IoT domains, as in many of the papers referenced above, we take a different perspective by considering well-founded principles and IoT-specific constraints in order to pave the way to a better IDS design. Primarily, our approach capitalizes on a cross-device method, which allows us to learn a single IDS model atop the traffic of different IoT devices. Existing approaches to intrusion detection in the IoT domain, such as [2,3], differ from our solution because they provide one separate model per device [2] or per attack [3]. Furthermore, some of these approaches capitalize on training sets made up of normal and attack data. For example, in [3], the authors train one model for each attack by merging the normal traffic with the malicious traffic representing the attack of interest; therefore, they train, validate and test the model ten times, since the reference dataset contains ten attack categories. We remark that our method leverages a semi-supervised learning approach and does not require anomalies during training. This makes it potentially effective for the detection of zero-day attacks, since the model is not constrained by specific attack data. One more key point is that we developed our approach around the intuition that such complexity is not justified, because a single autoencoder is enough to obtain similar (if not better) performance figures compared to existing proposals. Among the existing proposals, multiple autoencoders are often used in complex and mixed configurations (e.g., [24,26,28]). Our approach with a single AE ends up with a small-footprint neural network without any assistance from other external components, such as feature selection: this is a clear advantage in terms of simplicity of training and tuning.
Background on Deep Autoencoders
Our cross-device method is based on the use of deep autoencoders, i.e., a specific type of neural network where the input layer has the same length as the output layer. The middle, hidden, layer of an autoencoder is also known as the bottleneck layer, and its dimension is lower than that of the input/output layer. An autoencoder consists of two parts: an encoder and a decoder. Let x be a vector of n real numbers [x_1, x_2, ..., x_n], such as the records representing IoT traffic in the dataset used in the experiments. The encoder maps x to a code vector, or hidden representation, y at the bottleneck layer. The decoder, in turn, transforms y into a vector z = [z_1, z_2, ..., z_n] of n real numbers, i.e., of the same size as x. Figure 1 represents an autoencoder with three hidden layers. The encoding-decoding formulas are given in Equations (1) and (2), which represent the case of an autoencoder with only one hidden layer:

y = σ(Wx + b), (1)
z = σ'(W'y + b'), (2)

where W, W', b and b' are weight matrices and bias vectors, while σ and σ' are activation functions. An autoencoder compresses the input into a lower-dimensional representation at the bottleneck layer, and then it reconstructs the output from that representation. Deep learning can be applied to autoencoders: multiple hidden layers can be used to provide depth, and the resulting network is known as a deep or stacked autoencoder [32]. In autoencoder terminology, z is called the reconstruction of the input vector x. The "quality" of the reconstruction is summarized by the reconstruction error (RE), which measures the difference between the output z and the originating input x, where z_i and x_i (with 1 ≤ i ≤ n) denote the components of the output and input vectors, and n is the dimensionality.
Autoencoder-Based IDS
An autoencoder is trained by means of a given set of points, i.e., the typical training set of a machine learning experiment. Each point x of the training set is fed to the autoencoder, and the weight matrices and bias vectors are progressively adjusted in order to minimize the difference between x and its reconstruction z. After training, the autoencoder will reconstruct accurately, i.e., with low RE, future points "similar" to those used for training. Based on this principle, in order to build an IDS, we train the autoencoder solely by means of normal data points, i.e., records related to the network traffic generated by the IoT devices under regular operations, which means the autoencoder learns a latent subspace of normal data points [33]. After training, the autoencoder, embedding a model of the "normal profile", can identify any instance not conforming to the model as a potential intrusion.
The reconstruction error (RE) is a viable indicator to detect intrusions. Since the autoencoder is trained using only normal data points, it will generate (i) low RE (good reconstructions) for future normal inputs, and (ii) high RE (bad reconstructions) for intrusions. In fact, when the autoencoder attempts to process a data point, i.e., an IoT traffic record, that deviates from the norm, it will generate a high RE because it was never trained to reconstruct intrusions. The approach adopted in this paper falls within the larger scope of semi-supervised anomaly detection [34]. Moreover, as for any anomaly detection technique assigning a score to data points (the RE in this study), we need a cut-off detection threshold to discriminate normal points from intrusions. In particular, intrusion detection is based on the use of the threshold value: the data points producing RE values under the threshold are considered normal, and those with REs above the threshold are marked as intrusions. The value of the detection threshold is an outcome of the training phase; as such, it is inferred from normal data points as described in the following.
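The following Keras sketch illustrates the overall workflow; it is not the paper's exact architecture. The layer sizes, training data and 99th-percentile cut-off are placeholders, and the RE is computed here as the mean squared error, which is an assumption about its exact form.

```python
# Minimal sketch of a semi-supervised autoencoder IDS: train on normal traffic only,
# score records by reconstruction error and flag high-RE records as intrusions.
import numpy as np
from tensorflow.keras import layers, models

n_features = 115                       # N-BaIoT records have 115 features
ae = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(16, activation="relu"),   # bottleneck layer
    layers.Dense(64, activation="relu"),
    layers.Dense(n_features, activation="linear"),
])
ae.compile(optimizer="adam", loss="mse")

X_normal = np.random.rand(1000, n_features).astype("float32")   # placeholder normal traffic
ae.fit(X_normal, X_normal, epochs=10, batch_size=64, verbose=0)

def reconstruction_error(X):
    # Assumed RE: per-record mean squared error between input and reconstruction
    Z = ae.predict(X, verbose=0)
    return np.mean((X - Z) ** 2, axis=1)

threshold = np.quantile(reconstruction_error(X_normal), 0.99)   # placeholder cut-off
X_test = np.random.rand(10, n_features).astype("float32")
is_intrusion = reconstruction_error(X_test) > threshold
print(is_intrusion)
```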
Selection of the Detection Threshold
There are several practical challenges that undermine the selection of a suitable threshold in a semi-supervised training setting such as ours. An incorrectly selected threshold (either too low or too high) might cause misclassifications. It must be noted that assembling a "reliable" database of normal points for training purposes is a complex matter. For example, the normal points might be fraught with uncommon behaviors, or outliers might be accidentally included within the normal points. The labeling might be imperfect, i.e., intrusions might occasionally be labeled as normal behaviors. The rationale behind our method is to clear out as many "strange", although normal, training points as possible before computing the threshold: though belonging to the normal data, spurious "out-of-the-crowd" behaviors will be more similar to intrusions than to normal points.
The threshold is computed by relying on a small, i.e., 10%, disjoint subset of the training set, which we call the threshold set. A representation of the method is given in Figure 2, and it consists of four steps:
1. AE training: the autoencoder undergoes the typical semi-supervised training procedure described above, which allows it to learn the normal profile of the data points;
2. Outlier detection: an outlier detection algorithm is applied to the threshold set in order to discriminate inliers from outliers;
3. RE computation: inliers and outliers are fed to the autoencoder; this step produces two separate vectors of reconstruction errors, i.e., RE_IN and RE_OUT for inliers and outliers, respectively;
4. Threshold selection: the detection threshold is obtained through a sensitivity analysis performed with RE_IN and RE_OUT.
At first, the threshold is initialized with the maximum RE in RE_OUT; then, the threshold is progressively lowered until it reaches an "equilibrium" between inliers and outliers, i.e., between the inliers whose RE is below the threshold and the outliers whose RE is above the threshold. In this study, we use the isolation forest [35], although the threshold selection method does not mandate a specific outlier detection algorithm.
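A possible sketch of the four steps is shown below. It assumes a trained Keras autoencoder and a NumPy array holding the threshold set, uses scikit-learn's isolation forest with default parameters, and adopts one plausible reading of the "equilibrium" stopping rule, which the description above does not pin down in full detail.

```python
# Hedged sketch of the outlier-based threshold selection (Figure 2).
import numpy as np
from sklearn.ensemble import IsolationForest

def select_threshold(ae, threshold_set):
    # Step 2: outlier detection on the threshold set (inliers = +1, outliers = -1)
    labels = IsolationForest(random_state=0).fit_predict(threshold_set)
    # Step 3: reconstruction errors of inliers (RE_IN) and outliers (RE_OUT)
    re = np.mean((threshold_set - ae.predict(threshold_set)) ** 2, axis=1)
    re_in, re_out = re[labels == 1], re[labels == -1]
    # Step 4: start from max(RE_OUT) and lower the threshold until the inliers
    # above it are no longer outnumbered by the outliers below it (one possible
    # interpretation of the "equilibrium" between the two groups).
    chosen = re_out.max()
    for t in np.sort(np.concatenate([re_in, re_out]))[::-1]:
        if t > chosen:
            continue
        if (re_in > t).sum() >= (re_out < t).sum():
            break
        chosen = t
    return chosen
```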
Dataset, AE Design and Implementation
The validity of the cross-device method is assessed by a twofold experimentation. First, we pursue a "conventional" approach, which consists of training and testing the autoencoder-based IDS with the data of one IoT device at a time, which leads to one separate model per device. It is worth pointing out that this approach is used by several related papers presented above. Later, the autoencoder-based IDS is trained with a dataset consisting of the normal traffic of all the IoT devices in hand-and therefore all-in-one-which leads to a single model for all the devices. It is worth noting that the notion of an all-in-one autoencoder presumes the availability of training data pertaining to different devices. Consequently, training is not expected to happen "in place" on single IoT devices. Rather, the proposed approach is suited to the nodes traversed by the traffic generated by different network devices. A typical example is the Cloud-Edge-IoT paradigm: Edge nodes-at the boundary between two networks-implement common functions, such as routing, monitoring, and storage of data passing between networks. In the context of the IoT, Edge nodes encompass a broad range of devices and are eligible to host our all-in-one IDS. As a consequence of deploying the all-in-one IDS onto Edge nodes, IoT devices are not subjected to extra computational and energy burden; as said above, energy efficiency is among the principles that underlie the design of our proposal. Moreover, as only passive tracing of network traffic is required, the approach is inherently nonintrusive. In the following, we describe the datasets, tuning and training of both separate and all-in-one autoencoding.
Reference Dataset and Partitioning
The dataset considered in this paper is N-BaIoT [2]. It is a public botnet IoT dataset available at the University of California at Irvine (UCI) machine learning repository. The authors deployed all of the components of two botnets in an isolated lab and used them to infect nine commercial IoT devices, listed in Table 1. All the devices execute the attacks of the two botnets-namely Mirai and BASHLITE-that have previously infected them. For each device, the data were obtained under both normal operations and attack conditions. The dataset consists of fixed-length records of features computed from the lower-level network activity. In particular, each record is identified by 115 features with a systematic structure plus a class label (e.g., "normal" or "TCP attack"). The features model traffic statistics over several temporal windows. Each statistic is further temporally aggregated using a weighted sum that progressively decays the oldest contributions. It is worth pointing out that botnet infections consist of multiple steps, such as propagation, bot infection, communication with the command-and-control (C&C) server, and the execution of other types of malicious activities [36]. According to [2], it is not sufficient to detect only the early stages of infection. The data contained in the N-BaIoT dataset pertain only to the last stage of botnet construction, when IoT bots begin launching attacks. Interested readers are referred to [2] for any additional information on the N-BaIoT dataset.
Our experiments are based on all the devices described in the N-BaIoT dataset. Experiments are performed in a binary classification scenario. All the records produced within different types of attacks are considered as belonging to a unique generic class named ATTACK-encoded with the numeric label 1-whereas NORMAL records are assigned the label 0. It is worth remarking that the N-BaIoT dataset is organized into separate datasets, each containing both normal and attack traffic corresponding to a single IoT device. For our experiments, we split the original datasets into three disjoint splits: training set, validation set and test set. While splitting the dataset corresponding to an IoT device, we preserve the original sequence of the records because-as said above-the features of N-BaIoT are based on temporal windows and are aggregated using weighted sums. Moreover, each record of the original dataset is assigned to a unique split. For each IoT device, we obtain the following (a sketch of the order-preserving split is given after the list):
• Training set: it contains 70% of the total NORMAL records; moreover, 10% of the training set, according to the threshold selection criteria described in Section 3, is reserved as the "threshold set", meant for the threshold selection process;
• Validation set: it contains 15% of the total NORMAL records; as in any machine learning experiment, it is used to provide an unbiased evaluation of the model in order to find the optimal values of the hyper-parameters;
• Test set: it contains 15% of the total NORMAL records and all ATTACK records. The records in this set are accompanied by the corresponding labels, which are used to assess the correctness of the predictions.
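A minimal sketch of this order-preserving split is given below; it assumes that `normal` and `attack` are chronologically ordered lists of records for one device, and the choice of reserving the first 10% of the training records as the threshold set is an assumption of the sketch.

```python
# Sketch of the per-device 70/15/15 split described above (order preserved).
def split_device(normal, attack):
    n = len(normal)
    train = normal[: int(0.70 * n)]                      # 70% of NORMAL records
    cut = int(0.10 * len(train))
    threshold_set = train[:cut]                          # 10% of the training set
    ae_train = train[cut:]                               # remainder used to fit the AE
    validation = normal[int(0.70 * n): int(0.85 * n)]    # 15% of NORMAL records
    test = normal[int(0.85 * n):] + attack               # 15% of NORMAL + all ATTACK
    return ae_train, threshold_set, validation, test
```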
Table 2 shows the cardinality of the sets for the first group of experiments, i.e., separate autoencoding. Table 3 shows the cardinality of the training, validation and test sets for the second group of experiments, i.e., all-in-one autoencoding, where the single training/validation set-meant to produce one model-corresponds to the sum of the cardinalities of the training/validation sets in Table 2. It must be noted that the test sets are "held out" from training/validation, and they will be used in Section 5 for measuring the detection capabilities of the autoencoder-based IDSes.
AE Design
The design of a deep neural network, such as the autoencoder, is based on choosing the values of many hyperparameters-number of layers, neurons per layer, activation functions and so forth-that are subject to fine-grained tuning. Other relevant and critical parameters, such as the number of epochs, batch size and optimization algorithm, pertain to the learning process. For the time being, there is no scientific rule for optimizing the hyperparameters of an AE. In this work, the selection of the hyperparameters is driven by our previous experience and good practices in tuning deep autoencoders in a closely related IDS domain [25]. After having set a "reasonable" initial configuration of the autoencoder (e.g., number of neurons at the bottleneck much smaller than the number of features, use of the rectified linear unit activation function for the hidden layers, hyperbolic tangent activation function at the output layer and the RMSProp optimizer), additional fine tuning was performed through manual search. To this aim, the manual search was validated by experimental tests carried out by analyzing the outcome of the autoencoder-RE in our study-with respect to the abovementioned validation set, i.e., the independent set of data points purposely intended to support the selection of the hyperparameters. The use of a validation set ensures that the final values of the hyperparameters are not biased by the test sets; rather, the test sets contain "held-out" data points, i.e., not used at all for training and hyperparameter selection.
Separate Autoencoding
We found that the configuration reported in Table 4 guarantees an effective design, i.e., low RE on the validation set, for the separate AE setting. The selected AE is made up of three hidden layers and a different number of neurons depending on the configuration (A, Table 4a, or B, Table 4b). We ended up with two different configurations because, despite numerous fine-tuning operations, it was not possible to find a single working configuration for all IoT devices. In particular, the densely connected layers form an N-48-6-48-N structure for Configuration A and an N-64-6-64-N structure for Configuration B. In the rest of the paper, N (i.e., the number of neurons of the input/output layer of the autoencoder) is equal to 115, which is the dimensionality of the traffic records of the N-BaIoT dataset. The classical rectified linear unit (ReLU) has been selected for the encoding layers, the decoding layers and the bottleneck layer, while the hyperbolic tangent (Tanh) activation function has been used for the output layer. We train an AE on the NORMAL data points of each IoT device for 100 epochs with batch size 512 using the RMSProp optimizer with learning rate lr = 0.0001. For the second set of experiments, we tested different network designs before choosing a good configuration of the parameters. We found that adding width and depth to the AE is beneficial. In particular, Table 5 shows the design used for the all-in-one autoencoding experiment. The chosen AE is made up of five hidden layers. These layers are densely connected and form an N-64-24-6-24-64-N structure, where N is the number of features of the data points. Again, the rectified linear unit (ReLU) has been selected for the encoding layers, the decoding layers and the bottleneck layer, while the hyperbolic tangent (Tanh) activation function has been used for the output layer. We train this autoencoder on NORMAL data points for 100 epochs with batch size 2048 using the RMSProp optimizer with learning rate lr = 0.0001.
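The three layouts can be sketched in Keras as follows; the layer sizes and activations follow Tables 4 and 5, while the helper function and variable names are illustrative.

```python
# Sketch of the AE layouts: Configurations A and B (separate autoencoding)
# and the wider/deeper all-in-one design, for N = 115 input features.
import tensorflow as tf

def build_autoencoder(n_features=115, hidden=(48, 6, 48)):
    inputs = tf.keras.Input(shape=(n_features,))
    x = inputs
    for units in hidden:                              # ReLU on hidden/bottleneck layers
        x = tf.keras.layers.Dense(units, activation="relu")(x)
    outputs = tf.keras.layers.Dense(n_features, activation="tanh")(x)  # Tanh output
    return tf.keras.Model(inputs, outputs)

config_a = build_autoencoder(hidden=(48, 6, 48))            # N-48-6-48-N
config_b = build_autoencoder(hidden=(64, 6, 64))            # N-64-6-64-N
all_in_one = build_autoencoder(hidden=(64, 24, 6, 24, 64))  # N-64-24-6-24-64-N
```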
Implementation and Training
We implement the autoencoders in Python with the Keras (https://keras.io/, accessed on 3 January 2023) (Version 2.6.0) and TensorFlow (https://www.tensorflow.org/, accessed on 3 January 2023) (Version 2.6.0) libraries on a Lambda workstation equipped with an AMD Threadripper 3975WX processor with 32 cores. During the training phase, the weights and biases of the encoder and decoder are calculated and optimized with respect to the NORMAL training data. When training starts, the AE neurons are randomly initialized, and input data are presented in batches (512 for the separate approach and 2048 for the all-in-one approach) over a given number of epochs (100 for both approaches). The system tries to minimize the loss, setting aside a small ratio of reserved data to validate the optimization actions performed-modifications of the weights in the network-so as to signal overfitting. The loss is computed as the mean squared error at the output units; this matches the definition of reconstruction error (RE) presented above. As outlined in Section 3, during training the autoencoder learns the relationships among the features in the training set; it is worth pointing out that the training phase of an AE takes around 1 minute in the worst case. Our semi-supervised training process (no anomalies at training time) is potentially valuable to complement current technologies that rely on pre-established specifications of anomalies.
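A sketch of this training step is shown below; it assumes NORMAL-only arrays `x_train` and `x_val` already scaled to a range compatible with the Tanh output layer (a preprocessing detail not spelled out above), and uses the mean squared error loss, which matches the RE definition.

```python
# Sketch of the semi-supervised training of one autoencoder (NORMAL data only).
import tensorflow as tf

def train_autoencoder(ae, x_train, x_val, batch_size=512):
    ae.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4), loss="mse")
    ae.fit(x_train, x_train,                  # the input is its own target
           validation_data=(x_val, x_val),    # reserved data used to signal overfitting
           epochs=100, batch_size=batch_size)
    return ae
```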
Results
We run the test sets of the devices in hand against the AEs described in Section 4. In order to evaluate our proposal, we focus on the following two points: (i) the performance of the separate AEs, individually trained with the normal traffic of each IoT device, and (ii) the performance of the all-in-one AE (i.e., adopting all-in-one autoencoding) trained once with the normal traffic of all devices and then applied to the test sets of the devices. Since the test sets are labeled, each RE produced by the AEs in both configurations can be linked to the label of the corresponding data point, which can be used for evaluation. It is worth pointing out that the AEs saw no attack at training time.
The detection performance is measured by computing the typical metrics of recall (R), precision (P), false positive rate (FPR) and F1 score:
R = TP / (TP + FN), P = TP / (TP + FP), FPR = FP / (FP + TN), F1 = 2 · P · R / (P + R)
where true positives (TP) and true negatives (TN) represent the points that are correctly classified, while false positives (FP) and false negatives (FN) indicate misclassifications. For example, TP is the set of ATTACK points whose RE is higher than the threshold; similarly, TN is the set of NORMAL points whose RE is lower than the threshold. In particular, recall is the ratio of the number of true positives to the sum of the numbers of true positives and false negatives, precision is the ratio of the number of true positives to the sum of the numbers of true positives and false positives, while the false positive rate is the ratio of the number of false positives to the total number of normal data points in the test set. Finally, the F1 score is the harmonic mean of precision and recall.
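For reference, the four metrics can be computed directly from the confusion counts, as in the short sketch below.

```python
# Recall, precision, false positive rate and F1 score from the confusion counts.
def detection_metrics(tp, tn, fp, fn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    fpr = fp / (fp + tn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return recall, precision, fpr, f1
```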
Separate Autoencoding
We process the test set of a given device after an AE has been trained anew for that specific device. Table 6 provides the evaluation metrics for all the IoT devices. The results show that the use of individual AEs leads to high detection figures. The recall values range from 0.9995 (Samsung SNH 1011 N webcam) to 1.0 (Danmini doorbell). The minimum false positive rate (0.0002) is obtained for the Provision PT-737E security camera. Overall, the results are outstanding for all the IoT devices, with recall and precision close to 1.0 and a reasonable false positive rate. Moreover, the F1 score-the harmonic mean of precision and recall-is computed as a key metric, since precision and recall are both relevant indicators of model performance. From the analysis of the values in Table 6, it is possible to note that the F1 score is within 0.9997-0.9999, i.e., close to 1.0. This result indicates a good trade-off between precision and recall. Table 6 also reports the network configuration used for each IoT device. In particular, as highlighted in Section 4, Configuration A refers to an N-48-6-48-N network design, while Configuration B refers to an N-64-6-64-N network design. We emphasize that the use of multiple AE designs is almost mandatory, since it was not trivial to find a "one fits all" configuration for all the IoT devices. As previously explained, the selection of the two "best" configurations-A and B-is guided by experimental tests carried out by analyzing the RE figures obtained by multiple AE models.
Finding: Unlike similar proposals in the area, which rely on complex cascades and ensembles of autoencoders-possibly complemented by feature selection methods-if not on other schemes such as CNNs and LSTMs, a "minimal" and simple autoencoder with three hidden layers is more than enough to obtain remarkable results when training and testing are performed separately for each device.
Overall, the results obtained are notable. However, the hypothesis of training an AE for each device remains unrealistic, especially if the intrusion detection system is intended to be deployed in time-dependent and rapidly evolving IoT environments.
All-in-One Autoencoding
The use of an all-in-one detector-applied to the devices assessed with no change to its architecture (layers, number of units and weights)-is more suited to large-scale and dynamic IoT environments. We evaluated the performance of the all-in-one AE, trained with the merged normal traffic of all IoT devices. Table 7 provides the evaluation metrics. It turns out that the detection performance of the all-in-one model with five hidden layers is in line with the figures obtained by the separate autoencoding solution. The best results in terms of recall (R = 0.9997) are obtained with the test sets of the Danmini doorbell, Ecobee thermostat, Philips B120N/10 baby monitor, Provision PT-838, SimpleHome XCS7-1002-WHT and SimpleHome XCS7-1003-WHT security cameras; the FPR is reasonably low-exactly 0.0 for the Provision PT-838 and Provision PT-737E security cameras-for most devices. The F1 score, instead, is 0.9996 for the Ennio doorbell, 0.9997 for the Samsung SNH 1011 N webcam, and 0.9998 for all the other devices. This confirms that we reached an acceptable trade-off between precision and recall. For the sake of completeness, we also show in Table 8 the evaluation metrics for one "smaller" configuration, the aforementioned Configuration A, which was mostly outstanding in the separate autoencoding experiment; however, it performs worse in the all-in-one test. For example, the recall drops to 0.3549 for the Samsung SNH 1011 N webcam and to 0.3506 for the Ennio doorbell. This indicates that, with respect to the problem addressed and the data in hand, deepening and widening the autoencoder can improve intrusion detection in the all-in-one setting. Regarding the effects of the deeper and wider configuration on processing times, we found that for the N-64-24-6-24-64-N design the detection latency per record is reasonable (about 1 microsecond per record).
Finding:
The cross-device training method achieves remarkable detection figures when compared to the separate autoencoders. Although the best all-in-one AE is a wider and deeper network than the separate ones, its training time and detection latency per record are acceptable.
Our results show that it is possible to train a single model with normal traffic collected from different devices. While the findings of this paper should be contextualized with respect to the attacks and devices of N-BaIoT, we believe there is room to conceive more scalable and centralized intrusion detection solutions in the context of IoT, based on the notion of all-in-one models. An AE model trained "once and for all" represents a lightweight and device-independent solution. This is a considerable advantage in terms of transferability and adaptability.
Discussion of the Results
In the following, we complement the results presented above with a visual analysis and a discussion of the operation of the two considered approaches.
Figure 3 shows the reconstruction error (RE) obtained on the test sets of the Danmini doorbell and the Provision PT-838 security camera by considering the separate autoencoding approach (Figure 3a,c) and the all-in-one autoencoding approach (Figure 3b,d). We limit the visual analysis to these two devices because all the remaining cases lead to similar findings. The REs of the normal and attack points of the test sets refer to the separate and all-in-one experiments, respectively. Each data point is marked according to its class-normal or attack-for better visualization; the x-axis is the id of the point in the test set and the y-axis is the corresponding RE. A semi-logarithmic scale (x-axis in linear scale and y-axis in log scale) is used to better visualize the REs. The horizontal dashed line shows the anomaly threshold (a by-product of the training phase). In general, according to Figure 3, it can be noted that both autoencoding approaches-separate and all-in-one-return a low RE for almost all the normal points, which are thus below the detection threshold; on the other hand, most of the attack points lead to a high RE and are well above the threshold. Overall, Figure 3 is useful to gain a visual understanding of the true negatives (NORMAL points below the threshold), false positives (NORMAL points above the threshold), false negatives (ATTACK points below the threshold) and true positives (ATTACK points above the threshold) of both autoencoding approaches.
It is easy to link the visual results to the AE performance figures by analyzing the recall and FPR values of the Danmini and Provision PT-838 devices reported in Table 6 (separate autoencoding) and Table 7 (all-in-one autoencoding). In particular, for the Danmini doorbell, Figure 3a (separate autoencoding) shows a higher number of NORMAL points above the threshold-false positives-compared with Figure 3b (all-in-one autoencoding); in fact, the FPR values for this device are 0.0108 (separate autoencoding) and 0.0004 (all-in-one autoencoding). On the other hand, Figure 3a (separate autoencoding) shows no attack points under the threshold-false negatives-compared with Figure 3b (all-in-one autoencoding). Again, this visual finding is confirmed by the performance figures. As a matter of fact, the recall values for this device are 1.0 (separate autoencoding) and 0.9997 (all-in-one autoencoding). The same considerations can be extended to the Provision PT-838 security camera (Figure 3c,d). It is worth pointing out that this trend-lower FPR at the expense of lower recall for the all-in-one approach-is preserved on almost all devices (as shown in Tables 6 and 7).
Overall, we believe that the all-in-one approach has remarkable strengths. Although the recall values obtained with this approach are slightly lower than those of the separate approach, they are surely acceptable (in the range 0.9994 to 0.9997). Moreover, the FPR is always close-or even equal-to 0; on the other hand, the FPR values obtained with the separate autoencoding are worse, ranging from 0.0002 to 0.0417. In real-life environments, false positives can be cumbersome for network administrators to deal with. A high number of false positives leads to lost confidence in the alerts and to lower defense levels. Therefore, in addition to the advantages related to scalability and portability (a single network topology for all the devices), the all-in-one approach might also be beneficial in mitigating the false positive problem.
Comparison with the Related Proposals Using N-BaIoT
It is worth noting that N-BaIoT is a widely used dataset in the IoT-based anomaly detection field. Consequently, we can compare the metrics obtained in our study with other similar proposals in the literature. The authors in [2,3,24], as mentioned in Section 2, assess different state-of-the-art anomaly detection methods on the same N-BaIoT dataset used in our paper. However, different from our study, which shows the performance metrics per IoT device, those authors report the performance figures per attack. For example, in [3], the authors obtain a recall of 0.99 for the gafgyt combo attack and of 1.0 for the mirai_ack attack. Again, the authors of the detector proposed in [26] show only the ROC curves-and AUC values-as performance figures; therefore, they cannot be directly compared with our results. As for their use of the autoencoder, this type of neural network is leveraged to perform automatic feature extraction with the aim of reducing the dimensions of the data being processed. Therefore, differently from our study, the autoencoder is simply a building block of a more complex detection architecture. As for the federated approach proposed in [27], the recall values are below 0.75 under the 50-epoch range; the score jumps up to 0.99 from 60 epochs onward and remains at that high level while using the federated learning approach, for all experiments. These results are in line with our recall values. The performance obtained in DeL-IoT [28], instead, is comparable to that obtained in our study. However, the authors use the autoencoder to perform feature extraction and not anomaly detection (as intended in our study). As for the approach described in [30] and mentioned in Section 2, tested using the N-BaIoT dataset, the best recall (0.999) is found for the Samsung SNH 1011 N webcam; we obtained a recall equal to 0.9994 for this device. Similar values of recall are preserved for all the IoT devices in our study, different from the results shown in [30]. Moreover, according to the figures reported in [30], the best FPR (0.009) is obtained for the Ecobee thermostat. For the other devices, the FPR ranges from 0.016 to 0.098. The FPR obtained with our approach, instead, ranges from 0.0 to 0.0071.
The results obtained with our cross-device method are in line with-if not better than-those obtained by other anomaly detection techniques assessed on the N-BaIoT dataset. Besides the inherent benefits of deep learning, the improvement over existing methods is explained by the interplay of two factors, which are the novel contributions of this work. The former is the cross-device notion: learning a single model on top of the normal traffic of different IoT devices allows for the recognition of a greater number of nominal behavioral patterns and related variants. This makes our approach valuable in terms of adaptability and transferability across devices, and it leads to better performance. The latter is the proposed outlier-based threshold selection method. In this respect, there exist several "fixed" criteria, such as the mean, median, mean plus standard deviation and n-th percentile, that can be found in the related literature to determine a threshold from normal data. For example, the authors in [2] consider the mean and the standard deviation to select the threshold. It should be noted that a fixed-criterion approach may not be the best fit for all datasets in hand. In order to overcome this limitation, we proposed a data-driven method that can adapt to potential outliers. Our method improves the results over traditional techniques because it mitigates the inherent imperfections of the training data and the occasional inconsistency of autoencoders in providing good reconstructions even for outliers, which is well known in the literature [37].
Limitations and Threats to Validity
The evaluation of an IDS is a complex subject, and it depends on many factors including-but not limited to-the underlying network devices, topology and speed, the nature of the attacks and the quality of the legitimate traffic used to infer the "normal profile" of the devices. For example, the normal traffic might be fraught with spurious, out-of-the-crowd behaviors and imperfect labels. The proposed outlier-based threshold selection method mitigates the effect of such imperfections of the normative traffic. As with much related work in the area, the findings of this paper must be contextualized with respect to the attacks and data of the adopted dataset. In this respect, many public security datasets have been proposed over the years; some of them have gained significant popularity and become widely consolidated benchmarks for intrusion detection algorithms and tools, such as N-BaIoT. We are aware that a "lab-made" dataset, such as N-BaIoT and many others, might be a simplification of real-life production networks; however, adopting a reference benchmark makes it possible to compare the results obtained with those of the related proposals (as discussed in Section 2). As for the application of the autoencoder and outlier-based method to a further dataset, we used the 2021 version of the CICIDS2017 dataset (https://downloads.distrinet-research.be/WTMC2021/tools_datasets.html, accessed on 3 January 2023) in a previous paper that investigates the issues of training effective IDS models by a single autoencoder [25]. Besides the use of a synthetic dataset, another simplification is in the format of the data addressed by our study. N-BaIoT is based on a network made of different IoT devices, which range from a thermostat to doorbells and security cameras-possibly from different vendors-and network equipment (e.g., WiFi access point, switch and router); the heterogeneity of the IoT devices in real-life network environments is even greater than in a synthetic dataset. The model proposed in this paper can detect threats in a network made of different devices under the assumption that all the traffic is transformed into a standard data format consisting of fixed-length records irrespective of the device. Nowadays, there exist many products for capturing network traffic and generating fixed-length records suited for machine and deep learning purposes. The availability of a standard data format simplifies the use of typical IDS approaches applied to common networks. As a final remark, the proposed model is not meant to cover the entire life cycle of an infection; rather, our work focuses on the last stage of botnet detection, which pertains to IoT bots launching the attacks. As for any data-driven approach, such as those involving deep learning methods, there may be concerns regarding the validity and generalizability of the results. We discuss them based on the four aspects of validity listed in [38].
Construct validity. The study builds on the intuition that it is possible to learn a cross-device IDS model solely on top of normative training traffic records related to several IoT devices. Our study, based on deep autoencoders, has the potential to drive scalable and maintainable IDS solutions that can cope with the ever-growing complexity of IoT networks. This construct has been investigated with normal and attack traffic from a benchmark widely accepted in the related IoT literature. Experiments are based on the ubiquitous Keras-TensorFlow deep learning framework. Overall, the study is supported by well-founded theory and practice, and by the typical evaluation metrics of recall, precision, false positive rate and F1 score for comparative purposes.
Internal validity. Our study implements several countermeasures aiming to mitigate internal validity threats. For example, we made sure to test our autoencoder-based IDS by means of held-out data, i.e., data not used at all for training and hyperparameter optimization. The reference dataset used to conduct the experiments contains attacks that follow BASHLITE and Mirai botnet infections. Overall, the attacks are well-established in the literature and have been used to validate many existing IDS proposals. The traffic is based on several devices; more importantly, the training of both separate and all-in-one autoencoders-being semi-supervised-is not biased by the specific attack types. The use of such a diverse mixture of controlled conditions and techniques aims to mitigate internal validity threats.
Conclusion validity. Conclusions have been inferred through a careful design of the experiments. The conclusions of the study are consistent along the different dimensions of our experiments, which makes our findings reasonable and technically sound. For example, we assess the sensitivity of the IDS performance with respect to different configurations of the autoencoders, i.e., in terms of the number of hidden layers and neurons, and we present an extensive discussion of the results. The key findings of the study are consistent across the devices. The proposed approach detects the attacks at hand irrespective of the device, which means the results are not biased by a specific attack or device. More importantly, the IDS performance achieved is in line with the related literature, which provides a reasonable level of confidence in our analysis.
External validity. The cross-device model can be applied to other similar systems, types of neural networks and attacks. Nowadays, there exist many public datasets and attack tools, which make our approach feasible in practice. Our approach does not interfere with system operations: as only passive tracing is required, it is inherently nonintrusive. More importantly, there exist many products for capturing network packets and generating fixed-length records, which allow porting our method to other systems. In fact, in this paper we successfully applied the method to network traffic-transformed into a data format suited for machine learning experiments-related to several IoT devices and attack types, which mitigates external validity threats. We are confident that the experimental details provided in the paper would support the replication of our study by future researchers and practitioners.
Conclusions
Nowadays, the IoT paradigm is enabling new application scenarios. However, alongside the advances in new technologies, the number and variety of cyberattacks have grown. Ongoing projects for enhancing IoT security include methods for providing data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Nevertheless, even with these mechanisms, IoT networks are vulnerable to multiple attacks, such as botnets, aimed at disrupting the network. For this reason, another line of defense, designed for detecting attackers, is needed. An IDS is designed to fulfill this purpose. Traditional machine learning techniques have been widely applied in the literature to detect attacks in IoT scenarios. Frequently, IoT intrusion detectors are implemented by means of deep learning techniques with individual models per IoT device or per attack. However, these assumptions might not be suited to highly scalable and dynamic IoT environments.
This paper proposes a novel cross-device method, which allows learning a single IDS model on top of the traffic of the different IoT devices included in the widely used N-BaIoT dataset. In particular, the proposed approach-based on the use of an all-in-one deep autoencoder-differs from the other contributions proposed in this area, which use the training data to learn a separate IDS model per IoT device or per attack. Our results show that it is relatively easy to achieve remarkable detection results by training and testing a model on top of individual devices. The all-in-one deep autoencoding approach, instead, proves that it is possible to preserve the overall performance within 0.9994-0.9997 recall, 0.9999-1.0 precision, 0.0-0.0071 FPR and 0.9996-0.9998 F1 score, depending on the device, by training a single model with the normal traffic collected from different devices. The method paves the way for more scalable intrusion detection solutions in the context of the IoT; moreover, it is suited to the Cloud-Edge-IoT paradigm.
In the future, we will extend our analysis to further datasets and devices in order to discover potential limitations of the approach. In this respect, we believe that the transferability of the models is a primary concern. Although IoT devices might belong to the same category (e.g., doorbells or security cameras) with common physical characteristics, they can be released by different manufacturers. The model may not necessarily be transferable to all the devices of the same category but of different manufacturers. Therefore, we will test and tune our approach with a wide set of devices of the same category but from different manufacturers in order to develop an all-in-one model that is both device-independent and manufacturer-independent. With data privacy and integrity becoming a major concern, in recent years new technologies such as federated learning have emerged. They allow training machine learning models with decentralized data while preserving privacy by design. Federated learning is a collaborative learning approach where the devices interact with a centralized entity but without the need to share their data. In the future, we will also extend our approach in the context of federated learning, which can be used for intrusion detection. Furthermore, the continuing increase of new unknown attacks requires corresponding improvements to the performance of IDS solutions in order to identify zero-day attacks. Future research will also investigate how to discover and mitigate the actions attackers might take to evade detection.
Figure 2 .
Figure 2. Steps underlying the threshold selection method.
Table 1 .
IoT devices in the N-BaIoT dataset.
Table 3 .
Training, validation and test set size (all-in-one autoencoding).
Table 4 .
Separate autoencoding: layering structure of the AE (all the layers are dense).
Table 5 .
All-in-one autoencoding: layering structure of the AE (all the layers are dense). | 12,889.8 | 2023-01-07T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
A Reliable Method for Fabricating sub-10 nm Gap Junctions Without Using Electron Beam Lithography
We demonstrate a high-yield production scheme to fabricate sub-10 nm co-planar metal-insulator-metal junctions without using electron beam lithography. The fabrication procedure consists of two photolithography steps followed by shadow evaporation. Ultrasmall gaps were formed in the crossing region of the two metal layers during the evaporation of the second layer. The sizes of the gaps were estimated using scanning electron microscopy images. Poly(3-hexylthiophene-2,5-diyl) layers were deposited on the junctions using a special ink-jet technique. The results of the conductivity measurements of the molecular layer indicate that these junctions can be used in the study of molecular sensors.
I. INTRODUCTION
Molecules were proposed for use as active electrical devices as early as 1974, when Aviram and Ratner theoretically demonstrated that unimolecular rectification, or an asymmetrical electrical property, should be obtained for a single molecule [1]. Because organic molecules are among the smallest functional materials and can be mass-produced on the order of the Avogadro constant with molecular synthesis techniques, molecular devices have attracted attention as minute and low-price devices.
Recently, the miniaturization limits of fabrication are approaching the molecular size. Narrow gap electrodes have found wide acceptance in the study of the electrical properties of molecules [2][3]. Meanwhile, a variety of techniques for narrow gap junction fabrication have already been demonstrated. One of the most popular techniques is electron beam lithography (EBL) [4][5][6][7][8][9]. Commercial EBL systems have the ability to focus electrons to diameters of less than 10 nm, enabling the fabrication of structures at the nanometer level. However, because EBL is highly costly and time-consuming, procedures that do not use EBL are preferred for industrialization. Other techniques applicable to nano-fabrication include electromigration [10], shadow evaporation [11], mechanical break junctions [3], and electroplating [12]. In these techniques, the gaps were fabricated without EBL. However, all the above techniques require nano-scale pre-structures fabricated by EBL. As far as we know, no fabrication process has been reported that avoids EBL entirely and requires only µm-order patterning.
In this communication, we demonstrate a high-yield production scheme to fabricate sub-10 nm co-planar metal-insulator-metal junctions without using EBL. The fabrication procedure consists of two photolithography steps followed by shadow evaporation and does not contain any EBL process. Figure 1 shows a schematic representation of the nanogap electrodes. Ultrasmall gaps are formed in the crossing region of the two deposited metal layers during the evaporation of the second layer. The size of the ultrasmall gap can be controlled by the height of the first metal layer and the angles of the two evaporations. With this method, we could make metal electrodes with intervals ranging from 5 to 60 nm. The fabricated gaps showed a very high resistance of ∼TΩ, while with molecules connecting the electrodes over the gap, we observed characteristic conductance reflecting the molecular properties. Poly(3-hexylthiophene-2,5-diyl) layers were deposited on the electrodes using a special ink-jet technique. The measured resistance of the molecular layer clearly changed with the gap size. We consider that this change reflects the characteristic behavior of nano-scale systems. The results indicate that these junctions can be used in the study of molecular devices.
II. ELECTRODE PROCESSING
Figure 2 shows a schematic diagram of the fabrication process. A 1000-nm thick uniform film of resist material (AZ5214E, Clariant Japan) was spun on SiO2 (300 nm)/Si wafers. The wafers were patterned by conventional photolithography. After the first patterning, the first metal layer (2 nm Cr and 25-60 nm Au) was deposited at an angle θ1 from the substrate surface (Figure 2(a)). After lift-off, the second lithography process was performed with a mask of slits orthogonal to the slit on the first layer (Figure 2(b)). The width of the second slit is 2 µm, which is the minimum line width of our photolithography. The second metal layer (2 nm Cr and 20 nm Au) was deposited at an angle θ2 from the opposite side of the first deposition (Figure 2(c)). Because the height of the first strip prevented the formation of a continuous second strip when θ2 < θ1, a narrow gap was formed between the two strips. The angles of the step edges almost agree with the angles of evaporation. We adopted the condition that both angles (θ1, θ2) be less than 90 degrees. The reason for using θ1 < 90° is that a flat and sharp step wall of the first metal strip is necessary for generating a constant-width gap along the µm length. When we evaporate the first layer at θ1 = 90 degrees, we cannot avoid regions where the evaporation angle exceeds 90 degrees, because evaporation sources have a finite spatial distribution. Evaporation at over 90 degrees produces a protruding part along the sidewalls of the photo-resist, and this protrusion prevents the formation of a constant gap width. The gap size G can be controlled by the thickness of the first layer H and the edge angles of the two strips, θ1 and θ2. If we neglect the migration of metal atoms, the gap width can be estimated from these quantities (Equation (1)). After lift-off, the electrodes were cleaned in oxygen plasma before imaging by scanning electron microscopy (SEM) in order to estimate the small gap width. An estimation of the leak current was then carried out on a probe station. Figure 3(a,b) shows representative SEM images of the G = 20 nm electrodes; a small gap was observed at the intersection of metal strips A and C. The yield of the 20-nm gaps is over 90%. Figure 3(c) shows an image of the 10-nm gap electrodes (H = 40 nm, θ1 = 75°, and θ2 = 60°). Under this condition, we observed several points along the edges showing narrower intervals than the value expected from Equation (1). We consider that migration of metal or an uneven edge of the first metal strip caused such narrower intervals. The yields of the 10-nm gaps and of the 5-nm gaps (H = 25 nm, θ1 = 75°, and θ2 = 60°) were about 55% and 20%, respectively. Most of the remaining electrodes showed a short circuit (on the order of 10^3 Ω or below). Because the short circuits might be caused by metal atoms bridging the intervals, the migration and the uneven edges are a serious problem for the fabrication of sub-10-nm gap electrodes. However, such bridges make up only small parts of the entire gap, and the short-circuit points are easily removed using an electromigration method [10]. The final yield of the sub-10-nm gap fabrication is improved to over 80% by combining electromigration with our procedure. With our fabrication procedure, we made metal electrodes with gaps ranging from 5 to 60 nm with high reproducibility.
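Since Equation (1) is not reproduced here, the short sketch below gives one plausible form of the geometric estimate, G = H (cot θ2 − cot θ1), which follows from the shadowing geometry described above and is roughly consistent with the quoted examples; it should be read as a reconstruction rather than the paper's exact expression.

```python
# Hedged sketch of the geometric gap-width estimate (one plausible form of Eq. (1)).
import math

def gap_width_nm(H_nm, theta1_deg, theta2_deg):
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    return H_nm * (1.0 / math.tan(t2) - 1.0 / math.tan(t1))

print(round(gap_width_nm(40, 75, 60), 1))  # ~12 nm for the nominal 10-nm gaps
print(round(gap_width_nm(25, 75, 60), 1))  # ~8 nm for the nominal 5-nm gaps
```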
Next, we demonstrated an application of the fabricated electrodes. Poly(3-hexylthiophene-2,5-diyl) layers were deposited using a special ink-jet technique on three electrodes with G = 10, 20, and 60 nm. Figure 4(a) shows an SEM image of the molecular layer on the fabricated electrodes. The thickness of the molecular layer was about 150 nm, as measured using atomic force microscopy. Figure 4(b) shows the representative I-V characteristics of the molecular layer on the three gap electrodes. The I-V measurements were carried out in air and in the dark at room temperature, with the voltage applied between electrode 1 and electrode 2. The molecular layers were supposed to be doped with oxygen. The resistances of the bare gap electrodes were over 10 TΩ. After the deposition of the molecular layers, the measured resistances were 5 kΩ, 280 kΩ, and 200 MΩ for the 10-nm, 20-nm, and 60-nm gap electrodes, respectively. The resistivity of the molecular layer on the 60-nm gap electrodes was on the order of 10^5 Ωcm, which agreed with our previous measurements using 10-µm gap electrodes. For a bulk material, the resistance is proportional to the gap size. However, the resistances of the gaps of nearly molecular length are clearly lower than the values expected from this scaling. We conjectured that the deposited molecular layers were amorphous, consisting of small domains of ordered molecules. When the small domains directly bridge the gap, the resistance should become lower than that of the amorphous molecular layer, which may be dominated by the inter-domain resistances. The low resistances observed for the 10-nm and 20-nm gap electrodes might be caused by the direct bridging of the small domains. Such details are now under investigation, but we consider the result to be a characteristic behavior of nano-scale systems.
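A back-of-the-envelope check of this scaling argument is sketched below, assuming that a homogeneous bulk film would give a resistance proportional to the gap length at fixed cross-section and taking the measured values quoted above.

```python
# Hedged sketch: bulk-scaling expectation vs. measured resistance.
r_60nm = 200e6  # 200 MOhm measured on the 60-nm gap
for gap_nm, r_measured in [(20, 280e3), (10, 5e3)]:
    r_expected = r_60nm * gap_nm / 60.0  # simple R proportional to gap length
    print(f"{gap_nm} nm gap: expected ~{r_expected:.1e} Ohm, measured {r_measured:.1e} Ohm")
```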
IV. CONCLUSION
In conclusion, we have developed a high-yield production scheme to fabricate sub-10-nm co-planar metal-insulator-metal junctions. Our fabrication procedure consists of two photolithography steps followed by shadow evaporation. Electron beam lithography was not needed in our procedure. Ultrasmall gaps were formed in the crossing region of the two metal layers during the evaporation of the second layer. The yield of the sub-10-nm gaps is improved to over 80% by the combination of this procedure and electromigration. Poly(3-hexylthiophene-2,5-diyl) layers were deposited on the junctions using a special ink-jet technique. The measured resistances of the molecular layers were lower than the values expected from the bulk property. The results of the conductivity measurements of the molecular layer indicate that these junctions can be used in the study of molecular sensors.
FIG. 1: Side view and top view of two perpendicular metal strips evaporated on a substrate. The step height of the first layer causes a discontinuity in the second layer in the crossing region.
FIG. 2: Schematic diagram of the fabrication process, (a)-(c).
FIG. 3 :
FIG. 3: Characterization of the electrodes.(a) Representative SEM image of the G=20 nm electrodes.(b) A magnified image of the area indicated by the white dashed rectangle in (a).(c) Representative SEM image of the G ∼10 nm electrodes.
FIG. 4 :
FIG. 4: (a) SEM image of a deposited molecular layer on the fabricated electrodes.(b) I-V characteristic of bare electrodes and molecular layers deposited on G=10, 20, and 60 nm electrodes. | 2,248.4 | 2003-01-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Characterizing the Typical Information Curves of Diverse Languages
Optimal coding theories of language predict that speakers will keep the amount of information in their utterances relatively uniform under the constraints imposed by their language, but how much do these constraints influence information structure, and how does this influence vary across languages? We present a novel method for characterizing the information structure of sentences across a diverse set of languages. While the structure of English is broadly consistent with the shape predicted by optimal coding, many languages are not consistent with this prediction. We proceed to show that the characteristic information curves of languages are partly related to a variety of typological features from phonology to word order. These results present an important step in the direction of exploring upper bounds for the extent to which linguistic codes can be optimal for communication.
Introduction
One of the defining features of human language is its power to transmit information. We use language for a variety of purposes like greeting friends, making records, and signaling group identity. These purposes all share a common goal: Transmitting information that changes the mental state of our listener [1]. For this reason, we can describe language as a cryptographic code, one that allows speakers to turn their intended meaning into a message that can be transmitted to a listener, and subsequently converted by the listener back into an approximation of the intended meaning [2].
How should we expect this code to be structured? If language has evolved as a code for information transmission, its structure should reflect this optimization [3], and the optimal code would have to work with two competing pressures: (1) the need for listeners to decode the speaker's message easily and successfully and (2) for the speaker to code a message to transmit it to a listener with minimal effort and error. A fundamental constraint on both of these processes is the linear order of spoken language-sounds are produced one at a time, so each is perceptually unavailable once it is no longer being produced.
Humans accommodate this linear order constraint through incremental processing: People process speech continuously as it arrives, predicting upcoming words and building expectations about the meaning of an utterance in real time rather than at its conclusion [4][5][6]. This solution creates a new pressure on speakers. Since prediction errors can lead to severe processing costs and to difficulty for listeners in integrating new information, speakers should seek to minimize prediction errors. However, producing more predictable utterances comes at a cost, since it entails using more words. Thus, the most efficient strategy for speakers to minimize production costs is to produce utterances that are just at the prediction capacity of listeners but do not exceed it [7][8][9]. In other words, a speaker should maintain a constant transmission of information that is as close to the listener's fastest decoding rate as possible. The hypothesis that speakers follow this optimal strategy is known as the "Uniform Information Density (UID)" hypothesis.
Using information theory, a mathematical framework for formalizing predictability, researchers have tested and confirmed this optimal coding prediction across several levels and contexts in language production. For example, Genzel and Charniak [8] provided a clever indirect test of UID across sentences in a paragraph. They showed that the predictability of successive sentences, when analyzed in isolation, decreases, which would be expected if readers used prior sentences to predict the content of subsequent sentences. Thus, accounting for the increasing amount of context, they found that total predictability remained constant. At the level of individual words, Mahowald et al. [10] showed that speakers used shorter alternatives of more predictable words, maximizing the amount of information in each word while minimizing the time spent on those words.
Other research has suggested that efficient encoding influences how speakers structure units between words and sentences. The inclusion of complementizers in relative clauses [11] and the use of contractions [12] are two situations in sentence formation where speakers can omit or reduce words to communicate more efficiently and maximize use of the communication channel without exceeding the listener's capacity.
How languages evolve is shaped by efficient communication as well. Piantadosi et al. [13] showed that the more easily predictable words in a language tend to become shorter over time, maximizing the amount of information transmitted per second. Semantic categories of words across languages can also evolve to be structured efficiently. Categories such as kinship terms [14] maintain a trade-off between informativeness and complexity. Structure in language evolved from a trade-off between efficient and learnable encoding on the one hand and an expressive and descriptive lexicon on the other [15]. Over the course of evolution, languages may come to describe efficiently the particular environment in which they are spoken: features of the world that are relevant to speakers become part of a language, while irrelevant features are disregarded [16].
However, speakers are still bound by syntactic rules. While complementizers are often optional, determiners are not. Similarly, speakers may have a choice about which of several near-synonyms they produce, but they cannot choose the canonical order of subjects, verbs, and objects. Properties of a language, like canonical word order, impose top-down constraints on how speakers can structure what they say. While speakers may produce utterances as uniform in information density as their languages will allow, these top-down constraints may create significant and unique variation across languages. How significant are a language's top-down constraints on determining how its speakers structure their speech? Yu et al. [17] analyzed how the information in words of English sentences of a fixed length varies with their order in the sentence (e.g., first word or second word). They found a surprising non-linear shape, and argued that this shape may arise from top-down grammatical constraints. In this paper, we replicate their analysis and extend their ideas. We ask: (1) Whether this shape is affected by the amount of context in prediction, (2) whether this shape varies across written and spoken language, and (3) whether this shape is broadly characteristic of a diverse set of languages or varies predictably from language to language. We found that languages are characterized by highly reliable but cross-linguistically variable information structures that co-vary with top-down linguistic features. However, using sufficient predictive context partially flattens these shapes across languages in accord with predictions of the UID hypothesis.
Study 1: The Shape of Information in Written English
Genzel and Charniak [8] performed an influential early test of the Uniform Information Density hypothesis, by analyzing the amount of information in successive sentences in a corpus. They reasoned that if speakers kept the amount of information in each sentence constant, but readers of the text processed each successive sentence in the context of the prior sentences, then a predictive model that does not have access to this prior context should find each successive sentence more surprising. To test this hypothesis, they used a simple n-gram model in which the surprisal of each word is computed as the probability of it following each of the prior n-words. They found that the surprisal-per-word of sentences later in an article was higher than for earlier sentences.
Yu et al. [17] applied this same logic to each word in a sentence. They reasoned that if speakers were processing each word in the context of all prior words in the sentence, but their predictive model ignored this context by considering the base-rate entropy of each word, they would observe the same monotonically increasing surprisal as a function of word position within each sentence. However, this is not what they observed. Instead, they found a characteristic three-step distribution: Words early in an utterance had very low information, then information was constant throughout most of the utterance; then there was a slight dip; finally there was a steep rise for the final word. Yu et al. [17] interpreted this uneven information curve as evidence against the UID Hypothesis. Unlike Genzel and Charniak's results, information plateaued in the middle of sentences and then dipped instead of rising throughout.
However, Yu et al. [17]'s analysis left some open questions. First, they used an unusual metric to quantify the information in each word. They computed the average surprisal of all the words in a given word position, weighted by their frequency of appearance in that position. Their formula is given by H(X) = − ∑_w P(w ∈ X) log P(w), where X is a word position (e.g., first, fifth, final) and w is a word occurring in position X. If the goal is to consider the surprisal of each word in a model that does not use prior context, then the model should not consider sentence position either. Second, with the exception of the small dip that appears near the end of the sentence, the shape is roughly consistent with the predicted monotonically increasing per-word surprisal. Third, it is difficult to know whether the characteristic shape generalizes across sentences of all lengths rather than just the three particular lengths that Yu et al. [17] analyzed. Finally, it would be ideal to estimate the characteristic information profiles of sentences when context is considered, not just in the absence of the context of prior words; that is, ideally we could observe the uniform surprisal of words directly for a model of a reader and not just indirectly for a model of a context-less reader.
In Study 1, we attempted to resolve these issues. We first replicated Yu et al.'s analysis using a standard unigram language model and then used a trigram model to show that information in English sentences is significantly more uniform when words are processed in context. Finally, we introduced a method for aggregating across sentences of different lengths to produce a single characteristic profile for English sentences and show that they are broadly consistent with a Uniform Information Density prediction akin to Genzel and Charniak [8].
Data
Following Yu et al. [17], we selected the British National Corpus (BNC) to estimate the information in English sentences [18]. The BNC is an approximately 100-million-word corpus, consisting mainly of written English (90%) along with some transcriptions of spoken English (10%), collected by researchers at Oxford University in the 1980s and 1990s. The BNC is intended to be representative of British English at the end of the 20th century and contains a wide variety of genres (e.g., newspaper articles, pamphlets, fiction novels and academic papers).
We began with the XML version of the corpus, and used the justTheWords.xsl script provided along with the corpus to produce a text file with one sentence of the corpus on each line. Compound words (like "can't") were combined, and all words were converted to lowercase before analysis. This produced a corpus of just over six million utterances of varying lengths. From these, we excluded utterances that were too short to allow for reasonable estimation of information shape (fewer than 5 words), and utterances that were unusually long (more than 45 words). This exclusion left us with 89.83% of the utterances (Figure 1).
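For readers who want to reproduce this filtering step, a minimal sketch is shown below; the file path and helper name are placeholders, and the inclusive 5-45 word bounds follow the exclusion criteria stated above:

```python
def load_sentences(path, min_len=5, max_len=45):
    """Read a one-sentence-per-line file, lowercase it, and keep only
    sentences of 5-45 whitespace-delimited words (inclusive)."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.lower().split()
            if min_len <= len(tokens) <= max_len:
                kept.append(tokens)
    return kept
```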
Models
To estimate how information is distributed across utterances, we computed the lexical surprisal of words in each sentence position under two different models. First, we estimated a unigram model which considered each word independently. This unigram surprisal measure was a direct transformation of the word's frequency and thus less frequent words were more surprising.
Second, we estimated a trigram model in which the surprisal of a given word w_i encoded how unexpected it was to read after reading the prior two words w_{i−1} and w_{i−2}: surprisal(w_i) = −log P(w_i | w_{i−2}, w_{i−1}). This metric encodes the idea that low-frequency words in isolation (e.g., "meatballs") may become much less surprising in certain contexts (e.g., "spaghetti and meatballs") but more surprising in others (e.g., "coffee and meatballs"). Because the problem of estimating probabilities grows combinatorially with the number of prior words due to sparsity, we chose a trigram rather than a quadgram model as in Genzel and Charniak [8]. In practice, trigram models perform well as an approximation (see e.g., [19,20]).
We estimated these models using the KenLM toolkit [21]. Each utterance was padded with a special start-of-sentence token "<s>" and end-of-sentence token "</s>". Trigram estimates did not cross sentence boundaries; for example, the surprisal of the second word in an utterance was estimated as surprisal(w_2) = −log P(w_2 | <s>, w_1).
Naïve trigram models will underestimate the surprisal of words in low-frequency trigrams (e.g. if the word "meatballs" appears only once in the corpus following exactly the words "spaghetti and", it is perfectly predictable from its prior two words). To avoid this underestimation, we used modified Kneser-Ney smoothing as implemented in the KenLM toolkit [21]. Briefly, this smoothing technique discounted all n-gram frequency counts, which reduced the impact of rare n-grams on probability calculations, and interpolated lower-order n-grams, which were weighted in the calculations according to the number of distinct contexts in which they occurred as a continuation. For example, "Francisco" may be a common word in a corpus, but likely only occurred after "San" as in "San Francisco", so it received a lower weighting. For a thorough explanation of modified Kneser-Ney smoothing, see Chen and Goodman [19].
To prevent overfitting, we computed the surprisal of words in sentences using cross-validation. We divided the corpus into 10 sub-corpora of equal length. For each sub-corpus, we fit the unigram and trigram models on all of the other sub-corpora, and then used this model to estimate the surprisal of words in the held-out sub-corpus.
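A minimal sketch of this estimation pipeline is given below, assuming KenLM's lmplz binary is on the PATH and using its Python bindings; the file names and fold handling are placeholders rather than the authors' exact scripts:

```python
import math
import subprocess
import kenlm  # KenLM Python bindings

LOG10_TO_BITS = 1.0 / math.log10(2.0)  # convert log10 probabilities to bits

def train_ngram(train_path, arpa_path, order=3):
    """Fit a Kneser-Ney-smoothed n-gram model with KenLM's lmplz.
    lmplz reads plain text from stdin and writes an ARPA file to stdout."""
    with open(train_path, "rb") as fin, open(arpa_path, "wb") as fout:
        subprocess.run(["lmplz", "-o", str(order)], stdin=fin, stdout=fout, check=True)

def per_word_surprisal(model, sentence):
    """Surprisal (bits) of each word in `sentence` (a whitespace-joined string).
    full_scores pads with <s>/</s>; the final </s> score is dropped so one
    value per word remains, mirroring the per-position analysis."""
    scores = list(model.full_scores(sentence, bos=True, eos=True))
    return [-log10_prob * LOG10_TO_BITS for log10_prob, _, _ in scores[:-1]]

# 10-fold scheme (sketch): for fold k, train on the other 9 sub-corpora
# concatenated into train_k.txt, then score the held-out sentences:
# train_ngram("train_0.txt", "fold_0.arpa")
# model = kenlm.Model("fold_0.arpa")
# surprisals = [per_word_surprisal(model, " ".join(toks)) for toks in heldout]
```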
Characteristic Information Curves
To develop a characteristic information curve for sentences in the corpus, we needed to aggregate sentences that varied dramatically in length (Figure 2A). We used Dynamic Time Warping Barycenter Averaging (DBA), an algorithm for finding the average of sequences that share an underlying pattern but vary in length [22].
Dynamic time warping is a method for calculating an alignment between two sequences in which one can be produced by warping the other. Canonically, there is a template sequence (e.g., a known vowel's acoustic profile, a known walker's motion-capture limb positions) and a set of unknown sequences that may be new instances of the template sequence produced by perturbations in speed or acceleration (e.g., extending or shortening the vowel, walking faster or slower). Dynamic time warping works by finding a partial match between the known template and the unknown instance under the constraint that each point in the instance must come from a point in the template, and that the ordering of the points must be preserved; however, multiple points in the sequence can match one point in the template (i.e., lengthening) and multiple points in the template can match one point in the sequence (i.e., shortening).
Dynamic time warping barycenter averaging inverts standard dynamic time warping: it discovers a latent invariant template from a set of sequences rather than identifying new instances of a known template. We used DBA to discover the short sequence of values that characterized the surprisal curves common to sentences of varying lengths. We first averaged individual sentences of the same length together and then applied the DBA algorithm to this set of average sequences.
We used the implementation of DBA in the Python package tslearn [23], which fit the barycenter to a time-series dataset through the expectation-maximization algorithm (EM) [24]. DBA in this implementation allowed us to specify the size of the barycenter. Because of the characteristic shape observed by Yu et al. [17] and found in our data (Figure 1), we chose a barycenter of length 5 to capture the varying information slopes across sentences. However, all of the results we report in this Study and in others were similar for barycenters of varying lengths.
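The analysis can be sketched with tslearn as follows; the surprisal curves here are random placeholders standing in for the per-length average curves described above:

```python
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.utils import to_time_series_dataset

# Each entry stands in for the mean surprisal curve of one sentence length
# (5..45 words), so the sequences vary in length; to_time_series_dataset
# pads the shorter ones with NaN.
length_curves = [np.random.rand(n) for n in range(5, 46)]  # placeholder data
dataset = to_time_series_dataset(length_curves)

# A 5-point barycenter, as in the paper; EM-style averaging under DTW alignment.
barycenter = dtw_barycenter_averaging(dataset, barycenter_size=5, max_iter=50)
print(barycenter.ravel())
```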
Results and Discussion
We began by replicating Yu et al.'s analyses with a standard unigram model, examining the surprisal of words in sentences of lengths 15, 30, and 45, as they did. In line with their findings, we found a reliably non-linear shape in sentences of all 3 lengths, with the information in each word rising for the first two words, plateauing in the middle of sentences, dipping in the penultimate position, and rising steeply on the final word (Figure 2A). Qualitatively, we found the same shape in utterances of all other lengths we sampled, from utterances with 5 words to utterances with 45 words.
In comparison, under the trigram model, we observed 3 major changes. First, each word contained significantly less information. This was to be expected, as knowing the two prior words made it much easier to predict the next word. Second, the fall and peak at the ends of the utterances were still observable, but much less pronounced. Finally, the first word of each sentence was now much more surprising than the rest of the words in the sentence, because the model had only the start-of-sentence token <s> to use as context. Thus, the trigram model likely overestimates the information for humans reading the first word. Together, these results suggest that Yu et al. [17] overestimated the non-uniformity of information in sentences. Nonetheless, the final words of utterances do consistently contain more information than the others. Figure 2B shows the barycenters produced by the dynamic time warping barycenter averaging algorithm (DBA). It correctly recovers both the initial and final rise in information under the unigram model and the initial fall and smaller final rise in the trigram model. We took this as evidence that (1) these shapes were characteristic of all lengths, and (2) that DBA effectively recovered the characteristic information structure.
In sum, the results of Study 1 suggested that sentences of written English have a characteristic non-uniform information structure, with information rising at the ends of sentences. This structure is more pronounced when each word is considered in isolation, but some of the structure remains even when each word is considered in context. These results are broadly consistent with the predictions of a Uniform Information Density account: information increases over the course of sentences but approaches uniformity as more context is considered.
Is this structure unique to written English, or does it characterize spoken English as well? In Study 2, we applied this same analysis to two corpora of spoken English-the first of adults speaking to other adults, and the second of adults and children speaking to each other.
Study 2: Information in Spoken English
Spoken language is different from written language in several respects. First, the speed at which it can be processed is constrained by the speed at which it is produced. Second, speech occurs in a multimodal environment, providing listeners with information from a variety of sources beyond the words conveyed (e.g., prosody, gesture, world context). Finally, both words and sentence structures tend to be simpler as they must be produced and processed in real time [25]. Thus, sentences of spoken English may have different information curves than those of written English.
The language young children hear is further different from the language adults speak to each other. Child-directed speech tends to be simpler in a number of dimensions, including the lengths and prosodic contours of utterances, the diversity of words, and the complexity of syntactic structures [26]. The speech produced by young children is even more distinct from adult-adult speech, replete with simplifications and modifications imposed by their developing knowledge of both the lexicon and grammar [27]. In Study 2, we asked whether spoken English, produced both by adults and children, had the same characteristic information shape as written English.
Data
To estimate the information in utterances of adult-adult spoken English, we used the Santa Barbara Corpus of Spoken American English, a ∼250,000 word corpus of recordings of naturally occurring spoken interactions from different regions of the United States [28]. For parent-child interactions, we used all of the North American English corpora in the Child Language Data Exchange System (CHILDES) hosted through the childes-db interface [29,30]. We selected for analysis ∼1 million utterances produced by children (mostly under the age of five), and ∼1.7 million utterances produced by the parents of these children.
Models
All pre-processing and modeling details were identical to Study 1 except for the selection of sentences for analysis. Because the utterances in both the Santa Barbara Corpus and CHILDES were significantly shorter than the sentences in the British National Corpus, we analyzed all utterances of at least 5 and at most 15 words (see Figure 3A). Models were estimated separately for each of the 3 corpora.
Results and Discussion
The information curves found in adult-adult utterances were quite similar to those for parent-child and child-parent speech (Figure 3B). Under the unigram model, information rose steeply at the beginning of utterances, was relatively flat in the middle, and then rose even more steeply at the end. Under the trigram model, the curve was nearly identical. Both curves were similar in shape to the characteristic curve for written English estimated in Study 1.
Curves for parent and child speech were similar to those of adults with some small differences. Unigram curves for both monotonically increased although the largest jump in the parent curve occurred in the 4th barycenter position rather than the third. Trigram curves were also broadly similar although the parent curve dipped between the first and second position. This decreasing slope was inconsistent with the UID prediction of monotonic increase. It is possible that this decrease was a real feature of speech to children-parents may begin utterances to children more variably than when they speak to other adults.
Overall, the preponderance of evidence suggested that the characteristic shape of the information curve for spoken English is similar to that of written English, which appears to be approximately true in speech to and by children. All of these curves are broadly consistent with predictions from a UID account, according to which information measured without context should increase, and that including some context will reduce the rate of increase.
In Study 3 we applied this technique to a diverse set of written languages of different families to ask whether these structures vary cross-linguistically.
Study 3: Language Structures and Large-Scale Data Analysis
In Study 3, we applied the same method as in Studies 1 and 2 to a diverse set of over 200 natural languages represented on Wikipedia. In our prior studies, we found that the distribution of information in English sentences is broadly consistent with predictions of a Uniform Information Density account: information roughly rises over sequential words in a sentence, and further information rises more slowly when more prior context is used to predict the next word. In Study 3 we asked whether this same pattern characterizes natural languages in general and whether variability in the characteristic information curves of languages is related to their typological features.
Data
To measure cross-linguistic variation in the structure of information across sentences, we constructed a corpus of Wikipedia articles using the Wikiextractor tool from the Natural Language Text Analytics (TANL) pipeline [31]. We retained all languages with at least 10,000 articles, resulting in data from a set of 234 languages from 29 families.
To understand how potential variations in information curve shape are related to the structure of these languages, we used the available typological feature information in the World Atlas of Language Structures (WALS) [32], which has data for 144 typological features in thousands of languages. These features describe aspects of morphology, syntax, phonology, etymology and semantics-in short the features describe the structures in each language.
There are several categories of WALS features. Phonology describes sounds, stress, intonation, and syllable structure in each language. Nominal categories describe the morphology and semantics of nouns, including features for cases, definiteness and plurals. Verbal categories describe analogous verb features, focusing on tense, mood and aspect. Nominal syntax features describe a heterogeneous collection of noun phenomena, focusing on possessives and adjectives. Word order features describe not only canonical ordering of the subject, object and verb but also the orderings of heads and modifiers, relative clauses and other orderings. Simple clause features describe the syntax and organization of single clauses, such as passive and negative constructions.
Models
Models were estimated separately for each language using the same procedures as in Study 1. To accommodate the variety of lengths across the language corpora, we analyzed sentences of 5-45 words, the boundaries of which were assumed to be identified by spaces as in Studies 1 and 2. The Wikipedia entries for all languages were hand-checked to the best of our abilities to ensure that this assumption was appropriate, but it is a potential source of noise in the analyses.
For each pair of languages, we derived two pairwise similarity measures. To estimate information structure similarity, we first centered each language's 5-point barycenter curve (since surprisal correlates strongly with corpus size), and then computed the cosine similarity between the two centered curves. To compare typological similarity, we computed the proportion of features in WALS on which the two languages shared the same feature value. Because WALS is a compiled collection from the fieldwork of many linguists rather than a complete analysis by a single group, its features vary in the number of values they take. Some, like Vowel Quality Inventories (2A), have multiple values but also a natural ordering (small inventory, medium inventory, large inventory). However, others, like Position of Tense-Aspect Affixes (69A), have multiple values with no obvious ordering (prefixes, suffixes, tone, combination with no primary, none). For this reason, we used an exact match as our distance function, but other more sensitive metrics could be used by other researchers.
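Both pairwise measures are straightforward to compute; the sketch below assumes each language's curve is a NumPy array and its WALS features are stored as a dictionary from feature ID to value (the data structures and names are ours, not the authors'):

```python
import numpy as np

def curve_similarity(curve_a, curve_b):
    """Cosine similarity of two mean-centered 5-point barycenter curves."""
    a = curve_a - curve_a.mean()
    b = curve_b - curve_b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def typological_similarity(feats_a, feats_b):
    """Proportion of WALS features with identical values, counting only
    features coded for both languages; returns None if there is no overlap."""
    shared = [f for f in feats_a if f in feats_b]
    if not shared:
        return None
    return sum(feats_a[f] == feats_b[f] for f in shared) / len(shared)
```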
Finally, because of the collaborative construction of WALS, many features are missing in many languages. For this reason, only features present in WALS for both languages in a pair were considered when estimating their proportion of shared features. We also attempted to address this sparsity problem by imputing missing feature data where possible using multiple imputation with multiple correspondence analysis (MIMCA) [33]. The algorithm begins with mean imputation, converts the categorical WALS features into a numerical contingency table with dummy coding, and then repeatedly performs principal component analysis (PCA) and reconstructs the contingency table. Results of analyses using imputed features were qualitatively comparable to those performed on raw features, but with weaker correlations, as described below. Nonetheless, we reported both measures as they may be of use in future research.
Results and Discussion
In Studies 1 and 2, we observed the characteristic information curve of English to be generally increasing in both unigram and trigram models. In Study 3, we first replicated this analysis on the larger English language sample in the Wikipedia corpus (Figure 4). The unigram information curve for English estimated on Wikipedia was nearly identical to the curve observed for the British National Corpus in Study 1 (see Figure 2B). The trigram curve had a more pronounced dip in the 4th coordinate, but otherwise maintained the general shape and characteristic final rise we had previously observed. This shape did not, however, characterize all of the 234 languages represented in Wikipedia. Languages varied widely in the shape of their information curves, both in the unigram and in the trigram model. Under the unigram model, some languages, like Spanish and German, were generally increasing like English. Others, like Hindi and Chinese, had negative slopes in their characteristic curves, while others like Urdu had a mix of positive and negative slopes. Under the trigram model, languages also varied, with reliable negative slopes in a number of languages including Russian and Urdu.
In lieu of presenting characteristic curves for all languages, Figure 5 shows them for each of the six language families in which at least 10 languages were represented in Wikipedia. These curves were produced by first centering the surprisal for each language so that the mean word surprisal was 0, thereby deconfounding differences due to corpus size, and then applying the barycenter averaging algorithm to the curves from all languages in the relevant family. Two main trends were apparent. First, no language family had a monotonically increasing curve, nor was the mean curve over all languages monotonically increasing, under either the unigram or the trigram model. Thus, the UID prediction did not appear to hold for languages other than English. Second, information curves for distinct families were more different from each other under the trigram model. To confirm this statistically, we computed the variance in surprisals at each of the 5 barycenter positions across all 234 languages. The average variance across the positions under the unigram model was 0.077 with a 95% confidence interval of (0.05, 0.104). The average variance under the trigram model was 0.163 (0.113, 0.218). Matching our observation, these confidence intervals did not overlap.
Together, these results suggested that the characteristic information curves of different languages may diverge from the UID prediction because of the influence of idiosyncratic syntactic properties that prevent them from being optimal codes. If this is the case, then languages that are more similar phylogenetically may also have more similar information curves. Figure 6 shows a dendrogram produced by hierarchical clustering on the basis of the cosine similarity of information curves. All of the languages that belong to the families shown in Figure 5 are represented. Although not all members of each language family are clustered closely together, some structure is certainly apparent. For instance, all of the Austronesian languages are quite similar.
To test the hypothesis that linguistic similarity leads to information curve similarity, we considered the relationship between the typological features of languages and their characteristic information curves. For each pair of languages, we computed curve similarity using cosine distance, and typological similarity using the number of shared WALS features, both raw and imputed. Under the unigram model, the two similarity measures were significantly but very weakly correlated (r_raw = 0.056, t_raw = 3.786, p_raw < 0.001; r_imputed = 0.025, t_imputed = 2.715, p_imputed = 0.007). Under the trigram model, this correlation was still low but an order of magnitude stronger (r_raw = 0.203, t_raw = 13.899, p_raw < 0.001; r_imputed = 0.124, t_imputed = 13.349, p_imputed < 0.001). To understand which typological features contributed to these similarities, we split the WALS features by type: for example, nominal categories and nominal syntax describe morphology, while word order describes subject-verb-object and head-modifier word orders. Figure 7 shows the correlation between the similarity of information curves under both the unigram and trigram models and the number of features of each type that the two languages shared. Under the unigram model, word order features, and possibly nominal category features, appeared to predict information curve similarity. In contrast, under the trigram model, all feature types except for possibly nominal syntax were reliably correlated with information curve similarity. Thus, almost all of the typological features represented in WALS were reflected in the characteristic information curves. These analyses suggest that languages may be pressured to be optimal codes, but that the historical influences on the structures of languages may place limits on the efficiency of these codes.
General Discussion
Why do languages have the regularities we observe? The particular features of any one language may owe their origin to a diverse set of ecological pressures, features of the population of speakers, or other idiosyncratic causes [34,35]. Despite the difficulty of explaining variability across languages, significant progress in explaining universals was made by taking an optimal coding perspective on language. If languages evolved to be efficient codes for transmitting information, they should all have certain predictable features [2,3].
One such predicted feature has been called Uniform Information Density: speakers should try to keep the amount of information they transmit constant rather than produce spikes that lead to difficulties in comprehension [7][8][9]. Genzel and Charniak [8] developed a clever method to confirm this prediction at the sentence level within an article, and a variety of other studies have confirmed similar results at a variety of other levels of language [11,36]. However, recent work from Yu et al. [17] suggested that this prediction may not hold at the level of subsequent words within a sentence.
In this paper, we built on a method employed by Yu et al. [17] to develop a novel means of quantifying how information is typically distributed across the words of a sentence. We showed that English, whether written or spoken, produced by adults or by children, has a prototypical information structure, which is broadly in line with the predictions of the Uniform Information Density Hypothesis.
However, the same prediction did not hold for many other languages. Instead, the characteristic information curves of languages are at least partially influenced by a variety of features of their typological structure (e.g., word order). These top-down constraints appear to place limits on the extent to which languages can approximate optimal codes. These results represent a small first step towards answering the question of how much these constraints shape a speaker's production and how a speaker interacts with them. Data Availability Statement: All data and code for this study are available in a public GitHub repository at https://github.com/jklafka/language-modeling (accessed on 26 September 2021).
Conflicts of Interest:
The authors declare no conflict of interest. | 7,700 | 2021-10-01T00:00:00.000 | [
"Linguistics",
"Computer Science"
] |
Deep Predictive Video Compression Using Mode-Selective Uni- and Bi-Directional Predictions Based on Multi-Frame Hypothesis
Recently, deep learning-based image compression has shown significant performance improvement in terms of coding efficiency and subjective quality. However, there has been relatively less effort on video compression based on deep neural networks. In this paper, we propose an end-to-end deep predictive video compression network, called DeepPVCnet, using mode-selective uni- and bi-directional predictions based on a multi-frame hypothesis with a multi-scale structure and a temporal-context-adaptive entropy model. Our DeepPVCnet jointly compresses motion information and residual data that are generated from the multi-scale structure via the feature transformation layers. Recent deep learning-based video compression methods were proposed in a limited compression environment using only P-frames or B-frames. Learning from the lessons of the conventional video codecs, we first incorporate a mode-selective framework into our DeepPVCnet with uni- and bi-directional predictive modes in a rate-distortion minimization sense. Also, we propose a temporal-context-adaptive entropy model that utilizes the temporal context information of the reference frames for the current frame coding. The autoregressive entropy models for CNN-based image and video compression are difficult to compute with parallel processing. On the other hand, our temporal-context-adaptive entropy model utilizes temporally coherent context from the reference frames, so that the context information can be computed in parallel, which is computationally and architecturally advantageous. Extensive experiments show that our DeepPVCnet outperforms AVC/H.264, HEVC/H.265 and state-of-the-art methods from an MS-SSIM perspective.
I. INTRODUCTION
Conventional video codecs such as AVC/H.264 [45], HEVC/H.265 [38] and VP9 [29] have shown significantly improved coding efficiencies, especially by enhancing their temporal prediction accuracies for the current frame to be encoded using its adjacent frames. In particular, there are three coding modes of frames used in video compression: I-frame (intra-coded frame) mode, which is compressed independently from its adjacent frames; P-frame mode, which is compressed through forward prediction using motion information; and B-frame mode, which is compressed with bi-directional prediction for the current frame. P-frame coding is suitable for low latency in video compression. In terms of coding efficiency, B-frame coding provides the highest coding efficiency compared to I-frame and P-frame coding. Therefore, the standard codecs [38], [45] use both P-frame and B-frame coding methods for video coding.
Deep learning-based approaches have recently shown significant performance improvement in image processing. Especially, in the field of low-level computer vision, intensive research has been conducted on deep learning-based image super-resolution [12], [18], [20], [24] and frame interpolation [15], [28], [30]-[32]. In addition, there are many recent studies on image compression using deep learning [5], [6], [16], [21], [23], [27], [35], [40]-[42], which often incorporate auto-encoder based end-to-end image compression architectures to improve compression performance. These works showed better coding efficiency than traditional image compression methods such as JPEG [43], JPEG2000 [37], and BPG [7]. While image compression can reduce only the spatial redundancy among neighboring pixels, with limited coding efficiency, traditional video compression can achieve significant compression performance because it can also take advantage of the temporal redundancy among neighboring frames. By exploiting this temporal redundancy, deep learning-based video compression has been studied in two main directions: First, some components (or coding tools) in the conventional video codecs are replaced with deep neural networks. For example, Park and Kim [33] first tried to improve compression performance by replacing the in-loop filters of HEVC with a CNN-based in-loop filter. In [10], Cui et al. proposed an intra-prediction method with CNN in HEVC to improve compression performance. In [51], Zhao et al. replaced the bi-prediction strategy in HEVC with a CNN to improve coding efficiency. Second, there are studies that improve compression performance by using auto-encoder based end-to-end neural network architectures [4], [9], [11], [13], [25], [26], [36], [46], [47]. Although deep learning-based image compression has been intensively studied, video compression has drawn less attention. In this paper, we propose an end-to-end deep predictive video compression network, called DeepPVCnet, using mode-selective uni- and bi-directional predictions based on a multi-frame hypothesis with a multi-scale structure and a temporal-context-adaptive entropy model. The contributions of our proposed DeepPVCnet are as follows:
• We first show a mode-selective framework with both uni- and bi-directional predictive coding structures for deep learning-based predictive video compression in the rate-distortion minimization sense, thus achieving improved coding efficiency. The selected mode information for frame prediction is transmitted to the decoder side with a negligible amount of bits;
• We propose a temporal-context-adaptive entropy model that utilizes temporally coherent context information from the multiple reference frames to estimate the parameters of Gaussian entropy models for the quantized latent representation of the current frame. While the autoregressive entropy models for CNN-based image compression suffer from serialized processing, our temporal-context-adaptive entropy model allows for context computation in parallel;
• Our DeepPVCnet jointly compresses motion and residual information based on a multi-scale structure for the current frame and its reference frames via learned feature transformations on the encoder side. This structure can effectively reduce the coupled redundancy of motion and residual information;
• Contrary to the deep neural network-based state-of-the-art (SOTA) methods [26], [46] that rely on a single reference frame for each prediction direction, our method improves prediction accuracy for the current frame by utilizing multiple frames for both uni- and bi-directional prediction modes.
This paper is organized as follows: Section II introduces the related work on deep neural network-based image/video compression, optical flow estimation and frame interpolation; In Section III, we introduce the details of our proposed deep video compression network, called DeepPVCnet; Section IV presents the experimental results to show the effectiveness of our DeepPVCnet compared to the conventional video codecs and SOTA methods [4], [13], [25], [26], [46], [47]; Finally, we conclude our work in Section V.
II. RELATED WORK
Both conventional image compression (such as JPEG, JPEG2000, and BPG) and video compression (AVC/H.264, HEVC, and VP9) methods have shown high compression performance. Recently, deep learning-based image and video compression methods have been actively studied. The key element that brings up high coding efficiency in video coding is temporal prediction to reduce temporal redundancy. Therefore, we also review deep learning-based optical flow estimation and frame interpolation networks that are essential elements for predictive coding.
Deep Learning-Based Image Compression: Unlike conventional image compression based on transform coding, recent deep learning-based image compression methods often adopt auto-encoder structures that perform nonlinear transforms. First, there are several works on image compression using Long Short Term Memory (LSTM)-based auto-encoders [16], [41], [42] where a progressive coding concept is used to encode the difference between the original image and the reconstructed image. In addition, there are studies on image compression using convolutional neural network (CNN) based auto-encoder structures by modeling the feature maps of the bottleneck layers for entropy coding [5], [6], [21], [23], [27], [35], [40]. In [6], Ballé et al. introduced an input-adaptive entropy model that estimates the scales of the latent representations depending on the input. In [21], Lee et al. have proposed a context-adaptive entropy model for image compression which uses two types of contexts: bit-consuming context and bit-free context. Their models in [6], [21] outperformed the conventional image codecs such as BPG. Our DeepPVCnet also adopts such an auto-encoder structure used in [6] as the baseline structure combined with our temporal-context-adaptive entropy model.
Deep Learning-Based Video Compression: There are two main directions of deep learning-based video compression research: The first is to replace existing components of the conventional video codecs with deep neural networks (DNN). For example, there are works that replace the in-loop filters with deep neural networks [14], [17], [33], [49], and works on post-processing to enhance the resulting frames of the conventional video codecs [22], [48]. The intra/inter predictive coding modules have also been substituted with DNN modules for video coding [10], [33]. The second direction includes CNN-based auto-encoder structures without the coding tools of conventional video codecs involved. In [26], Lu et al. proposed the first end-to-end deep video compression network that jointly optimizes all the components for low-latency scenarios of video compression. Then, Lin et al. in [25] extended it by utilizing multiple reference frames for low-latency scenarios of video compression. In [36], Rippel et al. proposed a novel video compression framework with propagation of the learned state and ML-based spatial rate control. In [4], Agustsson et al. proposed a low-latency video compression model based on the scale-space flow for better handling of disocclusions and fast motion. However, the methods in [4], [25], [26], [36] have been proposed for P-frame predictive coding, which only uses previously encoded frames to predict the current frame. In [46], Wu et al. proposed a ConvLSTM-based video compression method to improve the coding efficiency. However, this method used conventional block motion estimation and compression methods, which degrade the coding efficiency. In [9], Cheng et al. proposed a frame-interpolation-network-based deep video compression. Since this work utilized a pre-trained frame interpolation network [32] between two frames that are far from each other, the prediction performance is significantly reduced. The method in [11] uses both an interpolation- and residual-based autoencoder for B-frame coding. It shows good performance at high bitrates, but not at low or mid bitrates. In [47], Yang et al. proposed a hierarchical learned video compression method with hierarchical quality layers and a recurrent enhancement network. The method in [13] incorporates a 3D auto-encoder that does not use the P-frame and B-frame coding concept. However, it requires a large amount of memory and computational complexity. In contrast to the SOTA methods using a single frame as reference for prediction, our method utilizes multiple frames as reference to predict the current frame. While these SOTA methods take either a P-frame or a B-frame coding structure for video compression, in this paper, we propose a mode-selective framework with uni- and bi-directional prediction modes where the best mode is selected in a rate-distortion optimization sense and is signaled to the decoder side.
FIGURE 1. Overall architecture of our proposed DeepPVCnet. The notations of the parameters of the convolutional layers are denoted as: number of filters × filter height × filter width / the up- or down-scale factor, where ↑ and ↓ denote up- and down-scaling, respectively. s denotes the bilinear down-sampling scale index for the reference frames X_R and the current frame x_0. The 'Motion compensation' process is performed by Eq. 2. AE and AD represent an arithmetic encoder and an arithmetic decoder, respectively. Also, Q represents a quantizer for the latent representation y_0.
Deep Learning-Based Optical Flow Estimation and Frame Interpolation: Optical flow estimation and frame interpolation can be used for predictive video coding. There have been many studies related to optical flow estimation using deep neural networks. In [34], Ranjan et al. introduced SpyNet, which uses a spatial pyramid network and warps the second image to the first image with the initial optical flow. Also, the PWC-Net [39] was introduced with a learnable feature pyramid structure that uses the estimated current optical flow to warp the CNN feature maps of the second image. Their model outperformed all previous optical flow methods. Since they use feature pyramid structures, the optical flow estimation is more robust to large motion than other deep neural network-based optical flow methods. Therefore, we incorporate the pre-trained PWC-Net as the initial parameters of the optical flow estimation network in our DeepPVCnet. Recent CNN-based frame interpolation methods include convolution filtering-based [31], [32], phase-based [28], and optical flow-based [15], [30] approaches. Convolution filtering-based frame interpolation predicts frames between adjacent frames through a convolution filtering operation without using optical flow. Phase-based frame interpolation uses a CNN to reduce the reconstruction loss in the phase domain rather than in the image domain. Finally, optical flow-based interpolation generates the frames between two frames through a CNN after warping with the optical flow between the two frames. In this paper, we adopt an optical flow-based prediction scheme [15] for our DeepPVCnet.
III. PROPOSED METHOD
Our proposed DeepPVCnet for both uni- and bi-directional predictive coding is illustrated in Fig. 1. As depicted in Fig. 1, our DeepPVCnet uses the neighboring frames as reference frames to compress the current frame. The reference frames, denoted as X_R, for the current frame x_0 are composed of {x_{-2}, x_{-1}} and {x_{-2}, x_{-1}, x_{1}, x_{2}} for uni- and bi-directional coding, respectively.
FIGURE 2. Compression performance comparison for a single reference frame and multiple reference frames. The bitrates of the P-frame coding models are about 0.37 bpp and those of the B-frame coding models are about 0.59 bpp for the HEVC Class B dataset [38].
For X_R and x_0, bilinear down-sampling with a scale index s is performed for multi-scale motion estimation and compensation as follows:

X_{R,s} = Down(X_R, s), x^s_0 = Down(x_0, s),

where X_{R,s} and x^s_0 denote the down-scaled reference frames and the down-scaled current frame with scale index s, respectively, and Down(·, s) denotes a bilinear down-sampling process with a scale factor of 2^s (s = 0, 1, 2 in our experiments).
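As an illustrative sketch (the paper does not specify a framework; PyTorch is our assumption here), the multi-scale down-sampling step can be written as:

```python
import torch
import torch.nn.functional as F

def downscale(frames, s):
    """Bilinear down-sampling by a factor of 2**s; `frames` is an
    (N, C, H, W) tensor holding the reference frames or the current frame."""
    if s == 0:
        return frames
    return F.interpolate(frames, scale_factor=1.0 / (2 ** s),
                         mode="bilinear", align_corners=False)

# x_r_scales = [downscale(x_r, s) for s in (0, 1, 2)]  # X_{R,s}
# x_0_scales = [downscale(x_0, s) for s in (0, 1, 2)]  # x^s_0
```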
Each reference frame in X_{R,s} is concatenated to x^s_0 for estimating the optical flow between these frames using the fine-tuned PWC-Net [39]. The resulting optical flows F_{R,s} are composed of {f^s_{0→−2}, f^s_{0→−1}} and {f^s_{0→−2}, f^s_{0→−1}, f^s_{0→1}, f^s_{0→2}} for uni- and bi-directional coding, respectively. Then, the prediction frames P_{R,s} are calculated by a backward warping function w(·, ·) [15] with X_{R,s} and F_{R,s}. The resulting prediction frames P_{R,s} are composed of {p^s_{0←−2}, p^s_{0←−1}} and {p^s_{0←−2}, p^s_{0←−1}, p^s_{0←1}, p^s_{0←2}} for uni- and bi-directional coding, respectively. The residual frames R_{R,s} can be expressed as

r^s_{0←k} = x^s_0 − p^s_{0←k},

where k denotes the relative index of the reference frame for the current frame. The joint information of F_{R,0} and R_{R,0} for scale 0 is mapped to a latent representation y_0 through the encoder network g_a with five feature transformation (FT) layers. Similarly, the joint information of F_{R,s} and R_{R,s} for scales s = 1, 2 is concatenated into the feature maps of the same sizes in the encoder network g_a, as depicted in Fig. 1.
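A sketch of the backward warping and residual computation is given below; it follows the standard grid_sample-based warping formulation and is our illustration, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def backward_warp(ref, flow):
    """Warp a reference frame toward the current frame with a dense flow field.
    ref: (N, C, H, W); flow: (N, 2, H, W) giving per-pixel (dx, dy) offsets."""
    n, _, h, w = ref.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=ref.device),
                            torch.arange(w, device=ref.device), indexing="ij")
    grid_x = xs.unsqueeze(0) + flow[:, 0]          # absolute sampling positions
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    grid_x = 2.0 * grid_x / (w - 1) - 1.0          # normalize to [-1, 1]
    grid_y = 2.0 * grid_y / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (N, H, W, 2)
    return F.grid_sample(ref, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# For one reference index k at scale s:
# p_k = backward_warp(x_k_s, f_0_to_k_s)   # motion-compensated prediction
# r_k = x_0_s - p_k                        # residual frame
```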
After the quantization step, we can obtain the quantized latent representation ŷ_0. Then, the reconstructed optical flows F̂_R, the reconstructed residual frame r̂_0, and the synthesis coefficients α_i are estimated by the decoder network g_s with the entropy model of ŷ_0. The reconstructed frame x̃_0 is given by

x̃_0 = Σ_{k∈N_R} α_k · w(x̂_k, f̂_{0→k}) + r̂_0, (3)

where the set N_R of reference frame indices is composed of {−2, −1} and {−2, −1, 1, 2} for uni- and bi-directional predictive coding, respectively. Finally, the Enhancement Net outputs the enhanced frame x̂_0 from x̃_0. The details of the proposed network are described in Sections III.A-III.E.
A. MULTIPLE REFERENCE FRAMES
In general, the conventional video codecs like AVC/H.264 and HEVC/H.265 compress the current frame using multiple reference frames for each prediction direction. The usage of multiple reference frames allows occlusion problems to be handled effectively, thus resulting in accurate prediction for the current frame. In video compression, quantization errors are propagated as subsequent frames are compressed. By using multiple reference frames, such quantization error propagation can be alleviated in the prediction of the current frame, thus increasing the prediction accuracy and coding efficiency. As a compromise with the complexity of incorporating multiple reference frames, our DeepPVCnet utilizes two reference frames for uni-directional predictive coding (P2 in Fig. 2) and four reference frames for bi-directional predictive coding (B4 in Fig. 2), with two for forward and the other two for backward prediction, in contrast to the state-of-the-art methods [9], [11], [26], [46] of deep learning-based video compression. The effectiveness of using multiple reference frames is shown in Fig. 2.
B. COMPRESSING JOINT INFORMATION WITH FEATURE TRANSFORMATION (FT) LAYERS
We incorporate five feature transformation (FT) layers into the encoder side of the CNN-based auto-encoder structure g_a that jointly compresses the multi-scale motion information and residuals. To cope with the various amounts of motion in different video sequences, multi-scale motion estimation and compensation are performed at the encoder side. Then, the generated multi-scale motions and residuals are concatenated to the output feature maps of each convolution layer, which are then fed as input into the following FT layer of the encoder network. By doing so, the multi-scale joint information of motions and residuals can be effectively fused for better compression. The recent methods [9], [11], [26], [46] in deep video compression are designed to compress single-scale optical flow and residuals separately, but our proposed DeepPVCnet jointly compresses the multi-scale motion information and residuals in a compact form to improve coding efficiency, under the assumption that redundancy exists between the motion information and the residuals. Also, the FT layers alter the interim features via a learned transformation under the guidance of the multiple reference frames, which helps reduce the coupled redundancy between the multi-scale motion information and residuals. The details of the FT layers are depicted in Fig. 3-(a). As shown in Fig. 3-(a), the FT layers perform an affine transform of each element of the input feature map, as sketched below. The parameters of the affine transform are learned with respect to the reference frames via two convolutional layers.
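The following is a minimal sketch of such an FT layer under our own assumptions about channel sizes and kernel widths (the paper only specifies that two convolutional layers predict the affine parameters from the reference frames):

```python
import torch
import torch.nn as nn

class FeatureTransform(nn.Module):
    """Per-element affine modulation of an encoder feature map, with the
    scale (gamma) and shift (beta) predicted from the reference-frame
    guidance by two convolutional layers."""
    def __init__(self, guide_channels, feat_channels):
        super().__init__()
        self.to_gamma = nn.Conv2d(guide_channels, feat_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(guide_channels, feat_channels, 3, padding=1)

    def forward(self, features, guidance):
        # `guidance` is assumed to be resized to the spatial size of `features`.
        gamma = self.to_gamma(guidance)
        beta = self.to_beta(guidance)
        return gamma * features + beta
```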
C. TEMPORAL-CONTEXT-ADAPTIVE ENTROPY MODEL
We propose a temporal-context-adaptive entropy model for the quantized latent representation ŷ_0. Our proposed entropy model adopts the basic structure of [6] with the hyperprior z_0 and the hyper encoder-decoder network pair (h_a, h_s), as shown in Fig. 1. The output feature map of the hyper encoder-decoder network is the context information c_c of the current frame x_0. Since there exists a contextual similarity between x_0 and X_R, we propose a Context-Net to estimate the mean µ and standard deviation σ of a Gaussian model for ŷ_0:

ŷ_{0,i} ∼ N(µ_i, σ_i²), i.e., p_{ŷ_0|(ẑ_0, X_R)}(ŷ_0 | (ẑ_0, X_R)) = Π_i ( N(µ_i, σ_i²) * U(−1/2, 1/2) )(ŷ_{0,i}),

where our proposed Context-Net extracts the context information c_c of x_0 and the temporal context information c_t of X_R. It then concatenates c_c and c_t to obtain µ and σ with the same spatial size as ŷ_0. The Context-Net is illustrated in Fig. 3-(b). As shown in Fig. 3-(b), c_t is generated from the reference frames via four convolutional layers. Then, µ and σ are estimated from c_t and c_c via three convolutional layers. More details of the entropy coding model are given in Appendix B.
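To make the role of the estimated (µ, σ) concrete, the sketch below shows how the bit cost of the quantized latents can be evaluated under a conditional Gaussian model; this is a generic rate estimate in the spirit of the hyperprior formulation of [6], written in PyTorch for illustration and not the authors' implementation:

```python
import torch

def gaussian_rate_bits(y_hat, mu, sigma, eps=1e-9):
    """Estimated bits for quantized latents: the probability of each integer
    symbol is the Gaussian mass on [y - 0.5, y + 0.5], parameterized by the
    (mu, sigma) maps produced from the concatenated contexts (c_c, c_t)."""
    dist = torch.distributions.Normal(mu, sigma.clamp_min(1e-6))
    prob = dist.cdf(y_hat + 0.5) - dist.cdf(y_hat - 0.5)
    return (-torch.log2(prob.clamp_min(eps))).sum()
```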
D. MODE-SELECTIVE FRAMEWORK
The SOTA deep learning-based video compression methods [4], [9], [11], [25], [26], [36], [46] tend to have the limitation that all frames are compressed in either a P-frame or a B-frame coding structure. Fig. 4 depicts a GOP (Group of Pictures) structure for our mode-selective framework with uni- or bi-directional predictions, in a similar way to traditional video codecs. In our mode-selective framework, each frame can be encoded with an intra mode, a uni-directional prediction mode or a bi-directional prediction mode. The uni-directional prediction mode has two sub-modes: M^f_uni for forward prediction and M^b_uni for backward prediction, and the bi-directional prediction mode is denoted as M_bi. For the GOP structure in Fig. 4, I_1 and I_13 are encoded in the intra mode using a pre-trained image compression network [21], while all other frames between I_1 and I_13 are encoded in either M^f_uni, M^b_uni or M_bi. It should be noted in Fig. 4 that the frame I_4 is encoded by referencing I_1, which is encoded a priori. Next, I_7 is compressed, followed by I_10. Depending on the availability of neighboring encoded frames, one or two encoded frames are referenced to encode each frame between I_1 and I_13, as shown in Fig. 4. I_3 and I_6 may use the same reference frames as I_2 and I_5, respectively. I_8 and I_9 have the same referencing structures as I_6 and I_5, respectively. Also, I_11 and I_12 have the same referencing structures as I_3 and I_2, respectively. The details of the coding order and the selection rule of the reference frames for our proposed method in the test phase are given in Table 1. Note that for frames for which multiple reference frames are not available, duplicated reference frames are used as the inputs of DeepPVCnet in our experiments.
The selected mode information is sent to the decoder side as two-bit data, which is a negligible bit amount. Based on this mode-selective framework, we train a DeepPVCnet for each prediction mode and select the best mode m*_n for frame I_n as

m*_n = argmin_{m ∈ {M^f_uni, M^b_uni, M_bi}} ( R_{n,m} + λ · d(I_n, Î_{n,m}) ), (5)

where R_{n,m} and Î_{n,m} denote the bitrate and the reconstructed frame of I_n with mode m, respectively.
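A simple sketch of this mode decision is shown below; the mode names and the example numbers are hypothetical, and the cost follows the R + λ·D form used in the training loss:

```python
def select_mode(candidates, lam):
    """Rate-distortion mode decision: `candidates` maps a mode name
    (e.g. 'uni_fwd', 'uni_bwd', 'bi') to a (bits, distortion) pair measured
    after encoding the frame with that mode; the mode minimizing
    R + lambda * D is chosen and its two-bit index is signalled."""
    return min(candidates, key=lambda m: candidates[m][0] + lam * candidates[m][1])

# Example with made-up numbers:
# best = select_mode({"uni_fwd": (5200, 0.012), "uni_bwd": (5100, 0.013),
#                     "bi": (6400, 0.008)}, lam=2e5)
```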
E. ENHANCEMENT NET
To further improve the quality of the reconstructed frames, the Enhancement Net shown in Fig. 1 is incorporated into the decoder side of our DeepPVCnet to serve a role similar to that of an in-loop filter in the traditional video codecs. We utilize the residual dense network (RDN) [50], which consists of five residual dense blocks (RDB) with three convolution filters per block, for our Enhancement Net, which is described in detail in Appendix C.
IV. EXPERIMENTS
A. EXPERIMENTAL CONDITIONS
To show the effectiveness of our DeepPVCnet, extensive experiments are carried out to measure the coding efficiency, and our method is compared with other video coding methods. For intra coding, we used a pre-trained CNN-based image compression model in [21]. For uni- and bi-directional predictive coding, we train our DeepPVCnet models for different bitrate ranges and test the trained models for each bitrate range. Note that we set the GOP size G to 12 for all experiments. Datasets: We train the DeepPVCnet with the UGC dataset [1]. For pre-processing, we excluded HDR, vertical, and interlaced videos, as well as videos smaller than 720p, from the UGC dataset. The number of frames used for training is about 466K. For evaluation, we test the DeepPVCnet on raw video datasets, namely Ultra Video Group (UVG) [2] and the HEVC Standard Test Sequences (Class B, C, D and E) [38]. The UVG dataset contains seven videos of size 1920 × 1080. The videos in the HEVC dataset have different sizes depending on their class types.
Implementation: The proposed DeepPVCnet is trained in an end-to-end manner based on the rate-distortion loss L:

L = E_{x_0∼p_{x_0}}[ −log_2 p_{ŷ_0|(ẑ_0, X_R)}(ŷ_0 | (ẑ_0, X_R)) − log_2 p_{ẑ_0}(ẑ_0) + λ · d(x_0, x̂_0) ], (6)

where λ controls the trade-off between the rate and distortion terms, and d is the distortion measure, e.g., (1 − MS-SSIM). In Eq. 6, the first term is the conditional entropy of ŷ_0 given ẑ_0 and X_R, and the second term is the entropy of ẑ_0. For several bitrate ranges, we train the DeepPVCnet separately for different values of λ, where the number of channels of the convolution filters is N except for the convolution layer that has M filters to output the latent representation. We set N = 128 and M = 256 for the three lower bitrates and N = 192 and M = 384 for the two higher bitrates. Our DeepPVCnet is trained from scratch with the fixed PWC-Net [39] for 1M iterations using ADAM [19] with an initial learning rate of 0.0001. Then, we fine-tune the PWC-Net with the other components of our DeepPVCnet for an additional 0.5M iterations. In addition, we used a batch size of 8 and patches of 256 × 256 randomly cropped from the 466K training frames extracted from the UGC video dataset.
Evaluation: We measure both distortions and bitrates simultaneously. The multi-scale structural similarity index (MS-SSIM) [44] in the RGB color space, which is known to be a better metric for subjective image quality than PSNR, is used to measure the distortions in our experiments. We use bits per pixel (bpp) to measure the bitrates.
B. EXPERIMENTAL RESULTS
The DeepPVCnet is compared with the conventional video codecs such as AVC/H.264 and HEVC/H.265, as well as the deep learning-based video compression methods in [4], [13], [25], [26], [46], [47]. For fair comparison, the GOP size of the conventional video codecs is fixed to 12. We used the ffmpeg coding tool [38] and x265 [3] for H.264 and H.265, respectively. We use several settings of the conventional video codecs, the details of which are described in Appendix A. Fig. 5 shows the rate-distortion (R-D) curves produced by our DeepPVCnet, H.264 and H.265, Wu's method [46], DVC [26], Habibian's method [13], M-LVC [25], Yang's method [47] and Agustsson's method [4] for the UVG and HEVC datasets (Class B and E). It can be seen in Fig. 5 that our DeepPVCnet outperforms all the methods over most of the bitrate ranges, while the other SOTA methods in [13], [25], [26], [46], [47] show limited results only at low bitrate ranges. In particular, our method shows significantly better compression performance than the other methods in the medium and high bitrate ranges. More experimental results for the HEVC datasets (Class C and D) and analysis are provided in Appendix D.
FIGURE 5. MS-SSIM performance comparison of our DeepPVCnet, H.264 [45], H.265 [38], and CNN-based SOTA methods [4], [13], [25], [26], [46], [47] for the UVG dataset and the HEVC Class B and E datasets.
C. ABLATION STUDY
For our DeepPVCnet, an ablation study is performed on the key components: the multi-scale motion estimation and compensation, the fine-tuned PWC-Net, the multiple reference frames, the temporal-context-adaptive entropy model using the multi-frame hypothesis, the mode-selective framework, the FT layers and the Enhancement Net. In order to demonstrate the contribution of each component, we performed experiments excluding the key components one by one from the entire structure of the DeepPVCnet. Fig. 7 shows the resulting MS-SSIM performance of the ablation study.
Multi-Scale Motion Estimation and Compensation: In order to cope effectively with the various motions of different video sequences, we perform motion estimation and compensation based on a multi-scale structure. As can be seen in Fig. 7, the multi-scale motion estimation and compensation improves the coding gain compared to the single-scale case.
Fine-Tuned PWC-Net: In [39], the pre-trained PWC-Net was trained only to obtain highly accurate optical flows between frames. However, for the video compression problem, the motion estimation network must be trained not only to increase the accuracy of motion estimation, but also so that the generated motion can be compressed with high coding efficiency. Therefore, we fine-tuned the PWC-Net to be optimized for video compression in the rate-distortion optimization sense. As shown in Fig. 7, our DeepPVCnet with the fine-tuned PWC-Net outperforms that with the pre-trained PWC-Net over the whole bitrate range.
Multiple Reference Frames: As shown in Figs. 2 and 7, the multiple reference frames contribute to high coding efficiency. This gain is achieved by effectively dealing with object occlusions, thus reducing error propagation. In particular, the multi-frame hypothesis shows better performance in the high bitrate range because our DeepPVCnet can fully utilize neighboring information from multiple reference frames in removing temporal redundancy.
FIGURE 7. Ablation study on the effectiveness of (a) the multi-scale motion estimation and compensation, (b) the fine-tuned PWC-Net, (c) multiple reference frames, (d) the mode-selective framework, (e) the feature transformation layers, (f) the temporal-context-adaptive entropy model, and (g) the Enhancement Net for the HEVC Class B dataset. The experiments exclude these components one by one from the DeepPVCnet.
Mode-Selective Framework: The mode-selective framework improves the coding gain especially in the low and mid bitrate ranges, as depicted in Fig. 7. Poor prediction in a low bitrate range can be compensated by selectively performing the prediction with the best prediction mode chosen by Eq. 5 in our proposed mode-selective framework. As shown in Fig. 7, our proposed mode-selective framework is a key component for achieving high coding efficiency, along with the multiple reference frames.
Temporal-Context-Adaptive Entropy Model: As shown in Fig. 7, our DeepPVCnet with the temporal-context-adaptive entropy model achieved a coding efficiency improvement by reducing the redundancy of the latent representation with the temporal context information of the reference frames.
In addition, the proposed entropy model has a structural advantage that it can be computed in parallel in contrast to the autoregressive-based video compression methods [11], [13].
Other Components: The FT layers and the Enhancement Net have few parameters compared to the entire network. Nevertheless, a slightly improved coding gain has been achieved. In particular, the FT layers allow the encoder to compress the joint information effectively. As shown in Table 3, the total numbers of parameters of our DeepPVCnet are about 25.5M and 39.6M for the low and high bitrate models, respectively. For testing, the runtime of our DeepPVCnet was measured on a platform with an Intel i9-9900X CPU, 128GB RAM and a single Titan RTX GPU. For sequences of sizes 416×240, 832×480, 1280×720 and 1920×1080, the encoding and decoding speeds of our DeepPVCnet are (5.9 fps, 44.2 fps), (3.9 fps, 15.0 fps), (2.2 fps, 6.7 fps) and (1.1 fps, 3.2 fps), respectively. In particular, the decoding speed is considerably faster than that of other autoregressive entropy coding model methods [11], [13]. This is because parallel processing is not possible on the decoder side for these methods [11], [13].
E. VISUAL COMPARISONS
In this section, we visualize the interim results produced by our DeepPVCnet. Then, we visualize the pre-trained and fine-tuned optical flows from the PWC-Net. Also, some reconstructed frames by H.264, H.265 and our DeepPVCnet are presented for subjective comparison.
Visualization of Feature Maps and Reconstructed Frames: Fig. 6 visualizes the optical flow maps, the output residual frame, a reconstructed frame, and an enhanced frame for an input frame of the Beauty sequence, obtained via the pipeline of our DeepPVCnet. The optical flow F_{0→−2} in Fig. 6 is an input to the encoder network of the DeepPVCnet, which is obtained from the PWC-Net. F̂_{0→−2} is the output optical flow of the decoder network, which is used to synthesize the current input frame for reconstruction. The output residual frame r̂_0 is the difference between the output x̃_0 and the blended output (x̃_0 − r̂_0 in Eq. 3) of warped frames, as shown in Fig. 1. It can be seen in Fig. 6 that the optical flows F_{0→−2} and F̂_{0→−2} look significantly different because F̂_{0→−2} is generated to improve compression efficiency. As a result, F̂_{0→−2} includes more texture parts than F_{0→−2}. Also, the output residual frame r̂_0 contains texture parts, which makes it possible to reconstruct the areas that are difficult to recover by optical flows alone. Finally, the enhanced frame x̂_0 is generated from the reconstructed frame x̃_0 by the Enhancement Net and is visually much closer to the input frame x_0.
FIGURE 8. Visual comparison of optical flows from the pre-trained PWC-Net and the fine-tuned PWC-Net. The pre-trained PWC-Net generates largely smooth optical flows because it is trained only to obtain motion between frames accurately, not to make the frame compress well. Since the fine-tuned PWC-Net is trained so that the frame is compressed well, it also generates optical flows containing texture areas.
Visualization of Pre-Trained and Fine-Tuned Optical Flows: Fig. 8 presents a visual comparison of the motion information from the pre-trained PWC-Net and the fine-tuned PWC-Net. As shown in Fig. 8, the motion information from the pre-trained PWC-Net contains large fields of smooth motion, since the pre-trained PWC-Net does not consider compression efficiency and focuses only on the motion between frames; it is also trained with a smooth-motion constraint. In contrast, the motion information from the fine-tuned PWC-Net contains both smooth and textured motion fields, since it extracts motion information in a rate-distortion sense. Therefore, the optical flow with texture parts generated by the fine-tuned PWC-Net is more suitable for video compression than the smooth optical flow generated by the pre-trained PWC-Net.
Subjective Visual Comparisons: Fig. 9 shows some cropped regions of decoded frames of the HoneyBee and YachtRide sequences by H.264, H.265 and our method for visual comparison. Our method yields decoded frames with higher contrast and fewer artifacts than H.264 and H.265. In the decoded results by H.264 and H.265, the wing and leg of the honey bee are poorly reconstructed, whereas our DeepPVCnet reconstructs them with higher contrast and fewer artifacts. Similar results in a low bitrate range are observed for the BQTerrace and Cactus sequences, as shown in Fig. 10. Likewise, Fig. 11 shows some cropped regions of decoded frames of the Bosphorus and Kimono sequences by H.264, H.265 and our method. As shown in Fig. 11, while H.264 and H.265 produce decoded regions with blocking artifacts in a low bitrate range, our method yields decoded regions of higher fidelity without such artifacts.
V. CONCLUSION
We propose an end-to-end deep predictive video compression network, called DeepPVCnet, based on a multi-frame hypothesis with a multi-scale structure and a temporal-context-adaptive entropy model. Our DeepPVCnet incorporates a mode-selective framework with uni- and bi-directional predictive coding in a rate-distortion optimization sense, by jointly compressing optical flows and residual data that are generated from the multi-scale structure via the FT layers on the encoder side. In addition, our DeepPVCnet with the temporal-context-adaptive entropy model has a much faster decoding speed because it can be performed in parallel, unlike the recent video compression methods [11], [13] that use autoregressive entropy coding models. Based on these advanced components in combination, DeepPVCnet shows better compression performance than the existing standard video compression codecs (AVC/H.264 and HEVC/H.265) and recent SOTA methods in terms of MS-SSIM. In future work, our DeepPVCnet will be extended to learn a fully automatic selection of the best prediction modes during training.
APPENDIX B THE IMPLEMENTATION OF OUR PROPOSED ENTROPY CODING MODEL
For more details of the implementation of our proposed entropy coding model, we follow the same concept and notations in the CNN-based image compression methods [6], [21]. In the main paper, we provided the training loss L as the rate-distortion optimization problem for video compression.
Since the quantization of the latent representation is discrete, we substitute additive uniform noise for the quantization process during training. Then the approximated latent representations ỹ_0 and z̃_0 are used instead of the quantized latent representations ŷ_0 and ẑ_0, respectively, in the training loss L as follows:
\[ L \approx \mathbb{E}_{x_0 \sim p_{x_0}} \, \mathbb{E}_{\tilde{y}_0, \tilde{z}_0 \sim q} \big[ -\log_2 p_{\tilde{y}_0 \mid (\tilde{z}_0, X_R)}\big(\tilde{y}_0 \mid (\tilde{z}_0, X_R)\big) - \log_2 p_{\tilde{z}_0}(\tilde{z}_0) + \lambda \cdot d(x_0, \hat{x}_0) \big], \qquad (7) \]
where x_0, x̂_0 and X_R denote the current frame to be encoded, the reconstructed frame and the reference frames for x_0, respectively. The joint factorized posterior with the additive uniform noise for the quantization process, as in [6], [21], can be expressed as
\[ q(\tilde{y}_0, \tilde{z}_0 \mid x_0, \phi_g, \phi_h) = \prod_i \mathcal{U}\big(\tilde{y}_{0,i} \,\big|\, y_{0,i} - \tfrac{1}{2},\; y_{0,i} + \tfrac{1}{2}\big) \cdot \prod_i \mathcal{U}\big(\tilde{z}_{0,i} \,\big|\, z_{0,i} - \tfrac{1}{2},\; z_{0,i} + \tfrac{1}{2}\big), \]
with y_0 = g_a(x_0; φ_g) and z_0 = h_a(y_0; φ_h), where U, φ_g and φ_h denote a uniform distribution and the parameters of g_a and h_a, respectively. Our proposed entropy coding model approximates the required bits for ŷ_0 and ẑ_0 as in Eq. 7. The entropy coding model for ŷ_0 is based on a Gaussian model with mean μ_i and standard deviation σ_i. Our proposed Context-Net C and the hyper encoder-decoder network pair (h_a, h_s), together with the multiple reference frames X_R, estimate the values of μ_i and σ_i. The Context-Net C generates the temporal context information c_t from X_R, and the hyper encoder-decoder network generates the context information c_c from y_0. Then the Context-Net concatenates c_t and c_c to estimate the values of μ_i and σ_i (in the corresponding expression, θ_c and θ_h denote the parameters of the Context-Net C and the hyper decoder network h_s). Note that our proposed entropy coding model can estimate μ and σ in parallel during the decoding process, since the entropy coding model with the multiple reference frames is not autoregressive. We utilize the same type of entropy coding model for ẑ_0, which follows a zero-mean Gaussian model with standard deviation σ as in [6]. Since ẑ_0 has little effect on the total bit-rate of the current frame coding, we use a simpler entropy coding model for ẑ_0 than for ŷ_0:
\[ p_{\hat{z}_0}(\hat{z}_0) = \prod_i \big( \mathcal{N}(0, \sigma_i^2) * \mathcal{U}(-\tfrac{1}{2}, \tfrac{1}{2}) \big)(\hat{z}_{0,i}). \]
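The two ingredients of this rate term — the additive-uniform-noise surrogate for quantization and the Gaussian-with-unit-bin likelihood — can be sketched compactly. The snippet below is illustrative only, assumes PyTorch, and takes hypothetical tensors y (latent) and mu, sigma (entropy-model outputs) as inputs; it is not the authors' implementation:

```python
# Minimal sketch of the training-time rate estimate, assuming PyTorch.
import torch

def noisy_quantize(y):
    """Additive uniform noise in [-0.5, 0.5) as a differentiable surrogate for rounding."""
    return y + torch.rand_like(y) - 0.5

def gaussian_rate_bits(y_tilde, mu, sigma, eps=1e-9):
    """Estimated bits: -log2 of the Gaussian probability mass on the unit-width bin."""
    dist = torch.distributions.Normal(mu, sigma)
    p = dist.cdf(y_tilde + 0.5) - dist.cdf(y_tilde - 0.5)
    return -torch.log2(p.clamp(min=eps)).sum()
```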
APPENDIX C THE ARCHITECTURE OF ENHANCEMENT NET
In the main paper, we described the overall structure of our DeepPVCnet, which consists of an encoder-decoder network pair (g_a, g_s) with the feature transformation layer, a hyper encoder-decoder network pair (h_a, h_s), the pre-trained PWC-Net [39], a Context-Net and an Enhancement Net. The Enhancement Net is incorporated into the decoder side of our DeepPVCnet to enhance the image quality of the reconstructed frame x̂_0. Fig. 12 shows the details of the Enhancement Net, which is based on the residual dense network (RDN) [50]. As depicted in Fig. 12, the Enhancement Net consists of five residual dense blocks (RDB) with three convolution filters per block.
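For readers unfamiliar with the RDN building block, a minimal sketch of an Enhancement Net of this shape is given below. It is an assumption-laden illustration, not the paper's network: the channel width (64), growth rate, 3×3 kernels, ReLU activations and residual connections are guesses for illustration; Fig. 12 of the paper defines the actual layout.

```python
# Minimal RDN-style Enhancement Net sketch, assuming PyTorch.
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: three densely connected convolutions plus local fusion."""
    def __init__(self, ch=64, growth=32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1) for i in range(3)
        )
        self.fuse = nn.Conv2d(ch + 3 * growth, ch, 1)  # 1x1 local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))   # local residual learning

class EnhancementNet(nn.Module):
    """Five RDBs between shallow feature extraction and reconstruction convolutions."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[RDB(ch) for _ in range(5)])
        self.tail = nn.Conv2d(ch, in_ch, 3, padding=1)

    def forward(self, x):
        return x + self.tail(self.blocks(self.head(x)))  # global residual to the input
```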
APPENDIX D THE EXPERIMENTAL RESULTS FOR HEVC CLASS C AND D IN MS-SSIM
In the main paper, we showed the results of the rate-distortion (R-D) curves for our DeepPVCnet, H.264, H.265, Wu's method [46], DVC [26] and Habibian's method [13] with the UVG [2] and HEVC datasets [38] (Class B and E), which consist of high-resolution sequences. Additionally, Fig. 13 shows the results of the R-D curves in terms of MS-SSIM for the HEVC datasets (Class C and D), which consist of low-resolution sequences. Our DeepPVCnet outperforms H.264, H.265 and DVC for most bitrate ranges in terms of MS-SSIM. Note that the experimental results for the HEVC datasets are provided only by DVC [26] among the recent deep video compression methods. | 8,920.8 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Quantum Lagrangian of the Horava theory and its nonlocalities
We perform the BFV quantization of the 2+1 projectable and the 3+1 nonprojectable versions of the Horava theory. This is a Hamiltonian formalism, and noncanonical gauges can be used with it. In the projectable case, we show that the integration over the canonical momenta reproduces the quantum Lagrangian known from the proof of renormalization of Barvinsky et al. This quantum Lagrangian is nonlocal; its nonlocality originally arose as a consequence of obtaining regular propagators. The matching of the BFV quantization with the quantum Lagrangian reinforces the program of quantization of the Horava theory. We introduce a local gauge-fixing condition, hence a local Hamiltonian, that leads to the nonlocality of the Lagrangian after the integration. For the case of the nonprojectable theory, this procedure allows us to obtain the complete (nonlocal) quantum Lagrangian that takes into account the second-class constraints. We compare with the integration in general relativity, making clear the relationship between the underlying anisotropic symmetry of the Horava theory and the nonlocality of its quantum Lagrangian.
Introduction
Several studies have been devoted to the consistent quantization of the Hořava theory [1]. Some of the analyses performed under the framework of quantum field theory can be found in Refs. [2,3,4,5,6,7,8,9,10,11,12,13,14]. Other approaches to quantization, such as causal dynamical triangulations and loop quantum gravity, have been pursued, for example in Refs. [15,16,17,18,19]. A fundamental advance is the renormalizability proof of the projectable version presented in Ref. [2]. The difference between the projectable and the nonprojectable versions of the Hořava theory is that in the former the lapse function is restricted to be a function only of time, a condition that can be imposed consistently in the Hořava theory, whereas in the latter it can be a general function of time and space. An interesting feature of the proof of renormalizability is the introduction of nonlocal gauge-fixing conditions, which leads to a nonlocal quantum Lagrangian. The nonlocal gauges were motivated by the goal of obtaining regular propagators for all quantum modes, such that renormalizability can be achieved in a way similar to the case of Lorentz-violating gauge theories [22,23,24]. The condition of regularity implies that the propagators have no divergences in space for each fixed time, and vice versa. For the case of the Hořava theory, the propagators acquire an anisotropic, higher-order dependence on the momentum.
Due to the emphasis on the symmetry, the quantization of gauge field theories is usually performed in the Lagrangian formalism, rather than in the Hamiltonian formalism. The standard procedure for fixing the gauge is the Faddeev-Popov method [20], together with its associated Becchi-Rouet-Stora-Tyutin (BRST) symmetry [21]. Nevertheless, the quantization of the Hořava theory using the Hamiltonian formalism deserves to be considered. In particular, the quantization of the nonprojectable case is a delicate issue, since it is a theory with second-class constraints. The analogue of the Hamiltonian constraint of general relativity acquires a second-class behavior in the nonprojectable Hořava theory, which can be related to the reduction of the gauge symmetry. The Hamiltonian formalism provides a natural framework for the quantization of theories with second-class constraints. Indeed, the contribution of these constraints to the measure is defined in the phase space [25]. Analyses of the Hamiltonian formulation and the dynamics of the degrees of freedom of the Hořava theory can be found in Refs. [26,27,28,29,30,31].
The nonlocal gauge-fixing conditions introduced in the projectable case are noncanonical gauges, in the sense that they involve a Lagrange multiplier. If one wants to use this kind of gauge in the Hamiltonian formalism, then an extension of the phase space is required. Motivated by this, two of us presented the Batalin-Fradkin-Vilkovisky (BFV) quantization of the 2 + 1 nonprojectable Hořava theory in Ref. [32]. The BFV formalism provides a quite general framework for the quantization of systems with constraints, with the particularity that first-class constraints are not imposed explicitly and their Lagrange multipliers are promoted to be part of the canonical variables. The BFV formalism was first presented in Ref. [33] as a way to introduce noncanonical gauge-fixing conditions in the Hamiltonian formalism. This extension allows us to introduce relativistic gauges in the phase space.
2 Projectable Hořava theory
Classical theory
The Hořava theory [1], both in the projectable and nonprojectable cases, is based on a given foliation that has an absolute physical meaning. The aim is to obtain an anisotropic scaling in the ultraviolet that favors the renormalizability of the theory, where a parameter z measures the degree of anisotropy; the dimensions of space and time are assigned accordingly. The order z is fixed by the criterion of power-counting renormalizability, which yields z = d, where d is the spatial dimension of the foliation. The Arnowitt-Deser-Misner variables N, N_i and g_ij are used to describe the gravitational dynamics on the foliation. The allowed coordinate transformations on the foliation lead to the gauge symmetry of the foliation-preserving diffeomorphisms (strictly, the spatial diffeomorphisms are the gauge transformations). The condition that defines the projectable version is that the lapse function is restricted to be a function only of time, N = N(t), a condition that is preserved by the transformation (2.3). In this section we summarize the canonical formulation of the projectable case, dealing with an arbitrary number d of spatial dimensions. The Hamiltonian analysis of the projectable case, taking the infrared effective action, was done in Ref. [26]. Further analyses, with different boundary conditions, can be found in Ref. [28]. The quantization of the same model under the scheme of loop quantum gravity has been studied in Ref. [19]. The Lagrangian of the projectable theory is written in terms of the extrinsic curvature K_ij of the foliation and a potential V[g_ij], which is built from invariants of the spatial curvature and their derivatives, up to order 2z.
In the Hamiltonian formulation the canonical pair is (g_ij, π^ij), whereas N(t) and N_i(t, x) enter as Lagrange multipliers. Since N(t) is a function only of time, there is an associated global constraint, given in terms of a spatial integral. Throughout this paper we assume that λ does not take the critical value λ = 1/d. This global constraint does not eliminate a complete functional degree of freedom. The local constraint of the theory is the momentum constraint H_i, and the primary Hamiltonian is given by the sum of these constraints smeared with the multipliers N and N^i. Since N is a function of time in the projectable theory, we take advantage of the symmetry of reparametrizing the time, Eqs. (2.2) and (2.3), to set N = 1. With this setting the primary Hamiltonian density is equivalent to H. Due to their importance in the BFV quantization, and since the Hamiltonian is equivalent to H, we quote the two brackets between constraints, Eqs. (2.11) and (2.12), in which ρ is a test function only of time whereas ε_k and η_k are test functions of time and space.
BFV quantization
The initial consideration in the BFV formalism is that the constrained system under quantization must be involutive. This means that, given a Hamiltonian H_0 and a set of functions G_a, the following relations are satisfied:
\[ \{ G_a , G_b \} = U^{c}_{ab}\, G_c \,, \qquad (2.13) \]
\[ \{ H_0 , G_a \} = V^{b}_{a}\, G_b \,. \qquad (2.14) \]
To avoid writing huge expressions, we use a simplified notation for brackets: we write densities instead of spatial integrals, i.e., {A, B} stands for {∫d^d x A, ∫d^d y B}.
The first-class constraints are part of the definition of the G_a functions. The other part is given by the canonical momenta conjugate to the Lagrange multipliers of the first-class constraints, since these multipliers are promoted to canonical variables in the BFV extension of the phase space. The extended phase space is completed with canonical pairs of fermionic ghosts (η^a, P_a), one pair for each function G_a.
To apply this formalism to the projectable Hořava theory, we identify the momentum constraint H_i as the only first-class constraint, the shift vector N^i being its Lagrange multiplier. We denote by π_i the canonical momentum conjugate to N^i. Thus, the functions are G_a = (H_i, π_i). Since π_i commutes with itself and with H_i, the algebra (2.13) reduces to the algebra of H_i. This corresponds to the algebra of spatial diffeomorphisms, as shown in (2.11), and we take the definition of U^k_{ij} from it; U^c_{ab} = 0 for a, b, c > i. The primary Hamiltonian is identified in (2.10), hence the bracket (2.14) corresponds to (2.12), such that V^b_a = 0. By incorporating the ghost fields, the full BFV phase space of the projectable Hořava theory is given by the canonical pairs (g_ij, π^ij), (N^i, π_i) and (η^a, P_a). The ghosts can be split into the two sets (η^i_1, P^1_i), (η^i_2, P^2_i). The gauge-fixing condition is incorporated in the path integral by means of a fermionic function Ψ, which is a given functional on the extended phase space. Thus, the BFV path integral of the projectable Hořava theory is given by (2.16). In this formalism the ghosts eliminate the unphysical quantum degrees of freedom that should be eliminated by the first-class constraints. Indeed, in d spatial dimensions the canonical pairs (g_ij, π^ij), (N^i, π_i) amount to d(d + 3) degrees of freedom, and the ghosts (η^i_1, P^1_i), (η^i_2, P^2_i) add 4d degrees. After subtracting, one gets d(d − 1) physical degrees of freedom in the phase space of the quantum theory. In d = 2 this yields 2 degrees of freedom, which represent the scalar mode of the 2 + 1 projectable theory in canonical variables. In d = 3 the degrees of freedom are six, which are the two tensorial modes plus the extra scalar mode. Since the Hořava theory has anisotropic scaling, it is important to keep track of the dimensions of the several fields, listed in (2.17). In the general BFV formalism, the gauge-fixed quantum Hamiltonian is defined by Eq. (2.18). The Poisson bracket is extended to include fermionic variables, with R and L denoting right and left derivatives, and n_A being 0 or 1 depending on whether A is a boson or a fermion. Ω is the generator of the BRST symmetry. According to the extension of the BFV formalism presented in Ref. [35], Ω and H_1 are defined in terms of expansions in the ghost fields, where s represents the rank of the theory. The coefficient functions of first order in P_a are determined by the structure functions; the rest of the coefficients, up to the order s of the theory, are obtained by recurrence relations, starting from the first-order ones [35]. An essential condition of the BFV formalism is that Ω and H_1 must satisfy the conditions (2.23). The first one is a nontrivial condition since Ω is a fermionic variable. These conditions support the BRST symmetry of the quantum theory. The projectable Hořava theory is of first order, that is, Ω ends at first order in the ghosts, whereas H_1 is of zeroth order. The conditions (2.23) are satisfied as follows. In the bracket of Ω with itself, the first two brackets are equal, hence they cancel each other, and the last bracket is proportional to the structure η^j_1 η^m_1 η^n_1 U^k_{ij} U^i_{mn}, which is zero by the Jacobi identity. Therefore {Ω, Ω} = 0. Next, the bracket of H_1 with Ω vanishes, the last step following from (2.12). Therefore, we obtain the BFV gauge-fixed Hamiltonian of the projectable Hořava theory. According to the original BFV formulation, Ψ can adopt a form suitable for relativistic gauges.
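As a quick arithmetic check of the counting just quoted: the pair (g_ij, π^ij) contributes d(d + 1) phase-space degrees, (N^i, π_i) contributes 2d, and the two ghost pairs remove 4d, so
\[ d(d+1) + 2d - 4d = d(d-1) \;\;\Longrightarrow\;\; 2 \ \text{for} \ d = 2, \qquad 6 \ \text{for} \ d = 3 , \]
in agreement with the scalar mode in 2 + 1 dimensions and the two tensorial modes plus the extra scalar mode in 3 + 1 dimensions.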
It turns out that this relativistic-gauge form of Ψ is also suitable for the anisotropic symmetry of the Hořava theory. First, we deal with gauge-fixing conditions of the general structure (2.30), in which the phase-space functional χ^i is the part of the gauge-fixing condition that can be chosen; the specific BFV fermionic gauge-fixing function is then given by (2.31). With this choice the gauge-fixed Hamiltonian can be written down explicitly. Throughout this paper we assume that the gauge-fixing condition χ^l does not depend on the ghost fields, which simplifies the Hamiltonian further. Therefore, the BFV path integral for the projectable Hořava theory in the gauge (2.30)-(2.31) takes the form (2.34). The generator of the BRST symmetry Ω acts on the canonical fields by means of the canonical transformation φ̃ = φ + {φ, Ω} ε, where ε is the fermionic parameter of the transformation; the explicit transformations of the fields follow from this bracket.
Quantum Lagrangian
We continue to work in an arbitrary spatial dimension d; eventually we specialize to the d = 2 case. For the BFV quantization we have defined the structure of the gauge-fixing condition (2.30), which has the part χ^i unspecified. To arrive at the quantum Lagrangian, we impose conditions on the functional form of χ^i that allow us to perform the integration over the several canonical momenta. These conditions allow us to make a connection with the same gauge fixing used in the proof of renormalizability of the projectable theory.
We start with the integration over the momentum π_i. The term −π_k χ^k in the action of (2.34) suggests demanding that χ^i have a linear dependence on π_i, leading to a quadratic term in π_i in the Hamiltonian; a higher-order dependence on this variable could lead to a violation of unitarity, in contradiction with the spirit of the Hořava theory and its anisotropic symmetry. Therefore, we assume a gauge-fixing condition of the form (2.36), schematically χ^k = D^{kl} π_l + Γ^k, where Γ^k is a functional that may depend only on g_ij and N^k. The restriction that Γ^k does not depend on the momentum π^ij allows us to perform the integration straightforwardly. According to the anisotropic dimensional assignments (2.17), the gauge-fixing condition must satisfy [χ^k] = 2z − 1, which fixes the dimension of the operator D^{ij}. Below we give explicitly the operator D^{ij} and the gauge-fixing factor Γ^i in the perturbative framework. Nevertheless, many operations can be carried out without recurring to perturbations and for general Γ^i. Hence we stay for a while with nonperturbative variables, using only the fact that D^{ij} is a flat operator (it does not depend on any field variable). After setting the form (2.36) for the gauge-fixing condition, we may complete the square involving π_i in the last three terms of the action of Eq. (2.34) and then integrate over the shifted variable. Since D^{ij} is a local operator, its inverse D^{-1}_{kl}, which arises from the integration, is a nonlocal operator. Now we move to the ghost sector. A change of notation, Eq. (2.40), is useful for the final quantum Lagrangian. We may perform the integration over the Grassmann variables P_i and P̄_i, which appear in the action (2.39): the bilinear −P̄_k P^k can be completed, such that the Gaussian integration over these Grassmann variables can be performed (without consequences for the measure). After these steps of integration, the path integral takes the form (2.42). Now we focus on the integration over π^ij. A significant part of the computations can be continued on nonperturbative grounds; since this is interesting in its own right, in appendix A we show this nonperturbative integration for the case of the projectable theory. In what follows we adopt a perturbative approach and consider perturbations around the analogue of the Minkowski spacetime. For the d = 2-dimensional case we take the operator D^{ij} as in (2.43), where κ is an arbitrary constant; its inverse D^{-1}_{ij} is a nonlocal operator of dimension −2 in d = 2. The operator D^{-1}_{ij} (2.43) was introduced in the gauge-fixing condition used in Ref. [2], with the aim of introducing the nonlocality that finally leads to regular propagators. This version of the operator D^{ij} for the d = 2 case arises in several steps of the integration for arbitrary dimension d, with a fixed value of κ; for this reason we denote these special cases as D^{ij}_1 and D^{ij}_2, Eqs. (2.44) and (2.45). The inverse of D^{ij}_2 is also required. Note that the operator D^{ij}_2 cannot be extended to the relativistic limit λ = 1. We denote the perturbative variables as in (2.48), and the ghosts C̄_i, C_i are considered perturbative variables of first order. The quantum action given in (2.42), expanded up to quadratic order, yields (2.49). We perform the transverse-longitudinal decomposition (2.52) on h_ij, and similarly for p^ij. In d = 2 dimensions the TT mode must be absent from this decomposition. Thus, the action (2.49) becomes (2.53). Note that the (p^T)² term disappears in the relativistic limit λ = 1, hence we assume that λ does not take this value.
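All of the "complete the square and integrate" steps used here and below rely on the same elementary Gaussian identity; schematically, for a momentum variable p coupled linearly to a source J through a flat operator D,
\[ \int \mathcal{D}p \; \exp\!\Big( -\tfrac{1}{2}\, p\, D\, p + J\, p \Big) \;\propto\; \exp\!\Big( \tfrac{1}{2}\, J\, D^{-1} J \Big), \]
which makes explicit why the inverse operator D^{-1} — nonlocal whenever D is a local differential operator — appears in the integrated action.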
By integrating over p^{ij}_{TT} and p^T, the action takes the form (2.54). The last integration is over p^i: the square involving this variable can be completed, and after the Gaussian integration the action (2.54) becomes (2.58). So far, the potential V and the factor Γ^i of the gauge-fixing condition have been left unspecified, hence all the above formulas for the projectable Hořava theory are valid in any spatial dimension d, except for the fact that in the d = 2 case the h^{TT}_{ij} mode must be dropped from all expressions. Now, to continue towards the quantum Lagrangian, we specialize to the d = 2 case, specifying the potential and the gauge-fixing condition completely. The potential of the d = 2 projectable Hořava theory, up to second order in perturbations, becomes √g V = μ √g R² = μ(Δh^T)², Eq. (2.59). The operator D^{ij} is defined in (2.43). For the factor Γ^i we take the form (2.60) introduced in Ref. [2], which was obtained by considering the anisotropic scaling of the variables of the Hořava theory and depends on constants c_1, c_2, c_3; in the transverse-longitudinal decomposition it involves the combination γ = c_1 + 2c_2 + c_3. Now we may write explicitly several elements of the action (2.58) for the d = 2 case: the terms that involve the time derivative of the shift vector, with ρ = (1 + κ)^{-1}, and the corresponding bracket in the ghost sector. The action in 2 + 1 dimensions then takes the form (2.64). We notice the presence of odd derivatives in time or space in (2.64), which are also the terms that mix n_i and the components of h_ij. These odd terms cancel for an appropriate choice of the constants, Eq. (2.65). By adjusting these constants, the final quantum path integral of the projectable 2 + 1 Hořava theory, written in Lagrangian variables and at second order in perturbations, is given by Eq. (2.66), where we have also decomposed the vectors into transverse and longitudinal parts, Eq. (2.67). The quantum Lagrangian of Eq. (2.66) coincides with the one presented in Ref. [2]. Those authors used a Faddeev-Popov procedure for fixing the gauge, hence they obtain the usual parameter σ associated with the averaging over the gauge-fixing condition; to match both Lagrangians exactly, we must set σ = 1/4. In the end, the nonlocality only affects the time derivative of the shift vector (and all propagators are regular [2]). Finally, we make a comment on the cubic order in perturbations in the ghost sector. We take the ghost sector of the action given in (2.42); its expansion up to cubic order, imposing the gauge (2.60), is given by Eq. (2.68).
This is equal to the cubic order in the ghost sector of Ref. [2], except for an additional term that we find, which is −C̄_k ∂_i Ċ^k n^i.
3 + 1 Nonprojectable theory
Classical theory
In the nonprojectable theory the lapse function N is allowed to depend on time and space, hence it represents a complete functional degree of freedom. In this case a large class of terms that depend on the vector a_i = ∂_i ln N arises in the Lagrangian [37]. We focus on the nonprojectable theory in 3 + 1 dimensions. The Lagrangian has the general form shown in (2.6), but without the restriction of projectability on N.
The criterion of power-counting renormalizability requires us to include terms of order z = 3 in 3 + 1 dimensions. The total Lagrangian, containing the z = 1, 2, 3 orders, has many terms. In this analysis we take for the potential only the z = 3 terms that contribute to the propagators, which are the dominant terms in the propagators in the ultraviolet regime [9]; they involve the coupling constants α_3, α_4, β_3, β_4. In the nonprojectable theory the lapse function N and its conjugate momentum P_N are part of the canonical variables. There is no time derivative of N in the Lagrangian, hence P_N = 0 is a constraint of the theory, denoted θ_1. The classical Hamiltonian, obtained by a Legendre transformation, is given in Eq. (3.3). The rest of the constraints are the momentum constraint, H_i = −2∇_k π^{ki}, and the constraint θ_2, Eq. (3.4). In the definition of the phase space, the main qualitative difference between the projectable and nonprojectable cases is the activation of the lapse function as a degree of freedom and the appearance of the constraint θ_2 (3.4) on the side of the nonprojectable theory. The last two terms of the constraint θ_2 are total derivatives of sixth order, hence the integral of θ_2 is equal to the primary Hamiltonian (3.3). Actually, when the z = 1 terms are included in the potential, there is a boundary contribution remaining from the integral of θ_2; moreover, a term proportional to the so-called Arnowitt-Deser-Misner energy is required for the differentiability of one of the z = 1 terms. Therefore, the general statement is that the primary Hamiltonian of the 3 + 1 nonprojectable Hořava theory can be written as the integral of θ_2 plus boundary terms. Since in this analysis we focus on the z = 3 terms, we can discard these boundary terms.
BFV quantization
Since the nonprojectable theory has second-class constraints, the definitions of the BFV quantization must be adapted, according to Ref. [35]. The involution is defined in terms of Dirac brackets, defined in the standard way with the matrix of second-class constraints. The implementation of the BFV quantization of the 3 + 1 case is parallel to the 2 + 1 case shown in Ref. [32]; here we present a summary. The matrix of Poisson brackets of the second-class constraints has a triangular form, Eq. (3.8). Since the primary Hamiltonian H_0 is equivalent to the second-class constraint θ_2, its Dirac bracket with any quantity is zero, hence V^b_a = 0. The Dirac bracket of the momentum constraint H_i with itself is equivalent to its Poisson bracket, Eq. (3.9).
This leads to the algebra of spatial diffeomorphisms, as in the projectable case, hence the coefficients U^k_{ij} are the same, and U^c_{ab} = 0 for a, b, c > i. We perform the BFV extension of the phase space in a similar way to the projectable case. The Lagrange multipliers form a new canonical pair (N^i, π_i), and the ghosts are the canonical pairs (η^a, P_a). Thus, the full phase space is given by the pairs (g_ij, π^ij), (N, P_N), (N^i, π_i) and (η^a, P_a). The BFV path integral of the nonprojectable Hořava theory is given by (3.10), with the measure and the action given in the subsequent equations. Unlike the projectable case, here the second-class constraints must be imposed explicitly. By comparing the quantum degrees of freedom with the projectable case, we see that the canonical pair (N, P_N) has been added to the phase space, but at the same time the imposition of the two second-class constraints θ_1, θ_2 compensates for the pair (N, P_N); see Eq. (3.13). The nonprojectable Hořava theory is a theory of rank one, so the corresponding ghost expansions truncate as in Eq. (3.14). In the case of second-class constraints, the consistency conditions for the BFV quantization are the analogues of (2.23), formulated with Dirac brackets. The first condition holds following the same steps as in the projectable case, operating with Dirac brackets. The second condition holds because H_1 = H_0, which is equivalent to a second-class constraint, hence its Dirac bracket is always zero. The gauge-fixed quantum Hamiltonian takes the form (3.17). As we did in the projectable case, we can adopt the form of the gauge-fixing condition used in the general BFV formalism, originally introduced for relativistic theories.
Thus, the gauge-fixing condition Φ_i = 0 and the associated fermionic function Ψ take the forms given in (2.30) and (2.31), respectively, and the Hamiltonian takes the form (3.18). Due to the form (3.8), the measure of the second-class constraints simplifies to √det M = det{θ_1, θ_2}. This measure can be incorporated into the Lagrangian by means of the ghost fields ε̄, ε. Taking the definition of θ_2 given in (3.4), the bracket {θ_1, θ_2} results in Eq. (3.20), where δ_xy ≡ δ(x^i − y^i). Once we have obtained this bracket, we may integrate over the variable P_N without further consequences, since it vanishes due to the constraint θ_1 = 0. The constraint θ_2 can be incorporated into the Lagrangian by means of a Lagrange multiplier, which we denote by ξ. Thus, the BFV path integral of the nonprojectable Hořava theory in 3 + 1 dimensions takes the form (3.21).
Quantum Lagrangian
By adapting the discussion of section 2 about the structure of the gauge-fixing condition to the nonprojectable case, we set the gauge-fixing condition to the same linear-in-π_i structure used before. The Gaussian integration over π_i leads to the path integral (3.23). The next integration we perform is over the BFV ghosts that are canonical momenta. We perform the same change of notation (2.40). For the terms of the action that depend on P_a and P̄_a it is possible to carry out the integration after completing the bilinear in these variables, as in (2.41), which yields the action of the ghost sector. Now we adopt the perturbative variables defined in (2.48), adding N − 1 = n. For the d = 3 nonprojectable theory we take the choice of Ref. [2], Eq. (3.25). The momentum constraint H_j is given in (2.50), and the Hamiltonian density H_0 takes the form (3.26). The path integral then follows. We make the decomposition (2.52) on the fields. The second-class constraint θ_2 and the measure of the second-class constraints, given by the bracket (3.20), contribute to the perturbative action with the corresponding terms, where the Lagrange multiplier ξ is regarded as a perturbative variable. After these steps, the Gaussian integration over p^{ij}_{TT} and p^T can be done by completing squares (assuming again λ ≠ 1). This yields an action in which the operators D^{ij}_1 and D^{ij}_2 are the same as in the d = 2 case, defined in (2.44) and (2.45). The last integration is over p^i; we integrate in a similar way as in the projectable case, obtaining an expression that involves the quantity B_i defined in Eq. (2.56). Finally, we make the decomposition of the vector variables shown in (2.67). Now we define the factor Γ^i of the gauge-fixing condition, adopting the analysis of Ref. [2]. Those authors found that the appropriate gauge-fixing condition in d = 3, preserving the anisotropy of the Hořava theory, is given by (3.32); the notation for the constants c_{1,2,3} has intentionally been chosen equal to that of the projectable case (2.60). In the transverse-longitudinal decomposition (2.52), the relevant combination is ν = 2c_1 + 2c_2 + c_3. As in the projectable case, the terms with an odd time derivative in (3.31) and (3.34) can be canceled by an appropriate choice of the constants c_{1,2,3}, which coincides with (2.65) since the notation for these constants is the same. With this choice, the final path integral in the Lagrangian formalism of the 3 + 1 nonprojectable Hořava theory, with the z = 3 potential, results in Eq. (3.35). In the set of propagators derived from this action, shown in appendix B, almost all are regular. The nonregular ones arise when the variables associated with the second-class constraints, ξ and ε̄, ε, are involved. This confirms that the nonlocal Lagrangian (3.35) leads to regular propagators for the original field variables, including the ghosts associated with the gauge fixing [2], but the presence of nonregular propagators persists, associated with the fact that the theory has second-class constraints, unlike the projectable case.
Comparison with General Relativity
As is well known, the classical canonical action of general relativity written in ADM variables takes the standard form, with the Hamiltonian constraint H and the momentum constraint H_i, both of first class. N and N^i play the role of Lagrange multipliers. We denote them collectively by H_a = (H, H_i) and N^a = (N, N^i).
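For reference, in standard ADM notation (conventions and overall normalizations may differ from those used in the paper) these constraints read
\[ \mathcal{H} = \frac{1}{\sqrt{g}}\Big( \pi_{ij}\pi^{ij} - \tfrac{1}{2}\pi^2 \Big) - \sqrt{g}\, R , \qquad \mathcal{H}_i = -2 \nabla_j \pi^{j}{}_{i} , \]
with π = g_{ij} π^{ij} and R the spatial Ricci scalar.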
For the BFV quantization [36] we introduce the canonical pair (N^a, π_a), hence we have the functions G_A = (H_a, π_a). For each of these functions we define a pair of fermionic ghosts (η^A, P_A), which can be split as (η^a_1, P^1_a), (η^a_2, P^2_a). The involution relations {G_A, G_B} = U^C_{AB} G_C lead to the algebra of spacetime diffeomorphisms. There is an essential qualitative difference with the Hořava theory, since in general relativity the coefficients U^c_{ab} depend on the canonical fields. This fact has important consequences in the BFV quantization [33,36]. The gauge-fixed BFV path integral takes the form (4.4). In the 3 + 1-dimensional spacetime the two canonical pairs (g_ij, π^ij), (N^a, π_a) sum to 20 degrees of freedom, and the ghosts (η^a_1, P^1_a), (η^a_2, P^2_a) sum to 16 degrees. The subtraction yields the usual four physical degrees of freedom in the phase space of quantum general relativity. The BRST charge is constructed accordingly. The gauge-fixed quantum Hamiltonian is defined by Eq. (2.18), with H_1 = 0. The appropriate form of the gauge-fixing fermionic function is given in (2.31), which, considering the four spacetime directions, takes the form Ψ = P^1_a N^a + P^2_a χ^a. The gauge-fixed Hamiltonian then follows. We proceed to the construction of the quantum Lagrangian. For the integration over π_a we adopt the same strategy used in the Hořava theory, considering in this case the four directions of spacetime diffeomorphisms. We take a gauge-fixing condition of the same linear structure as before, schematically χ^a = D^{ab} π_b + Γ^a, with the dimension of D^{ab} fixed by the isotropic scaling of general relativity. Therefore, D^{ab} is nonlocal whereas its inverse D^{-1}_{ab} is a local operator. After the integration over π_a, the quantum action takes the corresponding form, and the ghost sector involves the variation δΓ^e/δN^a, Eq. (4.10). By integrating over the corresponding Grassmann variables we get the resulting action. We now perform perturbations, obtaining the second-order action (4.12). Here we face another qualitative difference with respect to the Hořava gravity: the Lagrangian in (4.12) has no (p^T)² term, unlike the Lagrangian in Eq. (2.53). This is a consequence of the relativistic structure behind the Hamiltonian of general relativity, which implies the freezing of the scalar mode. Hence, we change the order of integration in this case, performing first the integration over the longitudinal component of the momentum p^i; this brings the corresponding terms to the Lagrangian (4.13). Now we perform the integration over p^T and p^{ij}_{TT}, obtaining (4.14). Therefore, the resulting quantum Lagrangian is completely local as long as the remaining part Γ^a of the gauge-fixing condition is local.
Conclusions
We have seen that the BFV quantization is suitable for the Hořava theory, both in its projectable and nonprojectable versions, and varying the dimension of the foliation. This extends the analysis that two of us performed in Ref. [32]. The BFV formalism provides a rich framework to study the quantum dynamics of the Hořava gravity, in particular by incorporating the BRST symmetry in terms of the canonical variables.
In the past it has been used to establish the unitarity of gauge theories, thanks to the ability to introduce a larger class of gauge-fixing conditions in the Hamiltonian formalism [33,36]. We have seen that the BFV version of the projectable (three-dimensional) theory reproduces the quantum Lagrangian presented in Ref. [2], which was obtained by fixing the gauge following the Faddeev-Popov procedure. Our results reinforce the consistency of the quantization of the theory. We have performed the integration over the momenta after specifying the dependence that the gauge-fixing condition has on them. Specifically, we have introduced a linear dependence on the momentum conjugate to the shift vector. Guided by a criterion of anisotropic scaling, we have incorporated an operator that balances the momentum in the gauge-fixing condition. It turns out that, in both versions of the Hořava theory, this operator introduces a nonlocality in the Lagrangian after the integration. Thus, we have arrived at the same result obtained in [2] of having a nonlocal quantum Lagrangian, in our case starting from a self-consistent Hamiltonian formulation provided by the BFV formalism. The original Hamiltonian theory is completely local. In Ref. [2] it was pointed out that the final nonlocality of the quantum Lagrangian, restricted to the kinetic term of the shift vector, can be eliminated by introducing the conjugate momentum of the shift vector. We have corroborated this in an inverse way, starting from the complete, self-consistent and local Hamiltonian formulation and ending with the nonlocal Lagrangian. With the aim of having a further comparison, we have performed the same procedure in general relativity, taking into account the relativistic isotropy of its field variables. In this case the operator introduced in the gauge-fixing condition is nonlocal and the quantum Lagrangian resulting after the integration is local (whenever the dependence of the gauge-fixing condition on the rest of the variables is local). Thus, we see an interesting relationship between the anisotropy of the underlying symmetry and the nonlocality of the quantum Lagrangian. The relationship has been established on very basic grounds, since it comes from the integration of the Hamiltonian theory. | 8,187.2 | 2021-12-20T00:00:00.000 | [
"Physics"
] |
Feasibility study of DCs/CIKs combined with thoracic radiotherapy for patients with locally advanced or metastatic non-small-cell lung cancer
Background The combination of dendritic cells (DCs) and cytokine-induced killer cells (CIKs) can induce an anti-tumor immune response, and radiotherapy may promote this activity. We aimed to explore the feasibility of DCs/CIKs combined with thoracic radiotherapy (TRT) for patients with locally advanced or metastatic non-small-cell lung cancer (NSCLC). Method In this study, patients with unresectable stage III/IV NSCLC, an Eastern Cooperative Oncology Group performance status (ECOG PS) of 0–2, and two or more previous cycles of platinum-based doublet chemotherapy without disease progression received TRT plus DCs/CIKs or TRT alone until disease progression or unacceptable toxicity. The primary endpoint was median progression-free survival (mPFS). In the treatment group, patients received four cycles of autologous DCs/CIKs infusion starting from the 6th fraction of irradiation. Results From Jan 13, 2012 to June 30, 2014, 82 patients were enrolled, with 21 patients in the treatment group and 61 in the control group. The mPFS in the treatment group was longer than that in the control group (330 days vs 233 days, hazard ratio 0.51, 95 % CI 0.27–1.0, P < 0.05), and the objective response rate (ORR) of the treatment group (47.6 %) was significantly higher than that of the control group (24.6 %, P < 0.05). There was no significant difference in disease control rate (DCR) and median overall survival (mOS) between the two groups (P > 0.05). The side effects in the treatment group were mild and there were no treatment-related deaths. Conclusion The combination of DCs/CIKs with TRT could be a feasible regimen for treating patients with locally advanced or metastatic NSCLC. Further investigation of the regimen is warranted.
Introduction
Lung cancer is the most commonly diagnosed cancer worldwide (1.8 million, 13.0 % of the total), and also a leading cause of cancer death (1.6 million, 19.4 % of the total) [1]. Patients with non-small cell lung cancer (NSCLC) account for more than 80 % of those with lung cancers [2]. Although much progress has been made in the last decade in lung cancer treatment, the overall 5-year survival rate is still less than 20 % [3]. More efforts are needed to improve the prognosis of NSCLC patients.
Thoracic radiotherapy (TRT) plays an irreplaceable role in treating NSCLC patients, especially those with medically inoperable or locally advanced unresectable disease [4]. Accumulating evidence shows that TRT may stimulate the anti-tumor immune response [5][6][7][8]. Tumor cells killed by irradiation with a total dose of more than 10 Gy [9] release tumor antigens that induce numerous immune modulatory molecules [10,11] and promote tumor-specific effector CD8+ T cells via dendritic cell (DC) activation [7]. DCs are the major antigen-presenting cells and play a central role in regulating and activating the anti-tumor immune response [12,13]. CIKs, which express both the T cell marker CD3+ and the NK cell marker CD56+, display strong anti-tumor activity [14]. DCs/CIKs cytotherapy is clinically efficient and can be well tolerated by tumor patients [15,16]. Based on the hypothesis that DCs/CIKs combined with TRT could benefit NSCLC patients, we sponsored a phase II clinical trial from January 2012 to June 2014 to explore the efficacy, safety and immunologic effects of DCs/CIKs combined with TRT in patients with NSCLC [17]. All enrolled stage III patients were reluctant to receive, or were not suitable for, concurrent chemoradiotherapy or radical radiotherapy because of certain conditions, such as huge primary tumors, potential risk of heart failure, respiratory dysfunction, or previous chemotherapy in another medical center. Other inclusion criteria included an age of 18 years or older at the time of signing the consent form; a life expectancy of 3 months or longer at registration; an Eastern Cooperative Oncology Group performance status (ECOG PS) of 0-2; adequate function of the liver, kidney, heart and hematopoietic system; and two or more cycles of previous platinum-based doublet chemotherapy without disease progression. No previous DCs/CIKs cytotherapy or TRT was allowed. One or more measurable lesions were necessary for therapeutic evaluation based on the Response Evaluation Criteria in Solid Tumors (RECIST 1.1) [18]. All study participants provided written informed consent. Major exclusion criteria included an acute infection; any autoimmune disease; a history of severe allergic reaction; HIV positivity; and pregnancy or nursing.
Study design and patients
A block randomization was designed at the beginning, assuming an estimated median progression-free survival (mPFS) of 6 months in the control group, a one-sided significance level of 0.1 and a power of 0.7. The target sample size was set at 120 patients (1:1), and dropouts were allowed. However, it would have taken a very long time to finish the enrollment because of the high medical cost of DCs/CIKs cytotherapy, which was excluded from medical insurance in China. Therefore, from Jan 13, 2012 to June 30, 2014, enrolled patients were assigned to the control group or the treatment group according to their own preference rather than by randomization. Patients in the control group received TRT alone, while patients in the treatment group received TRT in combination with DCs/CIKs cytotherapy starting from the 6th fraction of irradiation (Fig. 1a). The primary endpoint for this clinical trial was mPFS, and the secondary endpoints were objective response rate (ORR), disease control rate (DCR), median overall survival (mOS), PS change and side effects. Immunologic effects were also explored. After TRT, enrolled patients continued chemotherapy to reach a standard of 6 cycles in total.
Preparation of autologous DCs and CIKs
Autologous DCs and CIKs were prepared following previous studies [19][20][21] (Fig. 1b). Briefly, peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-Hypaque gradient density centrifugation, and then cultured in X-VIVO medium for 2 h. The adherent cells were collected for preparing DCs in X-VIVO medium containing granulocyte macrophage colony-stimulating factor (GM-CSF) and interleukin-4 (IL-4). Five days later, tumor necrosis factor-α (TNF-α) and MUC-1 peptide (SAPDTRPAPGSTAPPAHGVT) (GL Biochem, Shanghai, China) were added to the DC culture for another 2 days. For preparing CIKs, the non-adherent cells were cultured in X-VIVO medium containing interferon γ (IFN-γ), CD3 monoclonal antibody, and interleukin-2 (IL-2) for 10 days. The immune phenotype markers CD80, CD83, CD86, and HLA-DR for DCs and CD3, CD56 for CIKs were analyzed by flow cytometry. All cultured samples were tested for contamination by bacteria, fungi and endotoxin during the course of cell culture.
DCs/CIKs cytotherapy
At the beginning of the study (day 0), we collected PBMCs from the patients for culturing DCs and CIKs in vitro. Subsequently, over 1 × 10^7 DCs were injected subcutaneously in the lymph node-rich regions (bilateral axillary or inguinal region) on days 7, 14, 21, and 28. Over 1 × 10^9 CIKs in 100 mL of normal saline (NS) (0.9 %) were infused intravenously once a day for 4 consecutive days from day 11 to day 14 (Fig. 1a).
TRT regimens
The interval between chemotherapy and enrollment was no less than 14 days. TRT, delivered as three-dimensional conformal radiotherapy (3D-CRT) or intensity-modulated radiotherapy (IMRT), was adopted according to the NCCN guideline for patients with advanced NSCLC. Contour delineation and the radiotherapy plan were designed and confirmed by a professional radiation oncologist. TRT was delivered at 2 Gy per fraction, 5 fractions per week, to a total dose of 60-66 Gy at the planning gross tumor volume (pGTV) in 6-7 weeks. All plans were performed with the support of four-dimensional chest CT. The normal lungs received a limited radiation dose according to NCCN guidelines.
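As a quick consistency check of this schedule, the quoted total dose, fraction size and weekly rate imply
\[ \frac{60\ \text{Gy}}{2\ \text{Gy/fraction}} = 30 \ \text{fractions} \approx 6 \ \text{weeks}, \qquad \frac{66\ \text{Gy}}{2\ \text{Gy/fraction}} = 33 \ \text{fractions} \approx 6.6 \ \text{weeks}, \]
consistent with the stated 6-7 week treatment duration at 5 fractions per week.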
Assessment of clinical outcomes
According to RECIST 1.1 [18], treatment efficacy was classified as complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD). The ORR was defined as the percentage of patients with CR or PR, and the DCR was defined as the percentage of patients with CR, PR, or SD. mPFS was defined as the median time from enrollment to disease progression, while mOS was the median time from first treatment to death. Follow-up was performed at the 1st and 3rd month after TRT, then every 3 months for the first year, and every 6 months thereafter. Routine follow-up assessments included physical examinations, vital signs, computed tomographic (CT) scans, and laboratory tests.
Assessment of immunologic effects
Blood-drawing from participants was performed on day 0 and within a week after TRT (Fig. 1a). Cytokines (IL-2, IFN-γ) in serum were detected by enzyme-linked immunosorbent assay (ELISA) (R&D Systems, MN, USA) following the manufacturer's instruction. For assay of T cell populations and NK cells, 100 μL of EDTA anticoagulant blood samples were stained with corresponding antibodies (BD Bioscience), namely, anti-CD3 + , CD4 + and CD8 + for T cells, anti-CD3 + and CD56 + for NK cells, in darkness for 20 min. Then, erythrocyte lysis buffer was added. After being vortexed for 15 s and incubated at room temperature for 5 min, the samples were centrifuged to remove the supernatant and washed with PBS. After being resuspended with staining buffer, the samples were analyzed on the BD Aria flow cytometer (BD Bioscience).
PS and side effects
Adverse effects, such as insomnia, anorexia, fever, skin rash, and joint pain, were monitored once a week during the therapy and once a month during the follow-up period.
Statistical analysis
The measurement data were expressed as mean ± standard deviation (x̄ ± s) and analyzed with the independent Student t test. The enumeration data were analyzed using the χ² test. Kaplan-Meier curves with the log-rank test were used to estimate mPFS and mOS. Hazard ratios (HR) and 95 % CIs were also calculated with Cox proportional hazards regression models. P < 0.05 was considered statistically significant.
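Analyses of this kind (Kaplan-Meier estimates, log-rank comparison and a Cox model for the hazard ratio) can be reproduced with standard survival-analysis tools. The sketch below is purely illustrative: it uses the Python lifelines package rather than whatever software the authors actually used, and the toy data frame is hypothetical.

```python
# Illustrative only: not the authors' analysis code.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical toy data; a real analysis would use the per-patient trial records
# with time = days to progression/censoring and event = 1 if progression occurred.
df = pd.DataFrame({
    "time":  [330, 290, 410, 233, 180, 260, 300, 150],
    "event": [1, 1, 0, 1, 1, 1, 0, 1],
    "group": ["treatment"] * 4 + ["control"] * 4,
})

treat, ctrl = df[df.group == "treatment"], df[df.group == "control"]

kmf = KaplanMeierFitter()
kmf.fit(treat["time"], treat["event"])
print("treatment median PFS (days):", kmf.median_survival_time_)

print("log-rank p:", logrank_test(treat["time"], ctrl["time"],
                                  treat["event"], ctrl["event"]).p_value)

cph = CoxPHFitter()
cox_df = df.assign(is_treat=(df.group == "treatment").astype(int))[["time", "event", "is_treat"]]
cph.fit(cox_df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # hazard ratio for treatment vs control
```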
Patient characteristics
From January 13, 2012 to June 30, 2014, a total of 82 patients with locally advanced or metastatic NSCLC were enrolled, with 21 patients in the treatment group and 61 in the control group (Fig. 2). Clinicopathological characteristics such as age, gender, PS, clinical stage of tumor, previous systemic chemotherapy and pathological type in the treatment and control groups were analyzed. None of them showed significant differences (Table 1, P > 0.05), indicating a nearly identical baseline between the two groups.
Clinical outcomes
The median follow-up time in the treatment and control groups was 339 and 393 days, respectively. There were 0 CR, 10 PR, 9 SD and 2 PD in the treatment group, and 0 CR, 15 PR, 39 SD and 7 PD in the control group. The ORR in the treatment group was higher than that in the control group (47.6 % vs. 24.6 %, P = 0.04) (Fig. 3). However, no obvious difference in DCR was observed between the two groups (90.5 % vs. 88.5 %, P = 0.767).
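These rates follow directly from the response counts just listed:
\[ \text{ORR}_{\text{treat}} = \tfrac{10}{21} = 47.6\%, \quad \text{ORR}_{\text{ctrl}} = \tfrac{15}{61} = 24.6\%, \quad \text{DCR}_{\text{treat}} = \tfrac{10+9}{21} = 90.5\%, \quad \text{DCR}_{\text{ctrl}} = \tfrac{15+39}{61} = 88.5\%. \]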
Immunologic response
Among the 61 patients in the control group, complete immunologic results were obtained in only 20 cases before and after TRT; data were missing for the remaining patients because of refusal of blood drawing, delayed follow-up, and other reasons. These 20 cases were analyzed by assessing the baseline (Table 2) and the immunologic effects. The results for cytokines (IL-2, IFN-γ), T cell populations and NK cells were analyzed.
The serum levels of IL-2 and IFN-γ did not differ significantly between the two groups both before and after the TRT (Table 3, P > 0.05). Moreover, there were no obvious changes in the percentage of CD3 + , CD3 + CD4 + , CD3 + CD8 + , CD4 + /CD8 + T cell ratio and CD3 − CD56 + NK cells before and after TRT in treatment group (P > 0.05). However, it should be noted that there was a decrease in CD4 + /CD8 + T cell ratio after TRT in control group, with a P value close to 0.05 (Table 3, P = 0.08).
PS and side effects
At the beginning of the study, the PS in the treatment and control groups was 0.4 ± 0.6 and 0.6 ± 0.7, respectively (Table 1). At the end of TRT, the PS in the treatment and control groups was 0.9 ± 0.8 and 1.4 ± 0.6, respectively. Only a small PS increase was found in the treatment group after TRT (0.48 ± 0.7), whereas an obvious PS increase was recorded in the control group (0.9 ± 0.7); the PS increase in the treatment group was significantly lower than that in the control group (P = 0.018, Table 4). Side effects were assessed in all 21 cases in the treatment group, and in 59 of 61 cases in the control group, with incomplete follow-up information in 2 cases. The functions of the liver, kidney and heart of all participants remained normal at the end of TRT. The most common side effects were fever, anorexia, nausea, vomiting, myelosuppression, and radiation pneumonitis (Table 3). Most of them were grade I-II, except radiation pneumonitis. Grade 3 radiation pneumonitis was observed in 3 patients in the treatment group (14.3 %) and 9 patients in the control group (15.3 %). All patients recovered after suitable treatment within 2 months. There were no cases of grade 4 radiation pneumonitis and no treatment-related deaths.
Discussion
Cancer cytotherapy is a novel therapeutic approach with great potential [22][23][24]. Since the report of the first DC-based cancer vaccine clinical trial in 1995 [25], many trials have been designed and conducted [26,27]. In 2010, the Food and Drug Administration (FDA) approved the first DC-based vaccine, Provenge, for the treatment of advanced prostate cancer [23,28]. Additionally, the cytotoxic and regulatory anti-tumor effects of CIKs are also attractive and promising. The combination of DCs with CIKs is a viable adoptive cytotherapy with a strong anti-tumor effect [29,30]. It has been shown that irradiation enhances MHC I expression and changes the tumor microenvironment to promote greater infiltration of immune effector cells [31][32][33]. Tumor cells killed by irradiation release tumor antigens which are presented by ectopic DCs [10]. Both preclinical and clinical research has shown that radiotherapy combined with cytotherapy elicits a greater anti-tumor response [34,35].
As for the clinical outcomes of our study, a longer mPFS was observed in the treatment group than in the control group (330 days vs 233 days, P < 0.05), and the ORR was higher in the treatment group (47.6 % vs 24.6 %, P < 0.05). Although there was no significant difference in DCR and mOS between the two groups (P > 0.05), the positive results in mPFS and ORR are still encouraging. Thus, patients treated with DCs/CIKs combined with TRT had a better clinical benefit. In the present study, we started DCs/CIKs cytotherapy from the 6th fraction of TRT to allow the release of enough tumor antigens. Our results support the hypothesis that tumor antigens released by TRT could enhance tumor-specific killing via ectopic DCs/CIKs infusion. Regarding safety, during the combination therapy of DCs/CIKs and TRT, the majority of side effects were mild, tolerable and similar to those of TRT alone. No new safety signals were identified, and no treatment-related deaths occurred. In addition, we found a significant PS increase after TRT in the control group (P < 0.05), whereas there was only a minor PS increase in the treatment group (P > 0.05). This suggests that the combined cytotherapy helps preserve the PS of advanced patients receiving TRT. Thus, DCs/CIKs in combination with TRT shows a good safety profile.
Cancer patients often suffer from immune deficiency, including a decrease in the CD4+/CD8+ T cell ratio, especially during a long period of systemic chemotherapy [36]. In the present study, we found a tendency towards a decrease in the CD4+/CD8+ T cell ratio after TRT in the control group (P = 0.08) but not in the treatment group (Table 2). Thus, a reasonable explanation could be that radical TRT with conventional fractionation causes immune suppression in the control group, and that DCs/CIKs cytotherapy partially rescues the immune suppression induced by TRT in the treatment group.
Meanwhile, the current study detected other cytokines, such as IL-2 and IFN-γ in peripheral blood, which were supposed to play critical roles in specific immunological effects and promoting innate and adaptive immune responses [37]. The serum levels of IL-2 and IFN-γ did not change significantly after TRT in both groups (P > 0.05). Since the immune response is very complex in DC/CIK combined with TRT, further research is needed to reveal cytokine activity in the future.
Given that irradiation-mediated immune responses alter the tumor micro-environment, a growing number of studies have shown that local radiation combined with CTLA-4 blockade [38] or PD-L1 blockade [39] could promote anti-tumor immunity. Our results also suggest that the combination of cytotherapy with TRT is a novel, feasible application. It shows better clinical benefit, good tolerance, minor PS change, and promotes immunity to some extent. However, further studies with larger sample sizes are needed. In addition, due to the lack of randomization and the resulting possible bias (e.g. more wealth, better education, and better supportive care in the treatment group), activity needs to be further evaluated in a properly designed randomized trial. The standardized treatment schedule and the detailed mechanism of DCs/CIKs combined with TRT should be elucidated in ongoing research.
Conclusions
Our study confirms the efficacy and safety of the combination of DCs/CIKs cytotherapy with TRT in advanced NSCLC. Indeed, this novel strategy enhances immunity, improves ORR, prolongs mPFS, and barely changes PS, with no severe treatment-related side effects. It is therefore a feasible regimen for patients with advanced NSCLC.
"Medicine",
"Biology"
] |
Hodgkin–Huxley revisited: reparametrization and identifiability analysis of the classic action potential model with approximate Bayesian methods
As cardiac cell models become increasingly complex, a correspondingly complex ‘genealogy’ of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggest that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models.
Introduction
Cardiac cells have been one of the most popular targets of mathematical biological modelling since the field's inception in the late 1940s. This is partly owing to the difficulty of performing clinical studies on human hearts, as well as the high species dependence of electrophysiological properties such as action potential (AP) shape, duration (APD) and restitution properties that limit the applicability of studies in model systems [1]. In the face of this paucity of data, simulations from these models are used as a primary level of inference for the impact of mutations or drugs on cellular- or organ-level properties [2][3][4], and can also inform the design of future experiments. Analysis of the ways in which simulations fail to reconstruct experimental recordings often leads to a better understanding of the data needed to correct the model in the next iteration [5].
One of the most influential and enduring cellular models is the Hodgkin-Huxley AP model [6]. Though the model was constructed for the squid giant axon, the experimental set-up and formulation of model components set a standard that was adopted by many cardiac models, and persists to this day. From their original voltage clamp experimental data [7], the authors determined that the ionic conductances were best explained by a system of nonlinear ordinary differential equations (ODEs) which, unbeknown to them, can describe by direct biological analogy the opening and closing dynamics of membrane ion channels. These equations ((2.6)–(2.16) in §2) serve as the foundation for the so-called 'Hodgkin-Huxley style' formulation for ion channel modelling, where all channel transitions are modelled by ODEs. To this day, many cardiac models incorporate this formulation for modelling ion channels when performing large-scale simulations owing to its simplicity when compared to a more general Markov formulation [8].
At the time, parametrizing the Hodgkin-Huxley model required meticulously matching hand-drawn curves to experimental data. Despite the advances in model fitting techniques from these pen and paper methods, however, a commensurate increase in cardiac model complexity has made it difficult to obtain enough independent data to fully employ such methods. Advances in recording techniques such as the patch clamp experiment [9] have driven the inclusion of more ion channels and intracellular dynamics with the intention of forming a more complete representation of the true cellular environment, increasing the amount of experimentation required to fully observe a system (see [5,10] for a more complete discussion of the history of cardiac modelling). The modelling of ion channel kinetics in particular has posed a problem, as most cardiac data are macroscopic, and isolation/expression systems to study single channels can alter their native behaviour [4]. In the face of these difficulties in obtaining data for modelling experiments, many modellers borrow or adapt parameter values, or even entire model subunits, from previous models, which were often constructed for different experimental systems [11,12].
While much effort is taken to adjust model components for differences in species and/or experimental conditions, as well as to maintain certain macroscopic properties such as AP shape, comparative analyses have shown discrepancies in behaviour between sequentially fit models purporting to represent identical systems [13][14][15]. This indicates a fragility in these models that limits our confidence in their predictive power outside the range of their validated behaviour. While some of these discrepancies may be attributed to biological variability, poor documentation or justification of parameter inheritance has made it increasingly difficult to link parametrized components to original data that might capture or explain this variation [14,16], leading to a weakening link to the mechanistic underpinning of some of the more complex models being proposed today.
This devolving link between experimental data and parametrized models makes the application of modelling techniques which provide posterior distributions over parameters (rather than single optimal values) very difficult. Paradoxically, therefore, one of the most mature areas of systems biology has perhaps benefited least from the application of the advanced methods of parameter fitting and model selection that have been applied to more recently developed areas [17,18]. In light of this problem, the original Hodgkin-Huxley model becomes an ideal case study for illustrating and assessing the utility of the application of modern inference techniques to cardiac AP models based on the Hodgkin-Huxley formulation. All data used by Hodgkin and Huxley were originally and consistently produced by the authors, and therefore provide the ideal test-bed for exploring the relationship between parameter values and experimental data for these types of models.
Application of these new techniques also allows us to explore the issue of model identifiability for Hodgkin-Huxley type models, that is, the degree to which the 'optimal' parameters for the model are unique. Published models are underpinned by an assumption that the given parametrization is unique for the data that were used to fit them. This assumption is inherent in the use of classical fitting methods such as least-squares regression, which output only a single optimal point in parameter space. We know, however, that such a unique optimum is unlikely to exist in biological systems, as both inter- and intracellular variations may be reflected by variations in the biophysical parameters of the model. Therefore, by examining posterior distributions over model parameters, we can quantify and characterize model uncertainty, which can either inform us as to the ability of our data to constrain the model or the degree of biological variability in the system, as we outline below. If a cellular model were to have a non-unique optimal parametrization, with either several or infinitely many equally likely possibilities, the model may be deemed 'unidentifiable'. Unidentifiability can be divided into two types: structural unidentifiability, where the model is overly complex to describe the system (this is often also called over-parametrization), and practical unidentifiability, where there are not enough data to fully constrain the model [19]. Distinguishing between the two types of unidentifiability can be important for experimental design. Pinpointing a structural unidentifiability, which is characterized by a functional relationship between several parameters (and thus infinitely many equally optimal parametrizations), may be a cause for concern, and prompt changes in model formulation before attempting further experimentation. However, if there is a high degree of prior faith in the model formulation, this may instead suggest a biologically important redundancy in the system characterized by the functional relationship between biophysical parameters. Practical unidentifiability, on the other hand, may inform how additional experimental data might be collected in an effort to further constrain the model [19,20], or may simply be a representation of inherent variability of the biophysical parameters in the experimental system.
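As a minimal illustration of the distinction (our own toy example, not one from the paper), consider a model whose output depends on two parameters only through their product:

```latex
y(x) = a\,b\,x .
```

Any pair (a, b) with the same product fits the data equally well, so a and b are structurally unidentifiable and the posterior concentrates on the curve ab = constant no matter how much data are collected; a practical unidentifiability, by contrast, would tighten as more informative data are added.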
While there is no universally accepted automated means to assess identifiability [21], many different methods, such as parameter sensitivity analysis [2,22,23] and analysis of curvature of an objective function [24,25], have been applied to cardiac cell models, and there now exist documented concerns for model identifiability in both widely used Hodgkin-Huxley and Markov style models under common experimental protocols [21,24,26].
When experimental data are available to us, however, an appropriately chosen Bayesian method for parameter fitting can also be used to assess the identifiability of a model. Bayesian inference involves calculating a full posterior probability distribution around the model parameters, the shape and width of which can inform the modeller as to the degree of variation about the optimal parameter value. A wide, flat posterior on a parameter, for example, indicates a large number of equally optimal values, which suggests that the parameter may be unidentifiable [27]. We expect variation arising from natural biological variation to manifest itself as a well-formed distribution, the statistical properties of which can be used to draw conclusions as to the nature or source of such variation, while variation arising from insufficient data would be more erratic, related to noise in the experimental data or random choices made by the algorithm when the observed data does not provide much information about a portion of the model dynamics. Because only further experimentation can firmly distinguish the two (inherent biological variation should remain unaffected by the inclusion of additional data), the automated design of 'optimal' experiments to attempt to reduce uncertainty is an area of current interest [28].
Because fully Bayesian inference involves potentially intractable integrals, much of the research in this area has been into methods to speed up or reasonably approximate calculation of the posterior [17,29]. Approximate Bayesian computation (ABC), first employed for parameter selection by Tavare et al. [30], gives a means to generate a population of solutions by repeatedly sampling from a prior distribution over model parameters and accepting draws based on an objective function evaluation. Bayesian inference also gives us the ability to include data from multiple sources in a principled manner, often weighted by the degree of noise present or prior knowledge as to its relative significance. This allows us to include data from multiple repetitions of an experiment at once, allowing both mean behaviour and variability of the experiment to inform posterior parameter distributions.
We aim to use ABC to investigate the identifiability of the Hodgkin and Huxley model by refitting both the basic (equations (2.6)-(2.10)) and expanded (equations (2.11)-(2.16)) form to the authors' original published data. We will assess the identifiability of the model by examining the width of the resulting posterior estimates for all parameters in each model and attempt to classify unidentifiabilities as structural or practical by examining model output fluctuations and parameter correlations across the posterior. Finally, we will examine the potential of using more complex voltage protocols proposed by the original authors to further constrain the voltage-dependent model. This analysis allows not only the assessment of the ability of the original manual parameter fitting methods to match modern automated techniques, but also serves as an example for the implementation of informative parameter estimation techniques in cardiac modelling.
We have released code implementing these techniques so that modellers can apply these Bayesian parameter fitting techniques to their own systems of study, providing an unambiguous link between components of published models and experimental data. This code was built on the functional curation extension [31,32] to the Chaste cardiac simulation library [33], which allows the specification of stand-alone simulation protocols that can be applied to a range of in silico models, in order to ease the mapping between simulated and real data. The adoption of such standards would allow modellers to quantify their certainty in a published model given the experimental data employed, and thus allow for the informed adoption of a model or its components by their peers.
In §2, we present both forms of the Hodgkin-Huxley model in full detail. In §3, we discuss the details of our implementation of ABC, as well as our use of functional curation. In §4, we present the results of our ABC re-fitting experiments on the two forms of the Hodgkin-Huxley model, and analyse the properties of the resulting posteriors. In §5, we discuss the implications of the identifiability analysis for potential improvements to the design of experiments used to fit the model, and finally conclude in §6.
The Hodgkin-Huxley model
Hodgkin and Huxley treated the squid axon as an electrical circuit, with current across the membrane being carried by a capacitor or by one of three ionic currents: I_K, the current carried by potassium ions, I_Na, the current carried by sodium ions, and I_l, a catch-all leakage current. Thus, the fundamental equations for simulating membrane potential changes were as follows:

I = C_M dV/dt + I_i,  with  I_i = I_Na + I_K + I_l  and  I_x = g_x (V − V_x),

where dV/dt is the rate of change in membrane potential, C_M is the membrane capacitance, and I_i is the sum of the three ionic currents. For each ionic current x, V_x represents the reversal potential (the membrane potential at which there is no net flow of that ion) and g_x is the membrane conductance per unit area for that ion. The ionic conductances, with the exception of the leakage conductance g_l, which was assumed constant, were theorized to be explained by the following ODEs:

potassium: g_K = ḡ_K n^4 (2.6), dn/dt = α_n (1 − n) − β_n n (2.7);
sodium: g_Na = ḡ_Na m^3 h (2.8), dm/dt = α_m (1 − m) − β_m m (2.9), dh/dt = α_h (1 − h) − β_h h (2.10).
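To make the voltage clamp fitting used later concrete, the following is a minimal sketch (not the authors' released code) of how conductance traces can be generated while the membrane voltage is clamped, so that the six gating rates are constants. Under a clamp each gating ODE has the closed-form solution x(t) = x_∞ + (x_0 − x_∞) exp(−(α + β) t) with x_∞ = α/(α + β); the maximum conductances, resting gate values and rate values below are illustrative stand-ins, not values quoted in this paper.

```python
import numpy as np

def gate(t, alpha, beta, x0):
    # Closed-form solution of dx/dt = alpha*(1 - x) - beta*x for constant rates,
    # as holds while the membrane voltage is clamped to a fixed value.
    x_inf = alpha / (alpha + beta)
    return x_inf + (x0 - x_inf) * np.exp(-(alpha + beta) * t)

def clamped_conductances(t, r, g_k_bar=36.0, g_na_bar=120.0,
                         n0=0.32, m0=0.05, h0=0.60):
    # Potassium and sodium conductances over a voltage clamp, given the six
    # constant gating rates r of the limited model. Maximum conductances
    # (mS/cm^2) and resting gate values here are illustrative only.
    n = gate(t, r["alpha_n"], r["beta_n"], n0)
    m = gate(t, r["alpha_m"], r["beta_m"], m0)
    h = gate(t, r["alpha_h"], r["beta_h"], h0)
    return g_k_bar * n**4, g_na_bar * m**3 * h

# Example: traces sampled every 0.1 ms over a 12 ms clamp, as in the protocol
# described below; the rate values are arbitrary placeholders.
t = np.arange(0.0, 12.0, 0.1)
g_k, g_na = clamped_conductances(t, {"alpha_n": 0.6, "beta_n": 0.05,
                                     "alpha_m": 3.0, "beta_m": 0.2,
                                     "alpha_h": 0.01, "beta_h": 1.0})
```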
Methods
All Hodgkin-Huxley conductance data were obtained from figs 3 and 6 digitized from the original publication using Plot Digitizer (http://plotdigitizer.sourceforge.net). Plot L of the potassium conductance data (figure 3, depolarization of −6 mV) was eliminated owing to the lack of a reliable scale on the y-axis.
Model simulations
All simulations for ABC were carried out using the current development version of the Python implementation of functional curation [31,33] (https://chaste.cs.ox.ac.uk/trac/browser/projects/ FunctionalCuration).
Hodgkin-Huxley CellML model
A CellML [34] model file for Hodgkin-Huxley was annotated with metadata tags to allow the adjustment of the α and β parameters (equations (2.7), (2.9) and (2.10)) by the fitting algorithm. Similarly, the constants in equations (2.11)-(2.16) were replaced with free, externally adjustable, variables as detailed in equations (3.1)-(3.6), giving five free parameters for the voltage-dependent potassium conductance g K and nine free parameters for the voltage-dependent sodium conductance g Na .
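The precise forms of equations (3.1)–(3.6) do not survive in this extract. As a reconstruction, assuming the original Hodgkin–Huxley voltage dependencies with their numerical constants promoted to free parameters k (which yields the five potassium and nine sodium parameters mentioned above), the rate functions would take approximately the following form; the exact indexing of the k parameters is our assumption:

```latex
\begin{aligned}
\alpha_n &= \frac{k^{\alpha_n}_1\,(V + k^{\alpha_n}_2)}{\exp\!\big((V + k^{\alpha_n}_2)/k^{\alpha_n}_3\big) - 1}, &
\beta_n  &= k^{\beta_n}_1 \exp\!\big(V / k^{\beta_n}_2\big), \\
\alpha_m &= \frac{k^{\alpha_m}_1\,(V + k^{\alpha_m}_2)}{\exp\!\big((V + k^{\alpha_m}_2)/k^{\alpha_m}_3\big) - 1}, &
\beta_m  &= k^{\beta_m}_1 \exp\!\big(V / k^{\beta_m}_2\big), \\
\alpha_h &= k^{\alpha_h}_1 \exp\!\big(V / k^{\alpha_h}_2\big), &
\beta_h  &= \frac{1}{\exp\!\big((V + k^{\beta_h}_1)/k^{\beta_h}_2\big) + 1}.
\end{aligned}
```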
Voltage clamp protocol
A functional curation protocol, which details the external stimuli applied to a corresponding in silico model (parsed from a CellML file), was written to replicate the voltage clamp experiments in figs 3 and 6 of the original publication. Initial conditions for the ODEs and maximum conductance values were set as reported in tables 1 and 2 of the same [6]. Each voltage clamp experiment was carried out by redefining the model's membrane voltage to a fixed value from within the protocol, then simulating the model over a 12 ms time course from the initial state, recording the two ionic conductance values every 0.1 ms. When fitting the limited six-parameter form of the model (equations (2.7), (2.9) and (2.10)), the rate parameters α_n, β_n, α_m, β_m, α_h and β_h are defined by the protocol to be constants, which are set by the fitting algorithm. When fitting the full 14-parameter voltage-dependent model, these six rate parameters are in turn parametrized according to equations (3.1)–(3.6), with the 14 free parameters set by the fitting algorithm.
Other functional curation protocols
Threshold excitation. After reaching steady state, the model was subjected to a membrane depolarization of either −10 mV, 2 mV, 5 mV, 6 mV, or 7 mV, with only the latter being sufficient to trigger an AP under reported values for the gating rate voltage dependency parameters (figure 1b). The membrane voltage was then recorded every 0.01 ms for the duration of a 10 ms time course simulation.
Positive phase depolarization. After reaching steady state, the model was subjected to a 15 mV membrane depolarization followed by a 5 ms, 6 ms or 8 ms time course simulation. After this, the model was subjected to an additional 90 mV depolarization and the remainder of a 15 ms total time course simulation (figure 1c). Membrane voltage was recorded every 0.01 ms for the total duration of the 15 ms time course following the initial depolarization.
Oscillation induction. After reaching steady state, the membrane current of the model was clamped to −1.
Approximate Bayesian computation general settings and adaptive error shrinking
The simplest form of ABC is known as the rejection sampler. In this scheme, parameters are continually sampled from a specified prior distribution and used to simulate model output. Parameter sets that generate simulated output close to the experimental data are accepted and are added as 'particles' of a population of solutions that estimates the true posterior. Thus, for the rejection sampler, all that is required is the specification of prior parameter distributions, a distance function between simulated and experimental output, and an acceptance tolerance for this function. When the prior and posterior distributions of parameters differ greatly, however, this method is impractical, as very few samples from the prior are expected to be accepted. More sophisticated variants of ABC create a series of posterior estimates, each one drawing samples from the one before it (rather than the prior). The error tolerance is initially relaxed, leading to a high number of acceptances, but is gradually tightened between rounds of sampling. Such a scheme smooths the difference between the prior and the posterior, sequentially narrowing the range of accepted parameter values. We employed such a variation of ABC, described by Toni et al. as 'sequential Monte Carlo' (ABC-SMC), which generates a series of posterior estimates of fixed size as follows:
(i) Draw N parameter vectors from the prior distribution π(θ) to form the initial posterior estimate θ_0. Each component particle θ^(i)_0 of this initial estimate is assigned an initial, uniform weight w^(i)_0 = 1/N.
(ii) For each particle i of the next estimate θ_t:
- draw a particle from the previous posterior estimate θ_{t−1} and slightly perturb the parameter values according to a (stochastic) kernel function K(θ) to obtain θ*. Repeat this drawing and perturbation until θ* is legal (i.e. has non-zero likelihood under the original prior π(θ));
- simulate model output y* under parameters θ*. If the distance function between the simulated data and the original data, D(y*, y_true), is less than the current tolerance ε_t < ε_{t−1}, ACCEPT θ^(i)_t = θ*. Otherwise, REJECT and repeat from the previous step;
- set the weight w^(i)_t according to the probability of obtaining the particle as a perturbed draw from the previous estimate. In order to calculate this probability, the kernel function K(θ) must either be reversible (P(K(θ) = θ*) = P(K(θ*) = θ)) or the so-called 'backward kernel' K^{−1} (with P(K(θ) = θ*) = P(K^{−1}(θ*) = θ)) must be provided.
(iii) When all N particles are updated, normalize the weights w_t and begin the next iteration (t ← t + 1).
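The weight assignment above is only described in words; the standard choice in the ABC-SMC scheme of Toni et al., which we assume is what is intended here, is

```latex
w^{(i)}_t \;\propto\; \frac{\pi\big(\theta^{(i)}_t\big)}{\sum_{j=1}^{N} w^{(j)}_{t-1}\, K\big(\theta^{(j)}_{t-1} \rightarrow \theta^{(i)}_t\big)},
```

with the weights normalized to sum to one at the end of each round.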
The choice of the so-called kernel function K(θ) is important, as the perturbations it induces in the draws promote exploration around previously accepted parameter estimates, with the intention of generating better ones. Too much deviation from the original values, however, may lead to erratic behaviour in the draws and lower the acceptance rate. Thus, a kernel function must balance locality against exploration, and must take into account the scale of the model parameters in order to generate valid perturbations. The variance of the final posterior estimate can be used to quantify the identifiability of each parameter, or of the model as a whole.
The main drawback of ABC-SMC is the need to pick an appropriate 'cooling schedule' [ε_0, . . . , ε_T]. If the reduction of error demanded between rounds is too great, the algorithm will behave like the rejection sampler. If it is too small, the algorithm may take an extremely long time to run. We propose a novel variant of ABC-SMC that adaptively sets ε_t at each round: the error cut-off for the next round is taken as a quantile of the errors achieved by the current population, and is relaxed whenever the population cannot be filled within a fixed number of draws (a minimal sketch follows below). A similar strategy for automated calculation of the cooling schedule is now implemented in the ABC-SysBio Python package [35]. While the SysBio implementation allows selection of an α parameter determining the quantile of the previous population to be used for the next error cut-off (arbitrarily initially set to 0.5 in our implementation), the implementation does not adaptively adjust α if the population is not filled in a certain number of iterations, unlike in our approach. Our adaptive error shrinking is thus relatively parameter-free, though a carefully chosen α (or heuristically determined cooling schedule) would be expected to show faster performance for a specific problem. Our ABC algorithm maintained a posterior population of 100 particles and performed a maximum of 10 000 draws from the previous estimate before reattempting with a higher error threshold. A 'no improvement' threshold was set to cause the algorithm to terminate if successive rounds did not decrease the maximum error by more than 0.003. This cut-off was chosen based on the magnitude of the minimum RMSE attained by a simulated voltage clamp trace under reported parameters (equations (2.11)-(2.16)).
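The following is a minimal sketch of the adaptive error-shrinking loop as we read the description above; the object interfaces (prior, kernel, simulate, distance) and the quantile-relaxation details are our own assumptions, not the released functional curation implementation.

```python
import numpy as np

def abc_smc_adaptive(prior, kernel, simulate, distance, data,
                     n_particles=100, max_draws=10_000,
                     alpha=0.5, min_improvement=0.003):
    """Sequential ABC with an adaptively shrinking error tolerance.
    `prior`, `kernel`, `simulate` and `distance` are problem-specific
    placeholders standing in for the real model and fitting interfaces."""
    # Initial population: plain draws from the prior with uniform weights.
    particles = [prior.sample() for _ in range(n_particles)]
    weights = np.full(n_particles, 1.0 / n_particles)
    errors = np.array([distance(simulate(p), data) for p in particles])

    while True:
        q = alpha                               # quantile of previous errors
        eps = np.quantile(errors, q)            # tolerance for the new round
        new_particles, new_errors, draws = [], [], 0
        while len(new_particles) < n_particles:
            if draws >= max_draws:              # population cannot be filled:
                q = min(1.0, q + 0.1)           # relax the cut-off and retry
                eps = np.quantile(errors, q)
                draws = 0
                continue
            draws += 1
            idx = np.random.choice(n_particles, p=weights)
            theta = kernel.perturb(particles[idx])
            if prior.pdf(theta) == 0.0:         # illegal under the prior
                continue
            err = distance(simulate(theta), data)
            if err <= eps:
                new_particles.append(theta)
                new_errors.append(err)
        # Importance weights as in ABC-SMC; kernel.pdf(new, old) = K(old -> new).
        denom = [sum(w * kernel.pdf(p_new, p_old)
                     for w, p_old in zip(weights, particles))
                 for p_new in new_particles]
        new_weights = np.array([prior.pdf(p) for p in new_particles]) / np.array(denom)
        new_weights /= new_weights.sum()
        new_errors = np.array(new_errors)
        if errors.max() - new_errors.max() < min_improvement:
            return new_particles, new_weights   # negligible improvement: stop
        particles, weights, errors = new_particles, new_weights, new_errors
```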
Approximate Bayesian computation settings for fitting of Hodgkin-Huxley parameters
Prior distributions over all parameters were assumed to be independently uniform with width roughly an order of magnitude greater than the reported value. A random walk kernel distributed as a zero-centred normal with variance roughly 10% of the width of the associated prior was applied to each draw. Exact specifications of prior and kernel distributions are detailed in table 1.
For each voltage clamp experiment, data were digitized from the original Hodgkin and Huxley publication in the form of a pair of vectors: time points, t, and the ionic conductance (either sodium or potassium) at each time point, g(t). When fitting the limited form of the Hodgkin-Huxley model (equations (2.7), (2.9) and (2.10)), the experimental data from each voltage clamp were used to fit the six parameters at the experimental depolarization. Thus, the distance function employed by ABC when fitting α and β was the squared distance between the experimental conductance and the simulated conductance over the clamp time course (equation (3.7)). When fitting the full, voltage-dependent Hodgkin-Huxley model (equations (3.1)-(3.6)), all voltage clamp data were employed at once to capture time- and voltage-dependent effects. This is equivalent to including M equally weighted experimental repetitions. Thus, the distance function employed by ABC in this instance is simply the RMSE between the simulated data ĝ_j(t) and the experimental data (the per-trace error defined in equation (3.7)), averaged over all M voltage clamp experimental traces g_j (reported for a single axon preparation in figs 3 and 6 of the original publication), as in equation (3.8).
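The distance functions themselves (equations (3.7) and (3.8)) are lost in this extract. Consistent with the surrounding description (a squared-distance/RMSE per clamp trace, averaged over the M traces for the full model), we assume forms approximately as follows, where t_1, …, t_T are the sampled time points of a trace:

```latex
D_j\big(\hat{g}_j, g_j\big) = \sqrt{\tfrac{1}{T}\textstyle\sum_{i=1}^{T}\big(\hat{g}_j(t_i) - g_j(t_i)\big)^2}
\quad\text{(per-trace error, cf. (3.7))},
\qquad
D = \tfrac{1}{M}\textstyle\sum_{j=1}^{M} D_j \quad\text{(full-model error, cf. (3.8))}.
```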
Approximate Bayesian computation posteriors on gating rate parameters α and β
We began with inference on the parameters of the simplified Hodgkin-Huxley model (equations (2.6)-(2.10)), as the data used by the authors to arrive at their reported parametrizations were visually reported in the original publication. Posteriors over the six gating rate parameters (α_n, β_n, α_m, β_m, α_h and β_h from equations (2.7), (2.9) and (2.10)) were inferred by ABC using the digitized voltage clamp data as described in §3.2.2. Summaries of the final posterior estimates obtained by applying the adaptive error shrinking implementation of ABC (§3.2.1) are shown in figure 3a.
In figure 3a, we see a high homology between the values of α n and β n reported for the data depicted in fig. 3 of the original publication and the mean values of the posterior estimates produced by ABC. We also see universally small standard deviations about the mean posterior estimates, indicating a high degree of confidence that they reflect a well-constrained optimal value. When the magnitude of the depolarization is smaller, we see a slight systematic deviation of the mean posterior estimates produced by ABC from the original reported values. We attribute this to the ability of our automated method to produce a better fit than the manual methods of Hodgkin and Huxley when fluctuations in the data (∝ |min(g K ) − max(g K )|) are small. Indeed, we found the RMSE performances of all particles in these posterior estimates exceed that of the reported parametrization (not shown), indicating an improvement in fit to the experimental data by the ABC estimates. Figure 3a shows a less perfect homology between ABC mean posterior values of α m , β m , α h , β h and those reported by Hodgkin and Huxley, as well as generally larger standard deviations, which is not unexpected given the increased complexity of the function for sodium conductance (equation (2.8)). For the activation term m, which controls the initial increase of sodium conductance, we see that the 'spike' parameter α m shows lower homology with reported values for high depolarizations (as well as looser bounds) yet traces from the final ABC estimates were found to achieve better RMSE performance than traces with reported values. The higher uncertainty at large depolarizations despite improvement of fit may be explained by the low frequency of sodium conductance sampling during the AP in fig. 6 of [6], which results in very few data points falling on the spike itself. Poor definition of this portion of the curve would lead to multiple, equally fit choices for α m , and thus a larger variance in the ABC posterior. ABC mean values for the 'plateau' parameter β m are reasonably consistent with reported values with some exception towards low depolarizations, where the similar RMSE performance and loose bounds may indicate a non-unique parametrization for the complex function, resulting from low deviation in g Na for | V| < 10 mV.
For the inactivation term h, ABC posterior estimates for α h and β h are both tightly bounded and show high mean homology with reported values (particularly α h , which is effectively 0 throughout) except for low-magnitude depolarizations. At | V| < 10 mV, however, the sodium conductance inactivation gate h is effectively irrelevant, as the conductance never reaches a significant spike in activation. At these depolarization values, the shape of the sodium conductance curve begins to resemble that of potassium conductance, the model for which contains no decay term. This lack of impact of h on the observable quantity g Na explains the large variations of the posteriors around α h and β h .
Approximate Bayesian computation posteriors on voltage dependency parameters k
After fitting the α and β parameters describing the gating rates for potassium and sodium current conductance, we attempted to fit the parameters of the full Hodgkin-Huxley model (equations (3.1)-(3.6)), which describe the voltage-dependency for the said rates. The forms of these functions were empirically chosen by the authors to fit the variation of their estimates of the α and β parameters of the simplified model over a range of depolarization values. These α and β values were in turn inferred from the shape of the ionic conductance data described in figs 3 and 6 of the original publication and represented as dashed lines in figure 3a. Rather than this indirect means of fitting-first deriving values of the rate constants and then parameters for the functions thought to describe them-we sought to fit the parameters of equations (3.1)-(3.6) directly to the experimental ionic conductance data (using the distance function described in equation (3.8)). This effectively expanded the parameter space for ABC inference from the six in the previous section to 14, but also expands the training dataset from one to 12 traces.
Summaries of the final ABC posterior estimates for the potassium conductance parameters are reported in table 2. We initially note that the mean posterior estimates of k α n 2 and k α n 3 demonstrate high deviation from the reported values and wide posterior spreads, and that k β n 2 is similarly poorly constrained. As the three highly variable parameters k α n 2, k α n 3 and k β n 2 are all contained in the exponential portions of the equations, we sought to determine whether this could be a structural unidentifiability caused by the choice of an overly complex functional form. In figure 3b, we parametrized equations (3.1) and (3.2) according to each particle in the ABC posterior estimate and plotted the resulting values of α n and β n over a range of membrane voltages. In the case of a structural unidentifiability, we would expect low variation in the function output despite high variation in parameter space. Instead, we observe a relatively wide distribution around α n at all values of V, and a widening distribution around β n at low values of V when k β n 2 begins to dominate the exponential portion of the function. This suggests that fluctuations in parameter values lead to commensurate fluctuations in the observable output. Additionally, the biplot of the posterior estimate in figure 4 fails to reveal any strong correlations between parameters that might be expected of a structural unidentifiability. The marginal distributions of the highly variable parameters k α n 2, k α n 3 and k β n 2 also show one or more distinct peaks, which indicate preference of the algorithm for certain values despite the overall uncertainty. This would not be expected if fluctuations in these parameters did not affect the observable quantity g K. Together, this suggests that the full Hodgkin-Huxley model for potassium conductance exhibits a practical, rather than structural, unidentifiability.
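A minimal sketch of the check described here: each particle of the final ABC population is pushed through the rate functions over a range of membrane voltages, and the spread of the resulting curves, rather than of the raw parameters, is examined. The functional form and parameter names follow the reconstruction given earlier and are illustrative; the placeholder population below stands in for the real posterior.

```python
import numpy as np

def alpha_n(V, k1, k2, k3):
    # Assumed form of equation (3.1); np.expm1(x) = exp(x) - 1.
    return k1 * (V + k2) / np.expm1((V + k2) / k3)

# Placeholder posterior population (in practice: the final ABC particles).
rng = np.random.default_rng(0)
posterior = [{"k1": rng.uniform(0.005, 0.02),
              "k2": rng.uniform(5.0, 15.0),
              "k3": rng.uniform(5.0, 15.0)} for _ in range(100)]

V = np.linspace(-110.0, 10.0, 200)
curves = np.array([alpha_n(V, p["k1"], p["k2"], p["k3"]) for p in posterior])

# A wide band of curves despite wide parameter posteriors means the parameter
# fluctuations are visible in the observable rate (practical unidentifiability);
# a tight band would instead point to a structural one.
spread = curves.max(axis=0) - curves.min(axis=0)
```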
ABC posterior estimates for sodium conductance voltage dependency parameters are comparatively well constrained. While table 3 shows several parameters with high mean deviation from reported values (k α m 2 , k α m 3 , k β m 2 and k α h 2 ), posterior estimates for all parameters show low relative variance and 90-percentile spread with the exception of k α h 2 . This, coupled with the substantial decrease in RMSE of all particles in the posterior when compared to the model under reported parametrization (indicated in the bottom row of table 3), suggests that ABC has arrived at a relatively tightly bounded optimum exceeding that of the manual methods of the original paper. Figure 3b seems to support this, as posterior traces around α m , β m and β h notably deviate from the reported traces yet maintain a constrained form across all values of V.
In figure 3b, α h shows high input sensitivity to variation in the unconstrained parameter k α h 2 at low values of V, where k α h 2 dominates the exponential portion of equation (3.5). This is not suggestive of structural unidentifiability, as fluctuation in the posterior distribution is manifest in the observable quantity α h . A biplot visualization of the ABC posterior (figure 5) fails to show any correlation between the voltage dependency parameters of h, and the marginal distribution of k α h 2 has clear skew, indicating the ability of the algorithm to discern and penalize fluctuations in its value. This supports classification of k α h 2 as a practically, rather than structurally, unidentifiable parameter.
Posterior performance on voltage protocols
To investigate experimental designs that could be employed to further constrain the ambiguous parameters returned from our ABC analysis (or, failing that, support classification of said variation as inherently biological), we investigated the emergent behaviour of the parametrized models when subjected to the additional experimental protocols proposed in the original publication [6]. Two of these protocols were designed to probe the ability of the cell to respond to membrane potential injections of variable magnitude or timing. The 'positive phase depolarization' protocol assessed the degree to which a voltage stimulus during the recovery phase of the AP could trigger a secondary activation, while the 'sub-threshold depolarization' experiment sought to assess behaviour of the membrane potential under depolarizations insufficient to trigger a full AP. The other two protocols, 'anode break excitation' and 'oscillation induction', were designed to probe the behaviour of the membrane under new clamping conditions. The anode break protocol introduced an anodal polarization, lowering potassium conductance activation and sodium conductance inactivation, reversing the membrane current at resting potential and causing full excitation after release of the clamp. The oscillation induction experiment probed the observed oscillation of the membrane potential in response to small, sustained current injections.
For each experiment, either the sodium or potassium conductance model was allowed to vary according to the ABC posterior estimate (described in §4.2), while the other was held to default values. Simulating the membrane voltage under these conditions allowed us to attribute differences from reported behaviour to variation in the behaviour of a single channel, and thus assess the ability of the protocol to further constrain the conductance models. Under several of these protocols, models drawn from the potassium posterior failed to recreate the membrane potential behaviour reported under the original parametrization. This inability to recreate the reported behaviour may be owing to the fact that the leakage current component of the model (I l in equation (2.2)) was set by the authors after fitting the sodium and potassium conductances as a 'catch-all' to ensure matching to experimental behaviour. As such, this value is probably not a biological truth, and employing it without a similar adjustment for our parametrizations may lead to this failure to capture the same behaviour. Regardless, it appears from these results that separately fitting the potassium conductance parameters to the voltage clamp experiments with ABC failed to produce a parametrized model capable of exhibiting all of the behaviour reported by the authors.
Variable behaviour of sodium posterior
Under the anode break excitation experimental protocol, the qualitative behaviour under reported values is captured nearly universally across all posterior parametrized sodium conductance models, with variation largely constrained to the temporal placement of the excitation spike (figure 6b). This suggests that information may be gained from this protocol as to the unidentifiable parameter k α h 2, which, as in the case of the potassium n gate parameters, is unsurprising, given that the hyperpolarization of the membrane also decreases sodium current inactivation (controlled by h), leading to the reversal of membrane current and potentially exposing new ion channel kinetics.
Unlike the behaviour of the models parametrized under the potassium posterior, at least some models parametrized under the sodium posterior capture the membrane potential behaviour after nearly any depolarization (figure 7d–f). We additionally see a range of behaviour, including the reported behaviour, when examining AP activation during both the first and second depolarizations of the positive phase depolarization experiment (figure 8d–f). Interestingly, and again unlike the potassium posterior, we see that a particle in the posterior has a large outlier with regard to response to the current clamp experiment, and thus even the well-constrained model is sensitive to fluctuations within its estimates (figure 9b). We may conclude that some further information on the parameters of the model could be gleaned by examining model response to any of these protocols as well as that of the anode break experiment.
Performing ABC parameter inference on the full Hodgkin-Huxley model highlighted the value of the method's ability to quantify model uncertainty. The wide posteriors around certain parameters in the potassium (k α n 2, k α n 3 and k β n 2) and sodium (k α h 2) conductance components suggested an inability to fully fit the model under the provided experimental data. Because the output of ABC is the full population estimating the posterior over the parameters, we were able to take advantage of additional information, such as pairwise correlations and marginal distributions over the parameters, to support classification of the unidentifiabilities as practical rather than structural. This analysis suggests that additional data may be useful in further constraining the model, although without further experimentation we cannot rule out the unidentifiability being attributed to natural biological variability. In either case, this study highlights the usefulness of ABC in reporting all relevant statistics about the posterior within the final estimating population.
Discussion
The classification of the model unidentifiability as practical rather than structural suggests that Hodgkin and Huxley arrived at a model of the appropriate degree of complexity to describe their system. This also suggests that the authors could not have found a better parametrization of their model without the inclusion of additional data that could further inform the ion channel dynamics within the model. Despite this, we do observe instances where the ABC posterior estimates show notable performance gains over the reported model parametrization (table 3). This is probably a result of our fitting to data from a single recording of a single axon, whereas the authors employed unreported data from several different axons, which would be expected to exhibit variation in their conductance recordings. Using only the single-recording data available to us, it would be unwise to conclude anything significant from the deviations in mean posterior estimates and resulting gains in RMSE performance for particles in the posterior. Instead, the value of our approach lies in the quantification of confidence around the mean parameter estimates given just a sample of the data that was available to the original authors.
To assess what the original authors might have been able to accomplish with the experimental data for the protocols they proposed at the end of their paper (and armed with modern computational tools), we examined the differences in response to several complex protocols over the particles in the posterior estimates for both potassium and sodium conductance. We noted that the particles of the sodium conductance posterior showed greater consistency in the response to these protocols, both internally and when compared to the response of the reported parametrization, than the particles of the potassium posterior. This is probably owing to the presence of higher posterior variability in the potassium conductance model. The failure of all particles in the posterior to produce the same behaviour as the reported parametrization could be indicative of dependencies between the parameters of the two conductance models not captured by the separate fitting approach, or simply another constraint on the model not captured by the voltage clamp protocol.
While there appears to be additional information that could be leveraged from these experiments to further constrain both conductance models, the leveraging of this information to potentially decrease model uncertainty would be non-trivial. Each of these experimental protocols, including the standard voltage clamp, will only constrain a subset of the model parameters. Thus, fully constraining the model would require leveraging a weighted combination of the deviation from experimental data under each protocol in our distance function. Such a weighted combination of data introduces so-called 'hyperparameters' into the ABC algorithm: parameters that control how much importance the algorithm gives to each source of data. These parameters are difficult to assign, especially without knowledge of the precision of the measurements being included, and might require an additional level of parameter fitting to set. Given the sparsity of the experimental data, it is unlikely that these hyperparameters could be reliably constrained. Even using the posteriors from the voltage clamp ABC as priors for a new round of ABC employing more complex voltage protocols would require a weight to be assigned for the trade-off between prior likelihood and experimental performance. This amounts to attempting to maintain the behaviour of the model under the voltage clamp protocol while simultaneously seeking to improve performance under the new protocol. In light of these limitations, we believe full integration of simulated data from these protocols to be beyond the scope of the paper. We can only theorize as to the degree of constraint they could have produced on the model, or the support they would lend to classifying the uncertainty as inherent biological variation by failing to further constrain it.
Arriving at an optimal means to combine data from multiple protocols, or simply determining a protocol that provides the most information on a given model, are problems of 'experimental design', and will be a focus of future work with ABC parameter fitting techniques, owing to its close relationship to model identifiability.
Conclusion
ABC parameter fitting of the Hodgkin and Huxley model to the voltage clamp data reported in their original paper was able to recover precisely the rate parameters controlling sodium and potassium channel gating, but was less able to recover the parameters controlling the voltage dependency of said rates. The width of the distributions around these unidentifiable parameters, as well as their lack of correlation with each other and the relative sensitivity of the output to their fluctuations, suggested that these were practically unidentifiable, potentially requiring more data to constrain. Investigation of differential behaviour under more complex voltage protocols thought to probe the conductance dynamics more fully revealed several promising sources of further model constraint.
We hope that this study serves as both a strong affirmation of the quality of the original work done by Hodgkin and Huxley to fit their model, which appears to be near-optimal under the data available to them, as well as a template for similar identifiability analyses in current cardiac models. While we doubt that many present-day models will be able to be constrained as effectively as the Hodgkin-Huxley model, given the increased complexity and the myriad sources of data, we believe the uncertainty quantification provided by these analyses can pinpoint directions for future experimental work, or provide insight into the degree and distribution of biological variability in the system. The adoption of these Bayesian parameter fitting methods would foster a documented link between experiment and data that has slowly been lost since the original work done by the founders of the field of computational biology, Hodgkin and Huxley.
Data accessibility. A repository containing the digitized Hodgkin-Huxley data, CellML files, ABC implementation, functional curation protocols and Python scripts used to generate the results presented in this paper is freely available to the public in the Paper Tutorials section of the Chaste wiki (https://chaste.cs.ox.ac.uk/trac/wiki/PaperTutorials/HodgkinHuxleyABC), along with instructions on how to compile and execute the code, or by browsing the source code directly (https://chaste.cs.ox.ac.uk/trac/browser/projects/HodgkinHuxleyABC).
"Computer Science"
] |
Melatonin alleviates heat-induced damage of tomato seedlings by balancing redox homeostasis and modulating polyamine and nitric oxide biosynthesis
Background Melatonin is a pleiotropic signaling molecule that plays multifarious roles in plant stress tolerance. The polyamine (PAs) metabolic pathway has been suggested to eliminate the effects of environmental stresses. However, the underlying mechanism of how melatonin and PAs function together under heat stress remains largely unknown. In this study, we investigated the potential role of melatonin in regulating PAs and nitric oxide (NO) biosynthesis, and in counterbalancing oxidative damage induced by heat stress in tomato seedlings. Results Heat stress enhanced the overproduction of reactive oxygen species (ROS) and damaged the inherent defense system, thus reducing plant growth. However, pretreatment with 100 μM melatonin (7 days) followed by exposure to heat stress (24 h) effectively reduced the oxidative stress by controlling the overaccumulation of superoxide (O2•−) and hydrogen peroxide (H2O2), lowering lipid peroxidation (as inferred from malondialdehyde content) and reducing the membrane injury index (MII). This was associated with increased enzymatic and non-enzymatic antioxidant activities, achieved by regulating their related gene expression and modulating the ascorbate–glutathione cycle. The presence of melatonin induced respiratory burst oxidase (RBOH), heat shock transcription factor A2 (HsfA2), heat shock protein 90 (HSP90), and delta 1-pyrroline-5-carboxylate synthetase (P5CS) gene expression, which helped detoxify excess ROS via the hydrogen peroxide-mediated signaling pathway. In addition, heat stress boosted the endogenous levels of putrescine, spermidine and spermine, increasing the PAs contents, consistent with higher expression of PAs metabolic genes. Moreover, melatonin-pretreated seedlings had further increased PAs levels and upregulated transcript abundance, which coincided with suppression of catabolism-related gene expression. Under heat stress, exogenous melatonin increased endogenous NO content along with nitrate reductase- and NO synthase-related activities, and the expression of their related genes was also elevated. Conclusions Melatonin pretreatment positively increased the heat tolerance of tomato seedlings by improving their antioxidant defense mechanism, inducing the ascorbate–glutathione cycle, and reprogramming the PAs metabolic and NO biosynthesis pathways. These attributes facilitated the scavenging of excess ROS and increased the stability of the cellular membrane, which mitigated heat-induced oxidative stress. Electronic supplementary material The online version of this article (10.1186/s12870-019-1992-7) contains supplementary material, which is available to authorized users.
Background
Global warming has led to climate change, including heat stress, and these changes are considered a major threat to worldwide crop production [1]. Heat stress can cause protein misfolding and disorganized cellular homeostasis because of excess reactive oxygen species (ROS) accumulation, distorted protein structure, impeded protein synthesis, and overall disruption of cell division and growth from reduced water content [2][3][4]. Heat shock-induced oxidative damage occurs as a result of excess formation of singlet oxygen (1O2), superoxide radical (O2•−), hydrogen peroxide (H2O2), and hydroxyl radical (OH•) under heat stress [5]; this leads to overproduction of malondialdehyde (MDA) and thus reduces membrane stability, permeability, and mobility, and impairs protein membrane polymerization [6,7]. As sessile organisms, plants in an unfavorable environment develop an inherent antioxidative defense strategy to detoxify excess ROS, which helps to protect them from oxidative damage [8]. This efficient anti-oxidative defense mechanism consists of different enzymatic antioxidants, such as superoxide dismutase (SOD), catalase (CAT), peroxidase (POD), ascorbate peroxidase (APX), glutathione reductase (GR), monodehydroascorbate reductase (MDHAR), and dehydroascorbate reductase (DHAR), and non-enzymatic antioxidants, such as ascorbate (AsA), glutathione (GSH), carotenoids, and phenols [2,9,10]. Additionally, heat-shock proteins (HSPs) play vital roles in ROS scavenging [11], because heat stress induces APX and CAT production [12]. Moreover, heat shock transcription factor A2 (HsfA2) plays a key role in the regulation of the expression of heat-shock proteins, ascorbate peroxidase 2, and galactinol synthase 1 and 2 under high-temperature challenge [13].
Melatonin (N-acetyl-5-methoxytryptamine) is a naturally occurring low-molecular-weight multi-regulatory molecule that exists in all living organisms, including plants and animals [14,15]. Since its detection in plants, scientists' curiosity regarding melatonin has increased because of its diversified biological role as a plant master regulator and its defensive roles under capricious environmental conditions, such as extreme temperatures, salinity, drought, heavy metals, UV radiation, and oxidative stress [15][16][17][18]. Melatonin also accelerates seed germination [19], influences root and plant architecture [20], enhances growth vitality, ameliorates leaf senescence [21], regulates nitrogen metabolism [21], and alters physiological processes by inducing differential gene expression [16]. The most important function of melatonin is ROS detoxification, through the scavenging of free radicals (H2O2, O2•−) and modulation of both antioxidant enzyme activity and concentration [22,23]. Rodriguez et al. [24] reported that melatonin pretreatment protects cucumber against oxidative damage through melatonin-mediated redox signaling pathways. Under both dark and light conditions, melatonin increases APX and CAT activity, and elevates AsA and GSH content; the AsA-GSH cycle also helps to reduce dark-induced senescence [12,25]. There is a lack of research on how melatonin protects seedlings from possible damage caused by thermal stress. Consequently, the aim of this research was to elucidate melatonin's mode of action.
Polyamines (PAs) are essential stress-response biomolecules; they are small-molecular-weight nitrogenous compounds that exist ubiquitously in plants, mostly as putrescine (Put), spermidine (Spd), and spermine (Spm) [26]. PAs possess a wide variety of functions, including plant morphogenesis, reproductive stimulation, and delayed leaf senescence, and they play key roles against abiotic stresses, such as extreme temperature (high and low), salt, drought, heavy metals, osmotic stress, ultraviolet radiation stress, and submergence stress [27,28]. Additionally, PAs have a cationic charge that helps uphold membrane integrity, assists smooth enzyme function, and protects DNA, RNA, and protein structure. Therefore, plant physiological, biochemical, and molecular activities are enhanced through interactions with nucleic acids, proteins, and phospholipids [29]. Ke et al. [28] determined that supplemental melatonin alleviates salinity stress in wheat seedlings by regulating PAs metabolism. Melatonin increases iron-deficiency tolerance through increased accumulation of PA-mediated nitric oxide (NO) [30]. Shi et al. [31] noted that the PAs metabolic system also changed under oxidative stress conditions with melatonin pretreatment in Bermuda grass. Moreover, melatonin increased Put and Spd levels in carrot suspension cells, which helped reduce cold-induced apoptosis [32]. Additionally, melatonin pretreatment alleviated chilling stress in harvested peach fruits [33] and cucumber seedlings [34], effects that are closely related to PAs metabolism. These research findings revealed that melatonin may play critical roles in capricious environments by mediating PAs metabolism. Alternatively, the signaling molecule NO functions as a mediator of PAs metabolism and plant hormones, and PAs also trigger NO biosynthesis [35]. However, until now, how melatonin regulates PAs metabolism has not been entirely understood. We hypothesize that melatonin may be associated with PAs via the NO biosynthesis pathway, thus helping plants cope with high-temperature challenges. Correlations between melatonin biosynthesis and PAs metabolism, and their underlying mechanisms, could provide novel insight that can help to promote plant production and protection.
Result
Melatonin improved morphological parameters in tomato seedlings under heat stress
Fresh weight (FW) and dry weight (DW) of shoots and roots significantly decreased in heat-treated seedlings, especially root FW and DW (Table 1). Shoot and root FW were reduced by around 30 and 12%, respectively, in heat-stressed seedlings compared with those grown under normal conditions. Conversely, exogenous melatonin mitigated the temperature-induced inhibition of these growth components and facilitated better growth.
Melatonin controlled the overaccumulation of ROS in heat-stressed tomato seedlings
To investigate whether melatonin alleviates heat stress-induced oxidative stress, we first detected H2O2 and O2•− generation in tomato leaves by histochemical staining. As shown in Fig. 1a, b, we observed deeper blue staining, which indicates O2•− production, and acute brown staining, which denotes H2O2 production, on the leaf surface of high temperature-stressed tomato seedlings. Conversely, the differences among the other treated seedlings were smaller in their leaf blades, which indicates that melatonin inhibits ROS overproduction under stress conditions.
To evaluate the ROS accumulation trends under melatonin and/or heat stress, we further examined the generation rates of O2•− and H2O2 in tomato leaves (Fig. 1c, d). Under heat stress, the H2O2 and O2•− contents increased by 32 and 137%, respectively, compared with the control seedlings. By contrast, application of melatonin increased the heat-stress tolerance of seedlings by reducing the formation rates of H2O2 and O2•− in leaf tissue by 15 and 36%, respectively, compared with the plants grown solely in the high-temperature environment.
Melatonin maintained cellular membrane integrity in tomato leaves under heat stress
High temperatures damaged the cellular membranes of tomato seedling leaves, as indicated by a high MII (80%) and a greater accumulation of MDA (45% higher) compared with the control (Fig. 2). Application of 100 μM melatonin was effective in overcoming the harsh impacts of heat stress, as shown by a substantial reduction in MII (29%) and a lower MDA concentration (16%) compared with untreated heat-stressed plants.
Melatonin enhanced proline metabolism and RWC in tomato seedlings under heat stress
Heat stress induced elevated proline biosynthesis, with levels 158% greater than the corresponding control (Fig. 3a). Melatonin pretreatment combined with heat stress produced the maximum proline content, 212% greater than that of the control. To verify this, we further quantified the expression of P5CS, which is responsible for proline biosynthesis. P5CS expression was substantially upregulated (1.62-fold) in heat-stressed seedlings compared with the control plants. Melatonin pretreatment in heat-stressed seedlings further markedly upregulated P5CS expression, by 6-fold, in contrast with seedlings grown under heat stress alone (Fig. 3b).
With respect to RWC, heat-challenged seedlings showed a considerable decrease of 10% relative to normal plants; supplemental melatonin application curtailed a significant amount of this water loss, and these plants contained 7% more RWC than the untreated heat-stressed seedlings (Fig. 3c).
Melatonin balanced the antioxidant defense system in tomato seedlings under heat stress
To evaluate the role of melatonin in oxidative stress remediation, we investigated the activities of antioxidant enzymes upon exposure to heat stress (Fig. 4). Under heat stress, SOD activity declined substantially and was 1.89-fold lower than in the control; by contrast, SOD activity significantly increased and was 1.29-fold higher in melatonin-pretreated plants under high temperature than in untreated heat-stressed seedlings (Fig. 4a). Heat stress caused a marked decrease of 41% in CAT activity in leaves compared with normally grown leaves; melatonin-pretreated heat-stressed seedlings had approximately 36% greater CAT activity than untreated heat-stressed leaves (Fig. 4b). Melatonin-pretreated heat-stressed seedlings also showed markedly increased POD activity (by 23%) in contrast with untreated heat-stressed seedlings; relative to control plants, however, POD activity declined sharply in untreated heat-stressed seedlings (29%, Fig. 4c). Under heat stress, APX activity decreased by 60% compared with the control, whereas melatonin pretreatment of heat-stressed seedlings resulted in 230% greater APX activity than in untreated heat-stressed seedlings (Fig. 4d).
Melatonin induced the AsA-GSH cycle and homeostasis in tomato seedlings under heat stress
As displayed in Fig. 5a, AsA content was markedly increased by 47% after heat stress in contrast to the control plants. Moreover, relative to normal seedlings, AsA content was further increased, by 62%, in melatonin-pretreated heat-stressed seedlings. Under exposure to heat stress, the GSH content was remarkably increased (168%) in contrast to the corresponding control seedlings (Fig. 5b).
Moreover, melatonin-pretreated heat-stressed seedlings had 28% greater GSH content than the untreated heat-stressed seedlings. There were no significant differences in GR activity among the treatments except in the untreated heat-stressed seedlings (Fig. 6a): GR activity decreased sharply, by 45%, in heat-stressed plants relative to the control seedlings. By contrast, plants treated with melatonin followed by high-temperature exposure showed 94% greater GR activity than seedlings subjected to heat stress alone.
However, marked decreases of MDHAR and DHAR activities were observed in heat-stressed seedlings compared to other seedlings. Additionally, in respect to only heat-stressed seedlings MDHAR and DHAR enzyme contents rose by 91 and 56%, respectively, in melatoninpretreated heat-stressed seedlings (Fig. 6b, c). In untreated heat-stressed seedlings, the GST activity slightly decreased by around 19% compared with the control groups. In contrast, melatonin pretreatment increased GST activity by 39% than untreated heatstressed seedlings (Fig. 6d).
Melatonin modulated the transcription of enzymatic antioxidants under heat stress
To elucidate the molecular mechanism underlying how melatonin alleviates heat stress-induced oxidative damage, the transcript levels of some key genes that encode antioxidant enzymes were assayed. The results showed that the transcript levels of SOD and the other assayed antioxidant genes were upregulated in melatonin-pretreated heat-stressed seedlings (Figs. 4 and 6).
Melatonin regulated the RBOH expression in tomato leaves under heat stress
As indicated in Fig. 7a, we analyzed the tomato RBOH expression level. RBOH transcript levels were prominently elevated in heat-stressed seedlings compared with control ones. Exogenously applied melatonin with subsequent high-temperature exposure further increased RBOH expression, by 1.89-fold, compared with untreated heat-stressed plants.
Melatonin regulated the heat shock-related genes expression in tomato leaves under heat stress
As shown in Fig. 7b, c, HSP90 and HsfA2 expression levels were increased by 4.1- and 3.26-fold, respectively, in untreated heat-stressed seedlings compared with control seedlings. In contrast to untreated heat-stressed seedlings, the HSP90 and HsfA2 transcript levels were further upregulated, by 1.29- and 1.55-fold, respectively, in melatonin-pretreated heat-stressed seedlings.
Melatonin modulated endogenous levels of PAs and their genes expression in tomato leaves under heat stress
We quantified endogenous free PAs accumulation to explicate how PAs and melatonin coordinate in order to eliminate the adverse effects of thermal stress. The Put, Spd, and Spm contents significantly increased in heatstressed seedlings by 25.26, 48.24, and 24.43%, respectively, compared with the control group (Fig. 8). Melatonin supplementation further increased Put by 82.52%, Spd by 78.72%, and Spm by 247.80% relative to their corresponding control plants.
To reveal the expression profile of PAs metabolism, we performed heatmap visualization and hierarchical cluster analysis (Fig. 9). In the presence of melatonin, the mRNA levels of PAs metabolic genes showed higher expression than control. High-temperature stress exposure with or without melatonin treatments caused upregulation of the transcript levels of the assayed genes. The ADC1, ADC2, and ODC1 mRNA levels were increased in stressed seedlings, with and without melatonin, whereas ODC2 expression was downregulated in all heat-stressed seedlings compared with control seedlings. Similarly, the SAMDC1, SAMDC2, and SPMS transcript abundance also increased in response to either all heatstressed seedlings compared with control seedlings. Furthermore, in contrast to untreated heat-stressed seedlings, the SPDS1, SPDS2, SPDS3, SPDS4, and SPDS5 expression levels were also upregulated in melatoninpretreated plants. Interestingly, the PAO1 and PAO2 transcript levels were downregulated in melatonin-pretreated seedlings than untreated heat-stressed plants, which indicates that melatonin might inhibit heat-induced damage via the PAs metabolic pathway and not a catabolic process.
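The authors report heatmap visualization and hierarchical clustering via the TBtools package; the sketch below shows an equivalent visualization in Python. The gene subset and fold-change values are placeholders for illustration, not the measured data.

```python
# Hedged sketch of a clustered expression heatmap for PA-metabolism genes.
# Values are illustrative placeholders, not the study's measurements.
import pandas as pd
import seaborn as sns

expr = pd.DataFrame(
    {"Control": [1.0, 1.0, 1.0, 1.0],
     "Heat": [2.1, 1.8, 0.6, 2.5],
     "Melatonin+Heat": [3.4, 2.9, 0.4, 3.8]},
    index=["ADC1", "ADC2", "ODC2", "SAMDC1"])   # subset of the assayed genes

# row-standardize (z_score=0), cluster genes only, keep treatment order fixed
g = sns.clustermap(expr, z_score=0, cmap="vlag", col_cluster=False)
g.savefig("pa_gene_heatmap.png", dpi=300)
```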
Melatonin regulated the NO biosynthesis pathway under heat stress
As displayed in Fig. 10, untreated heat-stressed plants released 38% more NO than control seedlings. Melatonin pretreatment further increased the NO content in heat-stressed tomato seedlings, by approximately 205% compared with control plants. NR activity increased upon exposure to high temperatures whether or not melatonin was applied to the seedlings; in melatonin-pretreated heat-stressed seedlings, NR activity was markedly increased, by around 218%, relative to control plants.
In the case of NOS-like activity, there was no significant variation between control and heat-stressed seedlings, whereas NOS-like activity was dramatically increased, by 164%, in melatonin-pretreated heat-stressed seedlings compared with control seedlings. The relative transcript level of NR was increased (2.6-fold) in untreated heat-stressed seedlings compared with control seedlings, and NR expression was further upregulated (2.35-fold) in melatonin-pretreated heat-stressed seedlings relative to untreated heat-stressed seedlings. In the case of NOS, however, expression was downregulated in all heat-stressed seedlings, with or without melatonin pretreatment, compared with the control seedlings.
Discussion
Melatonin is a pleiotropic molecule that is involved in diverse plant physiological functions, including seed morphogenesis, growth, and development; root architecture; photosynthesis; and chlorophyll pigment production; and it is a master regulator and defensive player in plants facing capricious environments [16]. Cellular oxidative damage is a marker of high-temperature stress, and H 2 O 2 and O 2 •− generation, MDA content, and MII are representative of these stress markers. Heat-stressed tomato leaves conspicuously displayed deep blue spots, which indicated that they produced more O 2 •− . The leaves also had dark brown patches, which indicated H 2 O 2 accumulation; both H 2 O 2 and O 2 •− showed markedly higher generation under elevated temperatures compared with melatonin-pretreated heat-stressed seedlings (Fig. 1). These findings are consistent with prior reports that melatonin decreased H 2 O 2 and O 2 •− accumulation in kiwifruit [36], watermelon [37], cucumber [38], and Malus hupehensis [39]. The probable mechanism underlying this decrease may be that melatonin acts as an electron donor [40]. MDA often represents an essential stress symptom that forms through an auto-oxidative chain reaction as a result of ROS-induced bio-membrane damage [41]. In a capricious environment, MII and MDA concentration are closely associated with each other. Plants exposed to heat stress had sharply increased MDA levels that could damage plasma membrane integrity, which elevated MII in tomato seedlings (Fig. 2). Melatonin application decreased both MDA and MII, which is consistent with the findings of previous studies on kiwifruit [36,42], Bermuda grass [31], and tomato [43] under various abiotic stresses.
These results indicate that melatonin may be able to repair the disrupted cellular membrane and reduce heat-induced oxidative damage by balancing ROS in a high-temperature environment. HsfA2 and HSP90 are the key regulators that stimulate ROS detoxification through the H 2 O 2 -mediated signaling pathway and, therefore, increase plant thermo-tolerance. In this study, melatonin-pretreated tomato seedlings had upregulated HsfA2 and HSP90 expression compared with untreated heat-stressed seedlings (Fig. 7), which indicates that melatonin ameliorated the heat stressinduced oxidative damage caused by HsfA2 and HSP90 activation. A recent report also revealed that HsfA2 plays roles in H 2 O 2 signaling and increases heat stress memory subsistence, whereas HSP90 coordinates DNA-binding enhancement process and HSF balanced in plants exposed to heat stress, and this whole mechanism might be related to melatonin-mediated heat tolerance [22,[44][45][46]. Melatonin is a dynamic antioxidant [47,48] that extensively stimulates cellular redox homeostasis by enhancing the activity of enzymatic antioxidants, including SOD, CAT, POD, APX, GR, MDAR, and DHAR, and non-enzymatic antioxidants, including AsA and GSH [49][50][51][52]. Therefore, melatonin helps detoxify excess ROS, which helps plants survive under stressful conditions. We assayed enzyme activity and conducted expression analysis of antioxidant-related genes and observed that all the antioxidant-related enzymes activities were reduced under thermal stress. Conversely, melatonin-pretreated heat-stressed plants showed higher SOD, CAT, and POD activity relative to untreated heatstressed plants (Fig. 4). The first mechanism of defense against ROS in plants is through SOD, which eliminates O 2 •− by converting it into O 2 and H 2 O 2 [53]. In addition, CAT and POD also actively participate in scavenging H 2 O 2 , which they convert into H 2 O and O 2 [54], which indicates that these enzymes have a vital role in scavenging more H 2 O 2 , and these and the MDA results were consistent with those of melatonin-treated kiwifruit [42], wheat [55], and tea [17].
Moreover, we speculated that heat stress elevated the proline content, upregulated the proline biosynthesis gene (P5CS), and lowered the water content in leaves, whereas melatonin pretreatment further increased the proline level, P5CS expression, and leaf water content. This finding indicates that melatonin has the potential to help leaves maintain a higher water level and lower cellular osmotic potential (Fig. 3) by the proline biosynthesis pathway, which enhanced the plants' ability to cope with heat stress [56]; this was also supported by previous experimental results [57].
Alternatively, ascorbate is a vital antioxidant enzyme that substantially detoxifies ROS; APX and GR are also crucial enzymes in the AsA-GSH cycle. The activities of APX, MDHAR, DHAR, and GR enzymes were only decreased in heat stress-exposed seedlings, whereas melatonin pretreatment elevated the AsA content as a result of increased the activities of APX, MDHAR, DHAR, and GR enzymes under heat stress. All of these enzymes actively contributed to the AsA-GSH cycle, which converts the tiny non-enzymatic molecules AsA and GSH [58]. The AsA values depend upon metabolizing (APX) and recycling (MDHAR, DHAR) enzyme activity. Moreover, at the time of ROS detoxification, DHAR oxidizes GSH to GSSG. Simultaneously, GR recycles GSH. Therefore, we concluded that melatonin pretreatment might have the potential to reduce oxidative damage by inducing the AsA-GSH cycle [59,60]. To elucidate the inherent mechanisms, we quantified the expression of several related genes. Under heat stress, RBOH expression was upregulated and the expression level was further magnified in melatonin-pretreated heat-stressed plants (Fig. 7a). In melatonin-pretreated heat-stressed seedlings, the relative transcript abundance of enzymatic (SOD, CAT, POD) and non-enzymatic (APX, GR, MDHAR, DHAR, GST) antioxidant genes were upregulated (Figs. 4 and 6), which indicates that plants were more stable under heat stress because of excess ROS scavenging; these findings are consistent with those of previous research done on kiwifruit [42], watermelon [52], apple [61], and Arabidopsis [62] under various abiotic stress conditions.
PAs play a critical role in plant signaling transduction that is beneficial for counteracting the effects of different capricious environments [27]. Some previous studies determined that melatonin has a positive regulatory effects on plant development and abiotic stress (alkaline stress, cold, thermal, oxidative, and iron deficiency tolerance) management by interacting with the PAs signaling pathway [28,63]. Melatonin might ameliorate the thermal oxidative stress by interacting with the PAs and NO biosynthesis pathways. The exogenous application of melatonin elevated the endogenous free PAs level. Similarly, expression levels of different PAs biosynthesis genes were also upregulated in melatonin-pretreated heat-stressed seedlings. The transcript abundances of ADC1/2, SAMDC1/2, SPMS, and SPDS1/2/3/5/6 were upregulated (Fig. 9), and that of PAO1/2 was downregulated in melatonin-pretreated heat-stressed seedlings, and these genes are associated with Put, Spd, and Spm biosynthesis. These findings indicate that melatonin and PAs metabolism have close interactions. This finding is also similar to that of previous research performed on various crops under different stresses [30,[32][33][34]64]. Alam et al. [65] concluded that long-term heat-stressed seedlings treated with melatonin adjusted through the modulation of PAs metabolism.
Melatonin along with NO has the potential to combat different stress conditions through the L-arginine and PAs metabolic pathways [66]. However, the NOS and NR pathways are also regulated via PAs [67,68]. Our current data also highlight that the NO content, NR activity, and NOSlike activity along with the expression of their related genes were elevated in melatonin-pretreated heat-stressed tomato seedlings (Fig. 10), which indicates that melatonin triggered the NO activity [30]. Overall, melatonin enhanced mitigation of heat-induced damage through coordination with PA-and NO-mediated signaling pathways.
Conclusions
To determine how melatonin mitigates heat stress-induced adverse effects in tomato seedlings, we described a probable mechanism (Fig. 11). We observed that 100 μM exogenous melatonin treatment improved the thermal tolerance of tomato seedlings by lowering the production of ROS (H 2 O 2 , O 2 •− ) and MDA, enhancing antioxidant enzyme activity, modulating the AsA-GSH cycle, and upregulating antioxidant-related gene expression. Additionally, melatonin elevated endogenous PAs via upregulation of PAs biosynthesis genes. NO content, along with NR and NOS-like activity, also increased with melatonin supplementation. Therefore, we conclude that heat stress-induced damage was suppressed by melatonin in coordination with the PAs and NO biosynthesis pathways, which helps to detoxify the overaccumulated ROS. These findings provide novel insight into the cross-talk among melatonin, PAs, and NO in inhibiting thermal stress. To better understand this phenomenon, further investigation is needed to determine how these three molecules function collectively to alleviate heat stress-induced damage.
Plant material and growth conditions
Tomato (Solanum lycopersicum L. cv. Hezuo 903) seeds (Shanghai Tomato Research Institute, Shanghai, China) were sorted for uniform size and then sterilized with 0.1% sodium hypochlorite (NaOCl) for 5 min, followed by washing several times with deionized water; they were then placed in the dark for 36 h at 28 ± 1°C for germination. Germinated seeds were sown in plastic trays (41 × 41 × 5 cm) that contained a peat and vermiculite (2:1, v:v) mixture and cultured in a growth chamber at Nanjing Agricultural University, where the environmental conditions were maintained at 28 ± 1°C (day) and 19 ± 1°C (night), relative humidity of 65 to 75%, and a 12 h photoperiod (PAR 300 μmol m − 2 s − 1 ). After the second true leaf was fully expanded, uniformly grown seedlings were selected and transferred into containers filled with a peat and vermiculite (2:1, v:v) mixture and watered on alternate days with full-strength Hoagland solution.
Treatment and sampling
When the fourth true leaf was fully developed, the seedlings were divided into two sub-groups for the different treatments. Melatonin was applied as described in a previous experiment by Martinez et al. [69]. In the first sub-group, 80 mL of 100 μM melatonin was sprayed onto the leaves of each tomato seedling daily for 7 days; in the second sub-group, the leaves of each tomato seedling were sprayed with the same volume of water. Melatonin stock solution was prepared by dissolving melatonin in ddH 2 O with 0.01% v/v Tween-20 as a surfactant. One week after treatment, half of the melatonin-treated seedlings and half of the water-sprayed seedlings were separated and exposed to a high temperature (42°C) for 24 h [10]. After 24 h of heat treatment, leaves were harvested for subsequent analysis and immediately stored at − 80°C.
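As a quick consistency check of the working solution (assuming a melatonin molar mass of about 232.28 g mol − 1 and a 1 L batch, both of which are illustrative assumptions rather than details given in the protocol), the required mass is:
m = C × V × M ≈ (100 × 10 − 6 mol L − 1 ) × (1 L) × (232.28 g mol − 1 ) ≈ 23.2 mg of melatonin per litre of working solution.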
Measurement of growth indicators
To assess the combined effects of melatonin and heat stress on tomato seedlings, we measured growth indicators such as the fresh and dry weights of leaves and roots. Fresh weights of leaves and roots were measured with an electronic balance. For dry weight measurements, plants were oven-dried at 80°C for 72 h.
Histochemical detection of H 2 O 2 and O 2 •−
H 2 O 2 and O 2 •− were detected using 3,3′-diaminobenzidine (DAB) and nitro blue tetrazolium (NBT), respectively, following a previously described method [70] with minor modifications. For H 2 O 2 localization, leaves were vacuum-infiltrated with fresh 0.5 mg·mL − 1 DAB solution prepared in 25 mM Tris-HCl (pH 3.8) and kept for 12 h at room temperature. Brown spots appeared on the leaf surface because of the reaction between DAB and H 2 O 2 . For O 2 •− detection, the other leaf samples were immersed in 1 mg·mL − 1 NBT solution prepared in 10 mM phosphate buffer (pH 7.8) and incubated at room temperature in the dark for 12 h. Blue spots appeared on the leaves because of the reaction between NBT and O 2 •− . Both sets of stained leaf samples were bleached by boiling in 95% ethanol for 20 min to remove chlorophyll. The samples were then placed in absolute ethanol for several hours before being photographed with a digital camera.
Determination of H 2 O 2 production level
The H 2 O 2 concentration in leaves was estimated by slightly modifying the method described by Velikova et al. [71]. First, 0.2 g of leaves was homogenized with 1.6 mL 0.1% trichloroacetic acid (TCA) in an ice bath for 30 min and centrifuged at 12000×g for 20 min at 4°C. Then, 0.5 mL 0.1 M potassium phosphate buffer (pH 7.8) and 1 mL 1 M KI (potassium iodide) were added to 0.5 mL supernatant and kept in the dark for 1 h. The absorbance was measured at 390 nm. Finally, the H 2 O 2 content was quantified with a standard curve and expressed as μmol g − 1 FW.
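To illustrate the final quantification step, a minimal sketch of fitting a linear standard curve and converting an A390 reading into μmol g − 1 FW follows; the standard concentrations, absorbances, and dilution handling are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the standard-curve step for H2O2 quantification.
import numpy as np

# illustrative H2O2 standards (umol/L) and their A390 readings
std_conc = np.array([0, 10, 25, 50, 100, 200], dtype=float)
std_a390 = np.array([0.02, 0.08, 0.17, 0.33, 0.64, 1.25])

slope, intercept = np.polyfit(std_conc, std_a390, 1)   # linear calibration

def h2o2_umol_per_g(sample_a390, extract_volume_mL=1.6, sample_mass_g=0.2,
                    dilution_factor=4.0):
    """Convert a sample A390 reading into umol H2O2 per g FW.
    The dilution factor (0.5 mL supernatant brought to 2 mL with buffer and KI)
    and the use of the full extract volume are assumptions for illustration."""
    conc_umol_per_L = (sample_a390 - intercept) / slope * dilution_factor
    return conc_umol_per_L * (extract_volume_mL / 1000.0) / sample_mass_g

print(h2o2_umol_per_g(0.45))
```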
Determination of O 2 •− production rate
The O 2 •− generation rate was determined following the procedure reported by Nahar et al. [72] with some alterations. Briefly, 0.2 g of leaves was homogenized with 2 mL 50 mM phosphate buffer (pH 7.8) and centrifuged at 12000×g for 20 min at 4°C. Then, 0.5 mL 50 mM phosphate buffer (pH 7.8) and 0.1 mL 10 mM hydroxylamine hydrochloride were mixed into 0.5 mL supernatant and incubated at room temperature for 30 min. After incubation, 1 mL 17 mM sulfanilamide and 1 mL 7 mM naphthylamine were added to the mixture, which was incubated for a further 30 min. The absorbance of the mixture was measured at 530 nm. O 2 •− production was then calculated with a NaNO 2 standard curve and expressed as nmol g − 1 min − 1 FW.
Membrane injury index (MII) measurement
The membrane injury index (MII) of leaves was computed according to the method outlined by Jahan et al. [10] with a few modifications. Briefly, 0.5 g of fresh leaves was thoroughly washed with deionized water, cut into small pieces, placed into tubes filled with 20 mL deionized water, and kept at room temperature for 4-5 h in the dark on a shaker; the initial electrical conductivity (EC1) of the bathing solution was then determined with a portable conductivity meter (DDS-307, Shanghai Precision and Scientific Instrument LTD., Shanghai, China). Subsequently, the samples were boiled at 95°C for 20 min and cooled to room temperature, and the final electrical conductivity (EC2) of the bathing solution was measured. The conductivity of the deionized water (EC0) was also determined. The MII was calculated as follows:
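A commonly used relative-conductivity form, stated here as an assumption consistent with the EC0, EC1, and EC2 measurements described above (the precise expression used in the study may differ):
MII (%) = [(EC1 − EC0) / (EC2 − EC0)] × 100.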
Lipid peroxidation measurement
Lipid peroxidation was inferred based on MDA content in leaves, which was measured as described by Alexieva et al. [73] with slight adjustments. First, 0.2 g leaf samples were homogenized in a 1.6 mL 0.1% (w/v) TCA solution and centrifuged at 4°C for 20 min at 12000×g. From the supernatant, a 1.0-mL aliquot was added to 1.0 mL TCA containing 0.67% TBA; then, the sample was boiled at 95°C for 15 min and kept on ice for cooling. Subsequently, the mixture was centrifuged at 4400×g for 10 min. Then, MDA content was measured at 532 nm and 600 nm by a spectrophotometer (Evolution 300, Thermo Fisher Scientific, Waltham, MA, USA).
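The conversion from the two readings to MDA content typically uses the extinction coefficient of the MDA-TBA adduct; the coefficient (155 mM − 1 cm − 1 ), the 1 cm path length, and the per-gram scaling below are assumptions rather than values stated in this protocol:
MDA (nmol g − 1 FW) = [(A532 − A600) / (155 mM − 1 cm − 1 × 1 cm)] × [V extract (mL) / m FW (g)] × 1000.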
Proline content determination
The proline content was evaluated following the method described by Bates et al. [74]. Fresh leaf samples (0.2 g) were digested in 3% sulphosalicylic acid followed by centrifugation at 12000×g for 20 min at 4°C. The same amount of glacial acetic acid and ninhydrin solutions were incorporated in the supernatant and incubated for 30 min. Consequently, the sample was heated at 100°C for 1 h, and 5 mL toluene was added after cooling. Toluene absorbance was read at 520 nm by a spectrophotometer (Spectronic 20D, Milton Roy, Philadelphia, PA, USA).
Relative water content measurement
The relative water content (RWC) was calculated using the method established by Barrs and Weatherley [75] with some changes. Fully developed leaves were randomly detached from treated plants and immediately weighed to obtain the fresh weight (FW), then soaked in distilled water and incubated for 6 h at room temperature. Excess surface water was removed with a paper towel, and the turgid weight (TW) was recorded. Leaf samples were then oven-dried at 80°C for 72 h to obtain the dry weight (DW). RWC was calculated using the following equation: RWC (%) = [(FW − DW) / (TW − DW)] × 100.
Leaf enzyme activity assays
Fresh leaf samples (0.2 g) were ground with a chilled pestle and mortar in 1.6 mL 50 mM pre-cooled phosphate buffer (pH 7.8), and supernatants were obtained by centrifugation of the homogenate at 12000×g for 20 min at 4°C. The supernatants were then used to estimate antioxidant enzyme activities. SOD (EC 1.15.1.1) activity was determined using a modified version of the protocol described by Giannopolitis and Ries [76], and POD (EC 1.11.1.7) activity was estimated following the procedure described by Maresca et al. [77]. Briefly, a 40-μL enzyme extract was added to a 3-mL reaction mixture that contained 14.5 mM Met, 30 μM EDTA-Na 2 solution, 50 mM phosphate buffer (pH 7.8), 2.25 mM NBT solution, and 60 μM riboflavin solution, and SOD activity was monitored at 560 nm. For the POD assay, 40 μL of enzyme solution was mixed with a 3-mL reaction volume that included 0.2 M phosphate buffer (pH 6.0), 50 mM guaiacol, and 2% H 2 O 2 solution, and absorbance was quantified at 470 nm.
To determine catalase (CAT, EC 1.11.1.6) activity, the protocol described by Dhindsa et al. [78] was used. Briefly, a 0.1 mL enzyme solution was added followed by a 3-mL reaction mixture that contained 0.15 M phosphate buffer (pH 7.0) and 0.3% H 2 O 2 solution. The level of activity was calculated at 240 nm.
AsA content was quantified as previously reported by Logan et al. [80]. Briefly, 0.1 g leaf samples were homogenized in 1.5 mL 6% pre-chilled HClO 4. After grinding, the sample was centrifuged at 12000×g for 15 min at 4°C, and supernatant was collected for further analysis. For neutralization, 200 mM sodium acetate buffer (pH 5.6) was added to the supernatant and AsA was assayed at 265 nm; the absorbance reading was recorded before and after incubation of the supernatant in 1.5 units of AsA oxidase for 15 min.
GSH was assayed with the protocol priorly described by Griffith [81]. Briefly, the leaf sample (0.1 g) was ground in 1.5 mL 5% sulfosalicylic acid, and the homogenized sample was centrifuged at 4°C for 20 min at 12000×g. The supernatant was neutralized with 200 mM sodium acetate buffer (pH 5.6). Then, 5, 5-dithiobis-(2nitrobenzoic acid) was incorporated for enzymatic recycling of GSH. The GSH content was calculated by recording the absorbance at 412 nm with a spectrometer.
Glutathione S-transferase (GST) activity was assayed with GST detection kit (Solarbio Life Science, Beijing, China) following the manufacturer's instructions. First, fresh leaves (0.1 g) were ground with extraction buffer (1 mL) in an ice bath and homogenized by centrifugation at 4°C for 10 min at 8000×g with the supernatant used for testing GST. GST activity was calculated using the molar extinction coefficient 9.6 × 10 3 Lmol − 1 cm − 1 .
Glutathione reductase (GR) activity was determined with GR detection kit (Solarbio Life Science, Beijing, China) following the manufacturer's instructions. Briefly, 0.1 g leaf tissue was taken and homogenized in 1 mL extraction solution in an ice bath and centrifuged at 10000×g for 10 min at 4°C, and the supernatant was used to determine GR activity. To calculate GR, 6.22 × 10 3 L mol − 1 cm − 1 was used as the extinction coefficient.
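As an illustration of how an extinction coefficient such as the 6.22 × 10 3 L mol − 1 cm − 1 value above converts an absorbance change into an activity, a minimal sketch follows; the volumes, aliquot size, and sample mass are placeholder assumptions, not the kit's actual protocol.

```python
# Hedged sketch: Beer-Lambert conversion from dA/min to umol min^-1 g^-1 FW.
def activity_umol_min_g(delta_A_per_min, epsilon_L_mol_cm, assay_vol_L,
                        extract_vol_L, aliquot_vol_L, mass_g, path_cm=1.0):
    """Convert an absorbance change per minute into umol min^-1 g^-1 FW."""
    dC = delta_A_per_min / (epsilon_L_mol_cm * path_cm)        # mol L^-1 min^-1
    mol_min = dC * assay_vol_L                                 # mol min^-1 in the cuvette
    mol_min_total = mol_min * (extract_vol_L / aliquot_vol_L)  # scale to whole extract
    return mol_min_total * 1e6 / mass_g

# e.g. GR with the quoted coefficient 6.22e3 L mol^-1 cm^-1 (all volumes illustrative)
print(activity_umol_min_g(0.05, 6.22e3, 1e-3, 1e-3, 0.1e-3, 0.1))
```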
Nitrate reductase (NR) activity was measured with NR detection kit (Solarbio Life Science, Beijing, China) following the manufacturer's instructions. First, 0.1 g fresh leaf samples were gently washed and the water was removed from the leaf surface. Samples were incubated for 2 h in work solution under dark condition in room temperature and then kept at − 20°C for 30 min. Then, samples were taken, ground in an ice bath with 1 mL extract solution, and centrifuged at 4000×g for 10 min; the supernatant was used to determine NR activity.
Nitric oxide synthase (NOS) activity was assayed by NOS detection kit (Solarbio Life Science, Beijing, China) following the manufacturer's instructions.
NO content was quantified using NO detection kit (Solarbio Life Science, Beijing, China) according to the manufacturer's protocol.
Protein extraction
The protein content was determined using Bovine serum albumin (BSA) as the standard following the method described by Bradford [83].
Determination of endogenous free polyamines
Endogenous free polyamines content were assayed as the approaches reported by Shen et al. [84] with minor modifications. Briefly, 0.5 g leaf tissue was homogenized in 5% (v/v) cold perchloric acid and incubated on ice for 1 h. Then, homogenates were centrifuged for 20 min at 12000×g and the upper supernatant was used to determine the free PAs. A 0.7 mL aliquot was reacted with 1.4 mL NaOH (2 N) and 15 μL benzoyl chloride, and then gently vortexed the mixer and incubated for 30 min at 37°C. Later, to stop the reaction, 2 mL saturated NaCl was added to the solution. To extract benzoyl PAs, 2 mL cold diethyl ether was mixed into the solution, which was then centrifuged at 3000×g for 5 min. The extracted benzoyl PAs were evaporated to dryness and then re-dissolved in 1 mL of 64% (v/v) methanol. To separate and analyze the PAs content, we used UHPLC (Ultimate 3000, Thermo Scientific, San Jose, CA, USA) with a C18 reversed-phase column at a flow rate of 0.8 mL min − 1 .
Total RNA extraction and quantitative real-time PCR analysis
Total RNA was extracted from 0.1 g tomato leaves tissues using the RNAsimple Total RNA Kit (TIANGEN, Beijing, China) according to the manufacturer's instructions. One microgram of total RNA was reverse-transcribed into cDNA using a SuperScript First-strand Synthesis System for quantitative real-time PCR based on the manufacturer's instructions (Takara, Tokyo, Japan). The gene-specific primers were designed using DNA sequences from the NCBI database (https://www. ncbi.nlm.nih.gov/), and Sol Genomics Network (solge nomics.net) and the primer pair sequences are listed in Additional file 1: Table S1. Real-time PCR was performed on a StepOnePlus™ Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) with ChamQ Universal SYBR qPCR Master Mix (Vazyme Biotech Co., Ltd., Nanjing, China). The total reaction system volume was 20 μL, which consisted of 10 μL ChamQ SYBR qPCR Master Mix (2×), 0.4 μL ROX reference dye 1 (50×), 2 μL template cDNA (10×), 0.8 μL each specific primer (10 μM), and 6 μL sterilized ddH 2 O. Three biological replicates were performed for each reaction and the cycling conditions were as follows: 95°C for 5 min, followed by 40 cycles of denaturation at 95°C for 15 s and annealing at 60°C for 1 min, and a final extension at 95°C for 15 s. Relative expression was calculated using the 2 −ΔΔCt formula [85], and the mRNA expression level was normalized against actin (used as an internal control) and compared.
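A minimal sketch of the 2 −ΔΔCt calculation used here, with actin as the internal control; the Ct values are made up for illustration.

```python
# Hedged sketch of the 2^-ddCt relative-expression calculation.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change via 2^-ddCt, normalized to the reference gene (actin)
    and to the calibrator sample (control seedlings)."""
    d_ct = ct_target - ct_ref              # normalize to the internal control
    d_ct_cal = ct_target_cal - ct_ref_cal  # same for the calibrator sample
    return 2.0 ** (-(d_ct - d_ct_cal))

# e.g. P5CS in heat-stressed vs control seedlings (illustrative Ct values)
print(relative_expression(24.1, 18.0, 25.6, 18.2))
```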
Statistical analysis
At least five independent biological replicates were performed for each treatment, and three replicates were performed for the whole experiment. All of the data were statistically analyzed with SPSS 20.0 (SPSS Inc., Chicago, IL, USA). One way analysis of variance was performed, and statistically significant differences among the treatments were determined using Tukey's honest significant difference test at P < 0.05. A transcript expression heatmap was created using the TBtools statistic package. Origin Pro 8.0 was used to make graphs.
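A hedged sketch of the same statistical workflow in Python (one-way ANOVA followed by Tukey's HSD at P < 0.05); the group names and values are illustrative, and the original analysis was performed in SPSS.

```python
# Hedged sketch: one-way ANOVA plus Tukey's HSD on illustrative data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.01, 0.97, 1.05, 0.99, 1.02])
heat = np.array([1.45, 1.52, 1.38, 1.50, 1.41])
mel_heat = np.array([1.20, 1.15, 1.25, 1.18, 1.22])

f_stat, p_val = f_oneway(control, heat, mel_heat)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate([control, heat, mel_heat])
groups = ["control"] * 5 + ["heat"] * 5 + ["melatonin+heat"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```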
Additional files
Additional file 1: Table S1. List of primers used for qRT-PCR assays. | 8,852.2 | 2019-10-07T00:00:00.000 | [ "Biology" ] |
Dynamic Behaviors of a Two-Degree-of-Freedom Impact Oscillator with Two-Sided Constraints
The dynamic model of a vibroimpact system subjected to harmonic excitation with symmetric elastic constraints is investigated with analytical and numerical methods. The codimension-one bifurcation diagrams with respect to the frequency of the excitation are obtained by means of the continuation technique, and different types of bifurcations are detected, such as grazing bifurcation, saddle-node bifurcation, and period-doubling bifurcation, which indicates the complexity of the system considered. Based on the grazing phenomenon obtained, the zero-time-discontinuity mapping is extended from the single-constraint system presented in the literature to the two-sided elastic constraint system discussed in this paper. The Poincare mapping of double grazing periodic motion is derived, and this compound mapping is applied to obtain the existence conditions of the codimension-two grazing bifurcation point of the system. According to the deduced theoretical result, the grazing curve and the codimension-two grazing bifurcation points are validated by numerical simulation. Finally, various types of periodic-impact motions near the codimension-two grazing bifurcation point are illustrated through the unfolding diagram and phase diagrams.
Introduction
In mechanical engineering, there exist the vibroimpact phenomena widely, and systems interacting via impact have been extensively studied in recent years. Over the past years, scholars have mainly dedicated themselves to study the bifurcation phenomenon in smooth systems. Recently, a lot of work has gone into investigating nonsmooth bifurcations [1][2][3][4][5][6][7][8] of dynamical systems. e focus of investigations has gradually begun to change from a unilateral constraint system [9][10][11] to a multiconstraint system [12][13][14][15][16][17]. e impact oscillators can be divided into rigid or elastic impact oscillators according to the hardness of constraint. Shaw and Holmes [18] studied the motions of a system with a single piece rigid stop by using a one-dimensional mapping. Jiang and Wiercigroch [19] developed the concept of discontinuity geometry of rigid impact oscillators into the elastic impact oscillators, and the geometry analysis methods are applied to study the mechanisms of grazing bifurcations of system with unilateral soft constraint. Ing et al. [20] applied the methods of theoretical analyses and simulation to study different bifurcation scenarios for an impact oscillator with one-sided amplitude constraint. Du et al. [21] studied the intermittent chaos control method of a symmetrical collision system with the two-degree-of-freedom elastic double-impact system. e controlling idea of the Hopf bifurcation was applied to the system to provide a new control method for the chaos control of such system. Gritli et al. [22,23] considered a state-feedback controller and applied the Linear Matrix Inequality (LMI) approach to control the motion of the system subject to norm-bounded parametric uncertainties. Shen et al. [24,25] proposed a discrete feedback controller to suppress grazing-induced motion. Gritli and Belghith [26][27][28][29] considered the dynamic behavior of the model under the OGY-based state-feedback control, and the bifurcation phenomena were carried out via the bifurcation diagrams. Chávez et al. [30] studied a single-parameter and two-parameter bifurcations of the system that consists of two identical Duffing oscillators interacting via soft impact with the aid of COCO.
As a special bifurcation of nonsmooth systems, the grazing bifurcation is a qualitative transition between impact and nonimpact motion [31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46]. Nordmark [31,32] studied the dynamic behavior near the grazing trajectory and a nonlinear mapping is obtained. e normal form mapping for grazing bifurcation in an n-dimensional impact system is obtained in [35]. e existence condition and stability of the grazing periodic orbit are derived in a two degree-of-freedom vibroimpact system with one-sided constraint in [36]. Yin et al. [37][38][39][40][41][42] analyzed and measured the degenerate grazing bifurcations of the impact systems. e dynamic behavior of the grazing bifurcation is investigated by using the Poincare composite mapping technology, and the conditions of the degenerate grazing bifurcation are found. And the relationship between the observed bifurcations and degenerate grazing point is presented by a two-parameter continuation. Kowalczyk et al. [43] made a simple classification of codimension-two bifurcations of nonsmooth systems. Dankowicz and Zhao [44] deduced specific formulae for the local map on the vicinity of the codimensiontwo points and presented the bifurcation behavior by numerical simulations. e codimension-two grazing bifurcations in single-degree-of-freedom impact oscillators are studied and the dynamic response near the bifurcation points is presented in [45]. Xu et al. [46] investigated the codimension-two grazing bifurcation in n-degree-of-freedom impact oscillator with bilateral constraints by using a classical approach of discontinuity mappings.
Based on previous studies, we discuss the bifurcation behavior of a system with two-sided elastic constraints by using the path-following method, and abundant, complex bifurcation behaviors are exhibited, such as grazing bifurcation, saddle-node bifurcation, fold bifurcation, and period-doubling bifurcation. Although much attention has been paid to the codimension-two bifurcations of systems with rigid impacts, little work has addressed such bifurcations in soft impact systems. Therefore, on the basis of the discontinuity mappings of the rigid vibroimpact system, the zero-time-discontinuity mappings ZDM 1 and ZDM 2 of the elastic constraint vibroimpact system are deduced, and the compound mapping P is applied to obtain the existence conditions and specific mathematical expressions of the codimension-two grazing bifurcation point.
This paper is organized as follows: in Section 2, we introduce the dynamic model of a two-degree-of-freedom vibroimpact system with two-sided soft constraints. The codimension-one bifurcation analysis of the system is presented in Section 3. In Section 4, we derive a Poincare mapping near the grazing bifurcation by using the zero-time-discontinuity-mapping method, and numerical simulations according to the deduced theoretical result are presented. In Section 5, the conclusions are given.
Physical Model.
A physical model for the two-degree-of-freedom impact oscillator with masses M 1 and M 2 is described in Figure 1. The masses M 1 and M 2 are connected via linear springs and linear viscous dampers, and the mass M 1 is limited by the symmetric elastic constraints corresponding to the two discontinuity surfaces D 1 and D 2 . The excitations on the masses M 1 and M 2 are harmonic forces with amplitudes P 1 and P 2 , respectively. The excitation frequency and phase are the same for both masses, and the frequency is taken as the controlling parameter in what follows. When the displacement X 1 of the oscillator reaches D (or − D), it collides with the right (or left) elastic constraint. Damping in this model is considered as proportional damping of the Rayleigh type, which implies C 1 /K 1 = C 2 /K 2 .
Differential Equation of Motion.
The equation of motion of the physical model in Figure 1 can be expressed as follows: Introduce the following dimensionless variables and time: Systems (1) and (2) can then be expressed in normalized form, in which the dot '·' denotes differentiation with respect to the nondimensional time t. Let x = (x 1 , x 2 ) T ; it satisfies ẋ = (v 1 , v 2 ) T , and M and K denote the mass matrix and the stiffness matrix in nondimensional form, respectively. The state-space discontinuity surfaces D 1 and D 2 can be expressed accordingly, where h LC (x) denotes the distance to the left constraint D 1 and h RC (x) denotes the distance to the right constraint D 2 .
Grazing Periodic Motion.
The general solution for nonimpact motions of equation (4) can be described in the following form: x i (t) = Σ j φ ij [e − η j t (a j cos ω dj t + b j sin ω dj t) + A j sin(ωt + τ) + B j cos(ωt + τ)], where φ ij are the elements of the canonical modal matrix Ψ, a j and b j are the constants of integration determined by the initial conditions and system parameters, and A j and B j are the amplitude constants. The considered system consists of bilateral symmetric elastic stops; the initial conditions and the periodicity conditions of the orbit with double grazing bifurcation satisfy the corresponding formulae. By substituting these conditions into the general solution, the constants of integration are obtained. With appropriate system parameters, the motion of the system is periodic. The periodic motion of the system is represented by (p, q, n), where n is the number of periods, p denotes the number of collisions between the mass M 1 and the left-side constraint, and q denotes the number of collisions between the mass M 1 and the right-side constraint.
Codimension-One Bifurcation Analysis
In this section, we use the COCO toolbox to study the dynamic behavior, giving a path-following analysis with respect to the frequency of the excitation. The state space of the system is divided into three regions according to the motion state of the oscillator, and event functions are defined to describe the critical conditions of each mode. The state vector u, together with the modes, vector fields, and event functions used for the numerical simulation, is given in Table 1.
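Outside COCO, the same mode-switching structure can be reproduced with an event-detecting time integrator. The sketch below is only an illustration of that idea: the coupling, forcing, and parameter values are assumptions (the values echo those used later for the unfolding diagram), not the paper's exact nondimensional equations.

```python
# Hedged sketch: piecewise-smooth two-DOF oscillator with symmetric elastic
# stops at x1 = +/- d, integrated with event detection for stop contacts.
import numpy as np
from scipy.integrate import solve_ivp

mu_m, mu_k, zeta, mu_k0, d, omega = 0.25, 0.25, 0.2, 100.0, 10.88, 0.31

def stop_force(x1):
    # restoring force of the symmetric elastic stops, active only during contact
    if x1 > d:
        return -mu_k0 * (x1 - d)
    if x1 < -d:
        return -mu_k0 * (x1 + d)
    return 0.0

def rhs(t, u):
    x1, x2, v1, v2 = u
    f = np.sin(omega * t)                      # harmonic excitation on both masses
    # assumed coupling: M1 anchored to ground, coupled to M2, Rayleigh-type damping
    a1 = f - x1 - zeta * v1 - mu_k * (x1 - x2) - zeta * (v1 - v2) + stop_force(x1)
    a2 = (f - mu_k * (x2 - x1) - zeta * (v2 - v1)) / mu_m
    return [v1, v2, a1, a2]

# event functions playing the role of h_LC and h_RC: zeros mark stop contact
h_LC = lambda t, u: u[0] + d   # left constraint surface D1
h_RC = lambda t, u: u[0] - d   # right constraint surface D2

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0, 0.0, 0.0],
                events=(h_LC, h_RC), max_step=0.05, rtol=1e-8, atol=1e-8)
print("left-stop contacts:", len(sol.t_events[0]),
      "right-stop contacts:", len(sol.t_events[1]))
```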
In order to better show the dynamic behaviors of the oscillators under elastic constraint, we take the frequency of excitation as the controlling parameter. Because the constraints on both sides of the system are symmetric, we take the left contact time as an example and discuss the bifurcations of the system as the contact time between the mass M 1 and the left-hand spring varies with the frequency of external excitation. In the following bifurcation diagrams, a point on a solid line corresponds to a stable orbit, while a solid line in a phase diagram represents a stable solution. The vertical coordinate LTOC in the bifurcation diagrams represents the contact time between the mass M 1 and the spring on the left.
In Figure 2, we show the result of the numerical continuation of the period-1 solution with respect to the frequency ω. We start continuation from the larger value of ω; at this moment, there is no interaction between the oscillator M 1 and the stop. us, the nonimpacting solution corresponds to a horizontal line with the LTOC being zero in Figure 2. From here, we can find a stable (0, 0, 1) solution. In the direction of decreasing ω, we find a grazing bifurcation point where the orbit makes tangential contact with both the left and right constraint surfaces (denoted as DGR1) at ω ≈ 2.768 as shown in Figure 3(a), after which the impacting motion occurs. Very close to the grazing point, a saddlenode bifurcation is detected. Here, a Floquet multiplier of the periodic solution crosses the unit circle from the inside, and therefore the stability is lost. e orbit goes to the direction of increasing ω until the second saddle-node bifurcation SN2 is encountered at ω ≈ 6.021. And the solution obtains stability; after this point, the orbit turns to the direction of decreasing ω. If we trace this stable branch, the third saddle-node bifurcation SN3 occurs at ω ≈ 2.639, and the orbit loses stability again. If ω is decreased further, another double grazing bifurcation point DGR2 shown in Figure 3(b) is found. As shown in Figure 2(c), in the direction of increase of ω, the saddle-node bifurcation point SN4 and period-doubling bifurcation point PD1 are detected in turn. And the stability of the periodic solution changes. Along the unstable (2, 1, 1) orbit, we detect the perioddoubling bifurcation point PD2 and the saddle-node bifurcation point SN5; finally, the stability is lost. is unstable impacting solution is traced further as shown in Figure 2(d). A saddle-node bifurcation point SN6 occurs, and the orbit regains stability. When ω is increased along the stable (2, 1, 1) orbit, the period-doubling bifurcation PD3 is found, and the stable orbit becomes unstable (2, 1, 1) orbit again. By tracing this unstable branch, we can detect the left grazing point where the solution is tangent to the left constraint (denoted as LGR) in (1, 1, 1) orbit at ω ≈ 2.132. If ω is increased further, a period-doubling bifurcation PD4 in (1, 1, 1) orbit is encountered at ω ≈ 2.435, and the perioddoubling bifurcation makes the (1, 1, 1) orbit stable. If we trace this period orbit in the direction of increasing the frequency ω, the saddle-node point SN3 is detected. After this point, the branch of (1, 1, 1) orbit turns to the direction of decreasing ω until a period-doubling point PD5 appears, and then the solution loses the stability. As shown in Figure 2(e), as ω is decreased further along the unstable (1, 1, 1) branch, a grazing bifurcation where the orbit makes tangential contact with right constraint (denoted as RGR) is detected, after which the impact between the oscillator M 1 and the right spring occurs, and then the unstable (1, 2, 1) branch emerges. In the direction of decreasing the frequency ω, a period-doubling bifurcation PD6 in (1, 2, 1) orbit makes the (1, 2, 1) branch stable again. When we decrease ω further, a saddle-node bifurcation SN7 in (1, 2, 1) orbit is detected, and the solution loses the stability. e branch has now switched to the direction of increasing the parameter ω. Along this branch, we find another saddle-node bifurcation SN8 in (1, 2, 1) orbit at ω ≈ 2.373. If ω is decreased further, a period-doubling bifurcation PD7 is detected. Finally, the solution loses stability.
By tracing the period-1 solutions via the continuation platform COCO, we detect eight period-doubling points which give rise to four period-2 branches. e path-following analysis of the period-2 branch with respect to the frequency ω is shown in Figure 4.
In Figure 4(a), starting from the period-doubling bifurcation point PD1 in (2, 1, 1) orbit, and an unstable (4, 2, 2) orbit can be found through period-doubling bifurcation. In the vicinity of the period-doubling bifurcation point PD1, we detect a grazing point LGR1 after which the (4, 2, 2) orbit becomes (3, 2, 2) orbit. As ω is increased, a period-doubling bifurcation PD and a saddle-node bifurcation SN in (3, 2, 2) orbit are detected respectively, and the stability of periodic solution changes. If the frequency is reduced to ω ≈ 2.064 a grazing bifurcation LGR2 in (3, 2, 2) orbit appears, where the solution makes tangential contact with the left stop, after which the branch of unstable (3, 2, 2) orbit becomes the branch of unstable (4, 2, 2) orbit. Later, as ω is increased, we find another saddle-node bifurcation SN and period-doubling bifurcation PD in (4, 2, 2) orbit; finally, the solution becomes stable. Along this stable branch, finally, it returns to the period-doubling bifurcation point PD2 of the period-1 orbit.
We start continuation at the period-doubling bifurcation point PD3 of the period-1 branch, and a period-doubling bifurcation PD is detected when ω is increased. Below this value, we find a (4, 2, 2) branch, which terminates at the grazing bifurcation point LGR3 and continues as a (3, 2, 2) orbit. Along this (3, 2, 2) orbit, a period-doubling bifurcation point PD and a saddle-node bifurcation point SN are detected, respectively, and the stability of the periodic solution changes. As ω is increased, we find another grazing bifurcation point LGR4, after which the (3, 2, 2) orbit becomes a (2, 2, 2) orbit. Finally, the period-2 branch returns to the period-doubling point PD4 of the period-1 orbit.
Table 1: The modes, vector fields, and event functions used for numerical simulation in COCO (columns: Interval, Mode, Vector field, Event function).
The results of the numerical continuation of the other period-2 impacting solutions with respect to the frequency ω are presented in Figure 4(b). Starting from the period-doubling bifurcation point PD5 of the period-1 orbit, we can detect a stable branch until a period-doubling bifurcation is reached at ω ≈ 2.386, and then the orbit loses stability. When we trace this branch further in the direction of decreasing frequency ω, a grazing bifurcation RGR1 is found, where the (2, 2, 2) orbit becomes a (2, 3, 2) orbit. As ω is decreased, a saddle-node bifurcation occurs. Below this value, we find a small window of stability, which ends at a period-doubling bifurcation point PD, and then the branch loses stability. Along this unstable branch, another grazing bifurcation point RGR2 is detected, where the (2, 3, 2) orbit becomes the (2, 4, 2) orbit. Finally, the orbit returns to the period-doubling bifurcation point PD6 of the period-1 orbit. A similar behavior can be observed around another period-2 orbit, which is shown in the upper part of Figure 4(b).
According to the discontinuity mapping of the rigid constraint system introduced by Nordmark [4], the discontinuity mapping is extended to the elastic unilateral constraint system. As shown in Figure 5, there is a grazing trajectory with the discontinuous boundary. Consider the point x 1 near the grazing point x * ; therefore, it can be seen that the trajectory (x 1 , x 2 , x 3 , x 4 ) passes through the discontinuous boundary. From the reverse direction of its trajectory, the time required to move from the point x 3 to the point x 0 is the same as that from the point x 1 to the point x 3 . So, we assume that there is an instantaneous jump from x 1 to x 0 ; then, the zero-time-discontinuity mapping can be defined as the mapping from x 1 to x 0 . For the specific process of discontinuity-mapping expression of the elastic constraint system, see [4].
Based on the previous analysis, we extend the discontinuity mapping of elastic constraint systems from a unilateral constraint to bilateral constraints. The two corresponding zero-time-discontinuity mappings ZDM 1 and ZDM 2 near the two grazing points x * 1 and x * 2 can be written, respectively, as follows: where Q = (0, −μ k0 , 0, 0, 0) T , h RC max (x) represents the maximum displacement across the right collision surface, and h LC min (x) represents the minimum displacement across the left collision surface.
Figure 5: The sketch map of the zero-time-discontinuity mapping near the grazing trajectory.
The two zero-time-discontinuity mappings ZDM 1 and ZDM 2 are defined in the vicinity of the grazing points x * 1 and x * 2 , respectively, so the Poincare mapping P near the grazing trajectory has the following form:
The Poincare Mapping.
The mapping P 1,smooth can be expanded near the grazing point x * 1 as follows: Similarly, we expand the event function h LC near x * 1 , and h LC then has the following form: As the point x * 1 is on the impact surface D 1 , it needs to satisfy h LC (x * 1 ) = 0, so that (14) can be expressed as And, consequently, Similar to the above analysis, we have
Conditions of Codimension-Two Grazing Bifurcation.
If a starting point near the grazing point, on either the impact side or the nonimpacting side, converges to the grazing point under iteration of mappings (16) and (17), the grazing periodic orbit is stable. Consider an impact point x in the neighborhood of the grazing point. After the iteration of the mapping P 1 , it reaches the nonimpacting side, which satisfies the corresponding condition; and after the iteration of the mapping P 2 , the oscillator collides with the discontinuity surface D 1 again, which satisfies the corresponding condition. This shows that if the oscillator collides with the impact surface D 1 again after iteration, the impact will persist, which results in a large stretch of the orbit in a certain direction. Therefore, the grazing periodic orbit is unstable.
Based on the above analysis, if
h RC,x (x * 2 ) P 1,smooth,x (x * 1 ) (P 2,smooth,x (x * 2 ) P 1,smooth,x (x * 1 )) (j−1) Q < 0, it means that the oscillator changes from the impact side to the nonimpacting side after several iterations but finally collides with the impact surface D 1 again, and then the impact will be perpetuated. Therefore, the grazing periodic orbit loses stability.
Similarly, if the oscillator collides with the impact surface D 2 again after iteration and the impact is perpetuated, then the grazing periodic orbit is unstable. According to the above analysis, if the analogous condition holds, the oscillator changes from the impact side to the nonimpacting side after several iterations but finally impacts the surface D 2 again, and the impact will be perpetuated. Thus, the grazing periodic orbit loses stability.
In the same way, if the corresponding condition holds, then the oscillator impacts the surfaces D 1 and D 2 after iteration. If h LC,x (x * 1 ) P 2,smooth,x (x * 2 ) (P 1,smooth,x (x * 1 ) P 2,smooth,x (x * 2 )) j Q < 0, this means that the oscillator collides again with the discontinuity surfaces D 1 and D 2 after several iterations, and the impact will be perpetuated. Thus, the grazing periodic orbit loses stability.
According to the above analysis, four special cases can be obtained, one of which is h RC,x (x * 2 ) P 1,smooth,x (x * 1 ) (P 2,smooth,x (x * 2 ) P 1,smooth,x (x * 1 )) (j+1) Q = 0. Based on the definition in the literature [24], the points corresponding to these four cases are the codimension-two grazing bifurcation points. Let ξ n = 0 (n = 1, 2, 3, 4) express the conditions that the codimension-two grazing bifurcation points satisfy; then, they can be written for the four cases as follows: Next, we take the third case (H3) as an example to simplify the calculation formula. Similarly, we can simplify the calculation formulae of the codimension-two bifurcation points for the other three cases (H1), (H2), and (H4). Since the system is symmetrically constrained, ξ 1 and ξ 4 , and ξ 2 and ξ 3 , give the same results, respectively, after simplification. The corresponding results are as follows: where P (n) 12 and P (n+1) 12 denote the (1, 2) elements of the matrices (P 2,smooth,x (x * 2 ) P 1,smooth,x (x * 1 )) n and P 1,smooth,x (x * 1 ) (P 2,smooth,x (x * 2 ) P 1,smooth,x (x * 1 )) n , respectively.
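To make the last expressions concrete, a minimal numerical sketch of evaluating those (1, 2) entries is given below; the Jacobian matrices are random placeholders standing in for P 1,smooth,x (x * 1 ) and P 2,smooth,x (x * 2 ), which in practice would come from variational integration of the flow between the grazing points.

```python
# Hedged sketch: (1,2) entries of the matrix products appearing in xi_n.
import numpy as np

rng = np.random.default_rng(0)
P1x = rng.standard_normal((5, 5))   # placeholder for P_{1,smooth,x}(x1*)
P2x = rng.standard_normal((5, 5))   # placeholder for P_{2,smooth,x}(x2*)

def p12_elements(P1x, P2x, n):
    """(1, 2) entries (0-based index [0, 1]) of (P2x @ P1x)^n and P1x @ (P2x @ P1x)^n."""
    M = np.linalg.matrix_power(P2x @ P1x, n)
    return M[0, 1], (P1x @ M)[0, 1]

for n in range(1, 4):
    print(n, p12_elements(P1x, P2x, n))
```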
In order to further study the dynamic behavior of the system in the region near a codimension-two grazing bifurcation point, we first fixed μ k = 0.25, μ m = 0.25, ζ = 0.2, and μ k0 = 100; then ω = 0.3096, d = 10.88 corresponds to the codimension-two grazing bifurcation point. By changing the values of the parameters ω and d in the vicinity of ω = 0.3096, d = 10.88, different types of periodic-impact motions can be obtained through numerical simulation over a large number of parameter combinations. Finally, according to the dynamic behavior of the system, the parameter plane is divided into several regions. The unfolding diagram near the codimension-two grazing bifurcation point is drawn in Figure 8, where the dashed line represents the grazing curve.
Conclusions
For a two-degree-of-freedom system with symmetric elastic constraints, the codimension-one bifurcations of the system are discussed in detail by using the continuation method, and grazing bifurcation, saddle-node bifurcation, and period-doubling bifurcation are obtained. To the best of our knowledge, there has been little research on the codimension-two grazing bifurcations of systems with symmetric elastic constraints. Then, based on the traditional method of discontinuity mapping under a single constraint, we extend it to the impact oscillator with two-sided elastic constraints, and the specific mathematical expressions of the zero-time-discontinuity mapping of the symmetric elastic constraint system are deduced after a complicated calculation. By combining the zero-time-discontinuity mappings and the local smooth Poincare maps, the two composite mappings P 1 and P 2 are obtained. According to the composite mappings, the criteria for the codimension-two grazing bifurcation point under four different conditions are given. The grazing curve and the codimension-two grazing bifurcation points are shown by numerical simulation. A series of complex dynamic behaviors in the vicinity of the codimension-two grazing bifurcation point are also presented, which reveal the rich dynamics of the impact system with two-sided elastic constraints.
Data Availability
The simulation data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 5,635 | 2021-03-31T00:00:00.000 | [ "Engineering" ] |
Conceptual Paper: Analyzing Climate-Induced Financial Risks in the Langkawi Fishing Sector
This proposed research addresses the pressing concern of climate change's impact on Langkawi's fishing industry. While the research is yet to be conducted, it offers a blueprint for comprehending the complex relationship between climate change and the financial health of this vital sector. Its central aim is to conduct a comprehensive examination of the financial risks that climate change poses, providing valuable insights for decision-making and strategic planning. The research design employs a structured approach encompassing quantitative analysis, time series assessments, scenario modelling, and spatial analyses, all tailored to effectively address the research objectives. Anticipated outcomes include quantifying climate-induced financial risks, identifying influential climate variables, and proposing potential adaptation strategies for the industry. The findings, once realized, are poised to directly inform policy development, benefiting policymakers, industry stakeholders, and local communities. This knowledge will support the creation of adaptive strategies and policies that enhance the sector's resilience in the face of climate change. Recognizing potential limitations and challenges, including data availability and quality, the validity of assumptions, and the inherent uncertainty in modelling future climate scenarios, this research proposal will evolve into a comprehensive research endeavor. Its overarching goal is to empower informed decisions that mitigate risks and foster the sustainability of Langkawi's fishing sector in an era of climate change. The insights and recommendations emerging from this study offer the potential to positively impact Langkawi's community while contributing to broader discussions on environmental and economic sustainability.
Introduction
Climate change is recognized as one of the most critical environmental challenges facing the world today, with potentially significant economic, social, and environmental impacts (IPCC, 2021). Malaysia, like other countries, is particularly vulnerable to the impacts of climate change, with rising temperatures, changing rainfall patterns, and sea-level rise expected to have adverse consequences for various sectors, including agriculture, forestry, and fisheries (MOE, 2018). Langkawi, located in the northern region of Malaysia, is one of the country's most prominent tourism and fishing hubs, with the industry playing a crucial role in the local economy (Abdullah et al., 2019). However, the fishing industry in Langkawi is facing a range of challenges, including overfishing, habitat degradation, and climate change impacts (Azraai et al., 2020). There is a pressing need to understand the financial risks associated with climate change in the fishing industry in Langkawi and to identify strategies to mitigate these risks. This research proposal aims to assess the financial risks associated with climate change in the fishing industry in Langkawi, Malaysia. The research will focus on the economic impacts of climate change on the fishing industry, including changes in fish stocks, fishing revenues, and production costs. The research will also explore potential adaptation and mitigation strategies that can help to reduce the financial risks associated with climate change in the fishing industry in Langkawi, Malaysia.
Problem Statement
The fishing industry in Langkawi, Malaysia, is a critical component of the local economy, providing livelihoods for thousands of people (Abdullah et al., 2019). However, the industry is facing significant challenges, including overfishing, habitat degradation, and the impacts of climate change (Azraai et al., 2020). Climate change is expected to have a range of adverse effects on the fishing industry, including changes in water temperature, ocean acidification, and sea-level rise, which can result in a decline in fish populations and changes in fishing grounds (IPCC, 2021). These impacts could have significant financial implications for the fishing industry in Langkawi, with potential consequences for the local economy and livelihoods. Despite the potential financial risks associated with climate change in the fishing industry in Langkawi, there is a lack of research on this issue. While some studies have examined the impacts of climate change on fisheries in Malaysia, few have focused specifically on Langkawi (Azraai et al., 2020). The research that has been conducted has tended to focus on the ecological impacts of climate change, rather than the financial risks associated with these impacts. This research proposal aims to fill this gap by assessing the financial risks associated with climate change in the fishing industry in Langkawi, Malaysia. The proposed research will address the following research questions: (1) What are the financial risks associated with climate change in the fishing industry in Langkawi? (2) How are these risks affecting the financial performance of fishing companies in Langkawi? (3) What strategies can the fishing companies adopt to mitigate these risks? The research will be conducted using a case study approach, focusing on a sample of fishing companies operating in Langkawi. The findings of this research will contribute to a better understanding of the financial risks associated with climate change in the fishing industry in Langkawi and will provide guidance for policymakers, industry stakeholders, and investors on developing strategies to mitigate these risks.
Expected Contribution
This proposed research is expected to make a significant contribution to the fishing industry in Langkawi, Malaysia in several ways: (1) Identifying Climate Change Risks: The research will provide an in-depth analysis of the specific climate change risks that the fishing industry in Langkawi faces.This can include threats such as changing sea temperatures, ocean acidification, extreme weather events, and shifts in marine ecosystems.By identifying these risks, the research can help stakeholders in the fishing industry better understand the challenges they are likely to encounter in the future.(2) Quantifying Financial Impacts: By assessing the financial risks associated with climate change, the research can estimate the potential economic losses and costs that the fishing industry may incur.This information is crucial for decision-makers, including government agencies, local authorities, and fishing industry businesses, to develop strategies for risk mitigation and adaptation.(3) Sustainability Strategies: The research can offer insights into sustainable practices and strategies that the fishing industry in Langkawi can adopt to mitigate the adverse financial impacts of climate change.This may include recommendations for sustainable fishing practices, diversification of income sources, and investments in more resilient infrastructure.(4) Policy Recommendations: The research can provide a basis for policy recommendations aimed at safeguarding the fishing industry in Langkawi against climate change-related financial risks.This may involve proposing regulations, incentives, and support mechanisms to help the industry adapt to changing environmental conditions.(5) Community Resilience: Climate change impacts can have broader implications for local communities in Langkawi that rely on the fishing industry.The research can shed light on how these communities can build resilience and adapt to changing circumstances, potentially offering recommendations for community-based initiatives and support systems.(6) International Relevance: Given the global nature of climate change, the research in Langkawi can serve as a case study with implications beyond its specific location.It can contribute to a broader understanding of how climate change affects fishing industries in various regions and highlight commonalities and differences in the financial risks they face.In summary, this research has the potential to be a valuable resource for the fishing industry in Langkawi, offering a comprehensive assessment of financial risks associated with climate change and providing guidance on how to adapt, mitigate losses, and promote sustainability in the face of these challenges.
Literature Review
Climate change is one of the most significant global challenges that the world is currently facing.It has the potential to cause widespread and severe impacts on various sectors of the economy, including the financial sector.This literature review section seeks to explore the relationship between financial risks and climate change by examining the existing literature on the subject.The Intergovernmental Panel on Climate Change (IPCC) defines financial risks as "the potential for unanticipated financial losses arising from exposure to climate-related hazards" (IPCC, 2018).The IPCC further identifies two types of financial risks associated with climate change: physical risks and transition risks.Physical risks refer to the risks associated with the physical impacts of climate change, such as sea-level rise, extreme weather events, and other climaterelated disasters.Transition risks, on the other hand, refer to the risks associated with the transition to a low-carbon economy, such as policy changes, technological advancements, and shifts in market preferences.Climate change is a global phenomenon that has far-reaching consequences for the economy, environment, and society.In recent years, there has been an increasing recognition of the financial risks associated with climate change, particularly for vulnerable sectors such as fisheries (Dulal et al., 2019).Langkawi, a popular tourist destination located in northern Malaysia, is heavily reliant on its fishing industry, which contributes significantly to the local economy.However, the effects of climate change, including sea level rise, increased frequency of storms and typhoons, and ocean acidification, are threatening the sustainability and profitability of the fishing industry in Langkawi.This research proposal seeks to assess the financial risks associated with climate change in Langkawi's fishing industry, to inform the development of effective adaptation and mitigation strategies.The impacts of climate change on the fishing industry have been widely documented in the literature.The Food and Agriculture Organization (FAO, 2018) reports that fisheries and aquaculture are highly vulnerable to climate change, with potential consequences including changes in fish stock abundance and distribution, altered fish migration patterns, and reduced fish growth rates.These impacts can have significant economic implications, as the fishing industry supports millions of people globally, particularly in developing countries.For example, a study conducted in Ghana found that climate variability and change had a significant negative impact on the income and livelihoods of artisanal fishers (Kankam-Yeboah & Biney, 2020).Numerous studies have examined the financial risks associated with climate change, particularly in the context of the financial sector.For example, a study by the Bank of England (2015) found that climate change posed significant risks to the stability of the financial system, including risks to insurance companies, banks, and pension funds.The study also highlighted the potential for stranded assets, as investments in high-carbon assets become stranded due to changes in policy and market preferences.In addition to the risks posed to the financial sector, climate change also poses risks to individual businesses and industries.For example, a study by S&P Global Ratings (2019) found that climate change posed significant risks to the energy, transport, and agriculture sectors, as well as to coastal communities.The study highlighted the potential 
for physical risks, such as damage to infrastructure and property, as well as transition risks, such as changes in regulations and market demand.The literature also suggests that the financial risks associated with climate change are not evenly distributed.Vulnerable populations, such as low-income communities and small businesses, are likely to be disproportionately impacted by climate change (IPCC, 2018).This is particularly true for developing countries, where limited resources and infrastructure can exacerbate the impacts of climate change.For example, a study by Kankam-Yeboah and Biney (2020) found that climate variability and change had a significant negative impact on the income and livelihoods of artisanal fishers in Ghana.In the context of the financial risks associated with climate change, risk management has become increasingly important for the fishing industry.Dulal et al. (2019) conducted a review of the literature on financial risk management in the fishing industry and found that most studies focused on the use of insurance as a risk management tool.However, insurance may not be a viable option for small-scale fishers in developing countries, who often lack the financial resources to purchase insurance policies.Other risk management strategies, such as diversification of income sources and investment in more resilient fishing equipment, have been suggested as alternatives.In terms of the specific case of Langkawi, there has been limited research on the financial risks associated with climate change in the fishing industry.However, the Government of Malaysia (2019) has acknowledged the vulnerability of the country's fisheries sector to climate change, particularly in the context of changing ocean currents and sea surface temperatures.A study conducted in the neighboring country of Indonesia found that small-scale fishers were experiencing negative impacts from climate change, including reduced fish catch and increased operating costs (Nurdin et al., 2021).It is likely that similar impacts are being felt in Langkawi, given its geographical proximity to Indonesia and similar dependence on the fishing industry.Overall, the literature suggests that the financial risks relationship with climate change in the fishing industry are significant and require urgent attention from policymakers and industry stakeholders.The case of Langkawi presents a particularly interesting and important case study, given its reliance on the fishing industry and exposure to climate change risks.By assessing the financial risks associated with climate change in Langkawi's fishing industry, this research proposal aims to contribute to the development of effective adaptation and mitigation strategies that can help protect the livelihoods of fishers and support the sustainability of the fishing industry in the face of a changing climate.
Methodology
To achieve a robust understanding of the financial risks stemming from climate change in the Langkawi fishing industry, a structured research design and data analysis process have been meticulously developed.This section provides a detailed account of the research design, data collection, preprocessing, analysis techniques, and the overall approach used to address the research objectives.The methodology's aim is to ensure the validity, reliability, and credibility of the research findings and to facilitate a comprehensive assessment of the financial implications of climate change in the context of Langkawi's fishing sector.Here is an explanation of the research design for this study:
Research Type
This research primarily employs quantitative methods to analyze and measure the financial risks associated with climate change.Quantitative research allows for the precise examination of relationships and statistical assessments of the data.
Research Approach
Cross-Sectional and Longitudinal Analysis: The research design includes both cross-sectional and longitudinal elements.It involves a cross-sectional analysis of current financial data and a longitudinal analysis of historical data to understand trends and patterns in the Langkawi fishing sector.
Sampling
Representative Sample: The study includes a representative sample of fishing industry stakeholders in Langkawi.This sample encompasses a variety of fishing activities, including small-scale and large-scale operations, to ensure a comprehensive assessment of the sector.
Variables
To achieve the objective of this proposed research, several key variables will be considered. These variables, obtained from the sources listed in Table 1, are essential for assessing the financial risks associated with climate change in the Langkawi fishing sector and for conducting the data analysis outlined in the methodology. Expected sources include government publications, industry reports, and interviews with stakeholders.
Historical Data (Time Series Data): historical observations of all the variables mentioned above, allowing for trend analysis and the assessment of changes over time. Expected sources: government archives, industry records, scientific studies, and historical climate data.
Data Collection
Multi-Source Data.As mentioned in the above section, data will be collected from various sources, including government agencies, environmental organizations, fishing industry records, and meteorological databases.This multi-source approach ensures a comprehensive dataset to address the research objectives.
Data Analysis
Data Cleaning and Transformation: Raw data will be subjected to thorough data cleaning to address missing values, outliers, and inconsistencies. Any data that cannot be rectified will be documented. Variables are transformed as needed to meet the assumptions of statistical analysis. Transformation methods, such as log transformations or normalization, will be applied to ensure data reliability.
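As a concrete illustration of the cleaning and transformation steps described above, the following sketch shows one plausible pipeline: documenting and imputing missing values, capping outliers, and log-transforming a skewed revenue variable before standardization. The column names, example values, and thresholds are hypothetical, since the actual choices would depend on the data ultimately collected.

```python
# Illustrative cleaning/transformation pipeline; column names, values, and thresholds are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "sea_surface_temp": [29.1, 29.4, np.nan, 30.2, 31.0, 29.8],
    "monthly_catch_tonnes": [120, 95, 210, np.nan, 60, 4000],     # 4000 is an implausible outlier
    "fishing_revenue_myr": [55_000, 43_000, 91_000, 38_000, 27_000, 480_000],
})

# 1. Document and impute missing values (median imputation as a simple default)
missing_report = df.isna().sum()
df = df.fillna(df.median(numeric_only=True))

# 2. Cap outliers at the 1st/99th percentiles (winsorization)
for col in ["monthly_catch_tonnes", "fishing_revenue_myr"]:
    lo, hi = df[col].quantile([0.01, 0.99])
    df[col] = df[col].clip(lo, hi)

# 3. Log-transform the skewed revenue variable, then z-score standardize all columns
df["log_revenue"] = np.log1p(df["fishing_revenue_myr"])
standardized = (df - df.mean()) / df.std(ddof=0)

print(missing_report)
print(standardized.round(2))
```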
Potential Limitations
As we propose the research endeavor titled "Analyzing Climate-Induced Financial Risks in the Langkawi Fishing Sector," it's essential to anticipate potential limitations that may arise during the research process: Data Availability and Quality: One of the primary challenges we may face is the availability and quality of historical and current data.Access to comprehensive and reliable climate and financial data, specific to the Langkawi fishing sector, could present limitations.It's essential to identify potential data sources and assess their data completeness and accuracy.Assumption Validity: The proposed research will involve various assumptions, such as the stability of historical relationships between climate variables and financial indicators.The validity of these assumptions, especially in the context of changing climate patterns, is a potential limitation that requires careful consideration.Modeling Uncertainty: We anticipate that the quantitative models used in the analysis might introduce uncertainty.These models are based on historical data and projected scenarios, and they may not perfectly reflect the actual future climate change impacts in Langkawi.Generalizability: While the study aims to provide valuable insights for the Langkawi fishing sector, the extent to which these findings can be generalized to other regions with different ecological and economic contexts is a limitation.Each location has its unique set of circumstances that necessitates tailored assessments.While these potential limitations are recognized at the proposal stage, they will be addressed, and mitigating strategies will be put in place as part of the research design and methodology.Acknowledging these potential constraints helps in shaping the research approach to enhance the validity and reliability of the study's findings.
Conclusion
This proposed research seeks to investigate the impact of climate change on the financial well-being of the fishing industry in Langkawi.Although the research is yet to be conducted, the proposal outlines a critical avenue of study that holds immense promise.The motivation behind this research proposal stems from recognizing the profound influence of climate change on the Langkawi fishing sector.We believe that examining climate-induced financial risks can provide valuable insights for better decision-making and strategic planning.Our proposed research design entails a comprehensive and structured approach.It combines quantitative analysis, time series evaluation, scenario modeling, and spatial assessments to effectively address our research objectives.Although the research has not yet taken place, we expect to gain significant insights, including the quantification of financial risks associated with climate change, the identification of climate factors with the most substantial impact, and potential adaptation strategies for the fishing industry.The findings, once realized, are poised to have direct policy implications.Policymakers, industry stakeholders, and local communities can benefit from this knowledge to formulate adaptive strategies and policies that enhance the sector's resilience.As we transition from the proposal to the research phase, our objective remains clear: to contribute to the understanding of how climate change affects financial risk in Langkawi's fishing sector.We aim to create a foundation for well-informed decisions, enabling proactive measures to mitigate risks and foster the sector's sustainability amidst climate change.
Table 1: List of Variables, Definitions, and Expected Sources (columns: Variable; Potential Sources of Data).
Descriptive Analysis: Descriptive statistics, such as measures of central tendency and variability, are used to summarize the data. Visualizations like histograms and time series plots are employed to explore data patterns. Time Series Analysis: Time series analysis helps detect trends, seasonality, and cyclic patterns in climate variables and financial indicators. Correlation coefficients are calculated to examine relationships between climate variables and financial outcomes. Multiple regression models are used to model the impact of climate change variables on financial performance. Risk Assessment Models: Quantitative financial risk assessment models, such as Value at Risk (VaR) and other models, will be used to estimate potential financial losses under different climate change scenarios. Scenario Analysis: Different climate change scenarios are considered, simulating various environmental conditions and their potential impact on the Langkawi fishing industry. These scenarios are based on established climate change projections. Spatial Analysis: Geographic Information Systems (GIS) tools may be used to assess spatial variations in climate-induced financial risks within Langkawi, allowing for a spatial dimension to the research. This research design incorporates a structured and systematic approach to examining climate-induced financial risks in the Langkawi fishing sector. By combining cross-sectional and longitudinal data, various analysis techniques, and multi-source data collection, the design aims to provide a comprehensive understanding of the challenges and opportunities facing the fishing industry in the context of climate change.
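To make the analysis chain above concrete, the sketch below wires together the correlation, regression, historical Value at Risk, and scenario steps on synthetic monthly data. The variable names, the regression specification, the 95% historical-VaR convention, and the +1 degree C scenario are illustrative assumptions, not results or settings of the proposed study.

```python
# Illustrative analysis chain: correlation -> regression -> historical VaR -> scenario shift.
# All data are synthetic and all modelling choices are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 120                                            # ten years of monthly observations
sst_anomaly = rng.normal(0.0, 0.5, n)              # sea surface temperature anomaly (deg C)
storm_days = rng.poisson(2.0, n)                   # storm days per month
revenue = 100 - 8 * sst_anomaly - 3 * storm_days + rng.normal(0, 5, n)   # revenue index

# 1. Correlation between a climate variable and revenue
print("corr(SST anomaly, revenue):", np.corrcoef(sst_anomaly, revenue)[0, 1].round(2))

# 2. Multiple linear regression of revenue on the climate variables (ordinary least squares)
X = np.column_stack([np.ones(n), sst_anomaly, storm_days])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print("OLS coefficients [intercept, SST, storms]:", beta.round(2))

# 3. Historical 95% Value at Risk of month-over-month revenue changes
changes = np.diff(revenue)
var_95 = -np.percentile(changes, 5)                # loss exceeded in only 5% of months
print("95% historical VaR of monthly revenue change:", round(var_95, 1))

# 4. Simple scenario analysis: shift the SST anomaly by +1 deg C and re-predict revenue
scenario_revenue = X @ beta + beta[1] * 1.0
print("mean revenue, baseline vs +1C scenario:",
      round(revenue.mean(), 1), round(scenario_revenue.mean(), 1))
```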
Behavioral and Socio-Economic Factors: Understanding the behavioral and socio-economic dimensions of the fishing industry is crucial for a comprehensive analysis of climate-induced financial risks.Limitations may arise in capturing and analyzing these factors, as the proposed research primarily focuses on quantitative aspects.Complexity of Climate Interactions: Climate-induced financial risks result from intricate interactions among a multitude of factors, and simplifications are necessary for analysis.These complexities may lead to the omission of nuanced factors, representing a potential limitation.Future Climate Scenarios: The research relies on climate change projections for future scenarios.Uncertainty associated with these projections could affect the accuracy of the results.The study should acknowledge the potential variation in actual future climate change impacts. | 4,594.8 | 2023-09-20T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
AlphaPeptStats: an open-source Python package for automated and scalable statistical analysis of mass spectrometry-based proteomics
Abstract Summary: The widespread application of mass spectrometry (MS)-based proteomics in biomedical research increasingly requires robust, transparent, and streamlined solutions to extract statistically reliable insights. We have designed and implemented AlphaPeptStats, an inclusive Python package, currently with broad functionalities for normalization, imputation, visualization, and statistical analysis of label-free proteomics data. It modularly builds on the established stack of Python scientific libraries and is accompanied by a rigorous testing framework with 98% test coverage. It imports the output of a range of popular search engines. Data can be filtered and normalized according to user specifications. At its heart, AlphaPeptStats provides a wide range of robust statistical algorithms such as t-tests, analysis of variance, principal component analysis, hierarchical clustering, and multiple covariate analysis—all in an automatable manner. Data visualization capabilities include heat maps, volcano plots, and scatter plots in publication-ready format. AlphaPeptStats advances proteomic research through its robust tools that enable researchers to manually or automatically explore complex datasets to identify interesting patterns and outliers. Availability and implementation: AlphaPeptStats is implemented in Python and part of the AlphaPept framework. It is released under a permissive Apache license. The source code and one-click installers are freely available on GitHub at https://github.com/MannLabs/alphapeptstats.
Introduction
Mass spectrometry (MS)-based proteomics has emerged as a powerful tool in biomedical research (Aebersold and Mann 2016). The rapid development of platforms and algorithms allows the identification and quantification of proteins with ever greater depth and precision. These workflows and search engines produce tables of identified and quantified proteins, which then require rigorous statistical methods to identify robust patterns and potentially biologically interesting outliers.
To date, we and others have developed popular applications, such as MSstats (Choi et al. 2014), Perseus (Tyanova et al. 2016), and MSPypeline (Heming et al. 2022) for the downstream analysis of proteomics data. While these tools mostly cover the required steps in the analysis pipeline, they can be limited in the search engines they support, access to the source code, test coverage, automation, and the ability to easily implement the latest algorithms. Furthermore, some of their functionality can readily be leveraged by domain experts, but this is more challenging for non-experts who need to integrate biological knowledge and contextualize the findings. This constitutes the need for an easy-to-use, rigorous, and robust tool to maximize the biological insight that can be extracted from quantitative proteomics data.
The AlphaPeptStats library
As part of our recently developed AlphaPept framework (Strauss et al. 2021, Voytik et al. 2022, Zeng et al. 2022), we implemented AlphaPeptStats in Python because of its straightforward syntax and the availability of high-quality scientific libraries. AlphaPeptStats is built on top of highly performant, widely used, and community-tested computing packages such as NumPy (Harris et al. 2020), Plotly (https://plotly.com/), Pandas (McKinney 2010), and SciPy (Virtanen et al. 2020). We additionally implemented state-of-the-art bioinformatic libraries, such as diffxpy from the Scanpy package for differential expression analysis (Wolf et al. 2018), a GO tool for enrichment analysis with gene ontology (GO) terms, tailored for MS (Schölz et al. 2015), and pyteomics, allowing among other things the import of various proteomics data formats (Goloborodko et al. 2013, Levitsky et al. 2018).
The AlphaPeptStats source code is freely available on GitHub under the permissive Apache license. The package can readily be installed from PyPI using the standard pip module. Additionally, we provide one-click installers for Linux, MacOS, and Windows and a Dockerfile for containerized deployment, e.g. in cloud environments. Furthermore, automated postprocessing workflows can be created in AlphaPeptStats. This can also be used to systematically iterate over available options such as different normalization methods to identify best-performing ones.
Additionally, we have deployed a web-based version of AlphaPeptStats that is hosted on Streamlit-share (see GitHub repository for the link) from Streamlit (https://streamlit.io/). This enables users to explore and use AlphaPeptStats without requiring the installation of any software.
Proteomics has a long history of open-source tools, such as the Trans-Proteomic Pipeline (Deutsch et al.), the software of Millikin et al. (2018), and Proline (Bouyssié et al. 2020). We designed AlphaPeptStats with best software engineering practices in mind, including continuous integration pipelines on GitHub, ensuring that the software is continuously tested and verified. To monitor the test coverage of our library, we employed Codecov (https://about.codecov.io), which allows us to gauge the percentage of source code executed by the test suite.
Our extensive testing framework reports a test coverage of 98%, providing confidence in the accuracy and reliability of the software, in line with standard packages such as NumPy or Pandas.
Extensive documentation of the AlphaPeptStats functionalities was a key part of this project and includes several Jupyter notebooks that serve as tutorials to guide novice users. These notebooks are designed to encourage user engagement and offer a step-by-step approach to learning the package. Alternatively, the graphical user interface is straightforward to learn while allowing quick and easy data exploration.
Overview of the AlphaPeptStats workflow
At present, AlphaPeptStats is already capable of importing and processing label-free proteomics data generated from multiple software platforms, including MaxQuant (Cox and Mann 2008), AlphaPept, DIA-NN (Demichev et al. 2020), Spectronaut (Bruderer et al. 2015), and the FragPipe computational framework (Kong et al. 2017, Teo et al. 2021, Yu et al. 2020, da Veiga Leprevost et al. 2020). Additionally, it supports the mzTab data exchange format for proteomics experiments (Griss et al. 2014) (Fig. 1A). The modular design of the import functions allows straightforward extensions to other data formats.
Users are required to specify their proteomics results file and accompanying metadata. AlphaPeptStats provides a high-level API by storing data in a Python class named DataSet, with multiple methods ranging from data preprocessing and statistical analysis to GO analysis and visualization. The latter can export vector graphics for subsequent use in publications. An overview of the processing steps in AlphaPeptStats is provided in Fig. 1B as well as in Supplementary Table S1.
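A typical session with the DataSet API might look like the sketch below. The loader class, argument names, and method names are written to match the description in the text and may not correspond exactly to the package's current interface; treat them as assumptions and consult the GitHub documentation for the authoritative API.

```python
# Hypothetical AlphaPeptStats session; class and method names follow the text above
# and are assumptions, not a verified copy of the package's documented API.
import alphastats  # pip install alphastats

# Load a MaxQuant proteinGroups table together with sample metadata
loader = alphastats.MaxQuantLoader(file="proteinGroups.txt")
ds = alphastats.DataSet(
    loader=loader,
    metadata_path="metadata.xlsx",
    sample_column="sample",
)

# Preprocessing: contaminant removal, normalization, imputation (choices are stored in the DataSet)
ds.preprocess(remove_contaminations=True, normalization="quantile", imputation="knn")

# Statistics and visualization
pca = ds.plot_pca(group="disease_group")
volcano = ds.plot_volcano(group1="NAFLD", group2="control", method="ttest")
volcano.write_image("volcano.svg")   # publication-ready vector graphic via Plotly
```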
Preprocessing
After loading the data into a DataSet object, the user can select multiple optional preprocessing steps, ranging from the removal of contaminants and normalization to imputation. For contaminant removal, AlphaPeptStats uses a recently developed library (Frankenfield et al. 2022) to help decrease false discoveries. In addition, AlphaPeptStats incorporates various normalization and imputation techniques to facilitate robust and accurate data analysis. One of the methods that we integrated, random forest imputation, has demonstrated superior performance compared to other commonly used imputation methods in several studies (Jin et al. 2021, Kokla et al. 2019). This algorithm was imported from Scikit-Learn, demonstrating how easily state-of-the-art methods can be added to AlphaPeptStats.
Importantly, all selected preprocessing steps are stored in the DataSet object, ensuring reproducibility. Different normalization and imputation methods can be systematically assessed as AlphaPeptStats can iterate through them automatically by means of passing a single parameter to the plotting functions.
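Because the random forest imputation is imported from scikit-learn, the same behaviour can be reproduced outside the package. The sketch below applies scikit-learn's IterativeImputer with a random forest estimator to a small intensity matrix; the example values and hyperparameters are illustrative, and whether AlphaPeptStats uses exactly these settings is an assumption.

```python
# Random forest-based imputation of missing protein intensities with scikit-learn.
# Data and hyperparameters are illustrative; AlphaPeptStats' exact settings may differ.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Rows = samples, columns = proteins; np.nan marks missing (non-detected) intensities
intensities = np.array([
    [21.3, 18.9, np.nan, 25.1],
    [20.8, np.nan, 23.4, 24.7],
    [21.1, 19.2, 23.9, np.nan],
    [np.nan, 18.5, 23.1, 24.9],
])

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    random_state=0,
)
imputed = imputer.fit_transform(intensities)
print(np.round(imputed, 2))
```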
Visualization and statistical analysis
Users can visualize their results via dedicated functions that allow the straightforward interpretation of the data, including principal component analysis plots, heatmaps, dendrograms, and volcano plots. Figures can be exported as publication-ready scalable vector graphics. AlphaPeptStats leverages the capabilities of the Plotly graphing library, producing interactive and zoomable graphs by default and enabling advanced users to tailor the generated figures to their specific needs and preferences.
Statistical testing for differential expression analysis can be performed using Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), or t-tests. We further provide a reimplementation from R of the significance analysis of microarrays (SAM), a very widely used algorithm in proteomics (Tusher et al. 2001). Significantly expressed proteins can then be subjected to GO annotation.
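For readers who want to see what the t-test route amounts to in code, the sketch below runs Welch t-tests across a synthetic protein intensity matrix and applies Benjamini-Hochberg correction with statsmodels, which is one common convention. It illustrates the generic approach only; the specific test settings AlphaPeptStats uses internally should not be inferred from this example.

```python
# Per-protein Welch t-tests between two groups with Benjamini-Hochberg FDR correction.
# Synthetic intensities; this shows the generic approach, not AlphaPeptStats internals.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_proteins = 200
group_a = rng.normal(20.0, 1.0, size=(n_proteins, 6))   # log2 intensities, 6 samples per group
group_b = rng.normal(20.0, 1.0, size=(n_proteins, 6))
group_b[:20] += 1.5                                      # first 20 proteins truly up-regulated

t_stat, p_values = stats.ttest_ind(group_a, group_b, axis=1, equal_var=False)
reject, q_values, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

log2_fc = group_b.mean(axis=1) - group_a.mean(axis=1)
print("proteins passing 5% FDR:", int(reject.sum()))
print("example (protein 0): log2FC=%.2f, q=%.3g" % (log2_fc[0], q_values[0]))
```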
Graphical user interface
As AlphaPeptStats is a Python library it can be imported and used in any Python program, scripts, or Jupyter Notebooks. As mentioned, figures are produced by the incorporated Plotly library, making graphs interactively explorable.
Furthermore, the popular Streamlit library provides even easier access to AlphaPeptStats functionalities and output for non-coders. In this case, the graphical user interface of AlphaPeptStats enables users to directly select functions and analyze their data in a browser-based environment.
Application of AlphaPeptStats
To illustrate the capabilities of AlphaPeptStats, we applied it to our recently published study of non-alcoholic liver disease (Niu et al. 2019). AlphaPeptStats facilitated a comprehensive downstream analysis, including preprocessing, data visualization, and extended biomarker discovery across different disease groups, all in a simple Jupyter notebook format. Additionally, we assessed the performance of our library using a standardized spiked proteomics dataset (Ramus et al. 2016). This standardized spiked proteomics dataset allowed us to evaluate the performance of AlphaPeptStats with simulated and ground truth data. Our analysis confirmed that random forest performed best, whereas mean imputation, for instance, led to a higher percentage of false positives (best Area Under the Curve (AUC) 1.0 versus 0.904). Analyses were performed with AlphaPeptStats version 0.6.2 and accompanying notebooks can be found as Supplementary Notebook S1 and Supplementary Notebook S2.
Conclusion
We developed AlphaPeptStats, a user-friendly, open-source package dedicated to the protein-centric downstream analysis of mass spectrometry data, covering all steps from preprocessing and statistical analysis to visualization. Apart from stand-alone use, it can also easily be incorporated into automated bioinformatics pipelines. It features extensive tests and robust software engineering design principles on GitHub, such as continuous testing and continuous integration, to ensure a stable and reliable workflow. Its modular framework allows extensions with additional functionality, such as the analysis of isotopically labeled data. We envision that AlphaPeptStats will be a suitable standard for statistical analysis and exploration of the challenging proteomics datasets that can readily be produced today.
Figure 1. (A) AlphaPeptStats relies on several community-tested packages. It supports the import of AlphaPept, DIA-NN, MaxQuant, Spectronaut, and FragPipe files and also enables import of data in the generic mzTab format. It can be interfaced with a GUI via Streamlit or can be used as a Python library, e.g. by loading and using it in Jupyter Notebooks. The source code is publicly available on GitHub, with GitHub Actions being used for continuous integration (CI) and continuous delivery (CD). The application with GUI can easily be installed via a one-click installer or deployed via Docker, ensuring flexibility and portability. For efficient data processing, analysis, and visualization, AlphaPeptStats utilizes various scientific computing packages, such as scikit-learn and Plotly, in addition to other relevant tools. (B) Symbolic display of the graphical user interface of AlphaPeptStats, depicting its step-wise workflow and highlighting its comprehensive functionalities enabling meaningful interpretation of data.
| 2,424.8 | 2023-08-01T00:00:00.000 | [
"Computer Science"
] |
Co-infection of TYLCV and ToCV increases cathepsin B and promotes ToCV transmission by Bemisia tabaci MED
Tomato disease is an important problem affecting agricultural production, and the combined infection of tomato chlorosis virus (ToCV) and tomato yellow leaf curl virus (TYLCV) has gradually expanded in recent years, but no effective control method has been developed to date. Both viruses are transmitted by Bemisia tabaci Mediterranean (MED). Previously, we found that after B. tabaci MED fed on plants infected with both ToCV and TYLCV, the transmission efficiency of ToCV was significantly higher than that on plants infected only with ToCV. Therefore, we hypothesized that co-infection could enhance the transmission rates of the virus. In this study, transcriptome sequencing was performed to compare the changes of related transcription factors in B. tabaci MED exposed to co-infection with ToCV and TYLCV versus infection with ToCV alone. Transmission experiments were then carried out using B. tabaci MED to clarify the role of cathepsin in virus transmission. The gene expression level and enzyme activity of cathepsin B (Cath B) in B. tabaci MED exposed to co-infection with ToCV and TYLCV increased compared with those under ToCV infection alone. After cathepsin activity in B. tabaci MED was inhibited or cathepsin B was silenced, its ability to acquire and transmit ToCV was significantly reduced. We verified the hypothesis that reducing the relative expression of cathepsin B helped reduce ToCV transmission by B. tabaci MED. Therefore, we speculate that cathepsin is of considerable significance for the control of B. tabaci MED and the spread of viral diseases.
Introduction
Tomato chlorosis virus (ToCV) belongs to the genus Crinivirus in the family Closteroviridae (Accotto et al., 2001;Wintermantel and Wisler, 2006). It was first discovered in Florida in the United States in the mid-1990s and subsequently spread worldwide (Dalmon et al., 2005;Hirota et al., 2010;Fiallo-Olivé et al., 2011;Vargas et al., 2011;Arruabarrena et al., 2014). To date, ToCV is known to infect 25 species of plants across 7 families (Sun et al., 2016). Once present in the tomato field, ToCV infection in tomatoes can frequently reach 100% (Fiallo-Olivé and Navas-Castillo, 2019), with ToCV infection in early-stage tomato plants causing yield loss of up to 76% (Farina et al., 2019). Additionally, co-infections occur with viruses of different genera, such as members of the genus Begomovirus (TYLCV), genus Orthotospovirus (TSWV), and others (Garcia-eano et al., 2006;Alfaro-Fernandez et al., 2010). The co-infection has been known to cause crop yield reduction over a large area with serious economic losses to agricultural production (Ding et al., 2019).
TYLCV belongs to the genus Begomovirus in the family Geminiviridae (Fauquet et al., 2005), and it was first discovered in Israel (Cohen and Harpaz, 1964), and then gradually spread to the Middle East, Mediterranean coast, Africa, Asia, and other places (Cohen and Antignus, 1994;Czosnek and Laterrot, 1997;Polston et al., 1999;Ueda et al., 2005). The virus is characterized by its high virulence, which makes it highly prevalent in a field suffering from tomato virus regiments (Pan et al., 2012). It has a wide range of host plants and a strong adaptability, as indicated by its high frequency of gene variation (Papayiannis et al., 2011;Shirazi et al., 2014). These factors make it difficult to prevent and control TYLCV. TYLCV infection on tomato plants can reduce tomato production greatly, especially when a breakout occurs (Kanakala and Ghanim, 2016;Prasad et al., 2020).
Bemisia tabaci, an insect belonging to Homoptera: Aleyrodidae, is a herbivorous agricultural pest with piercing mouthparts that primarily consumes tomato leaves (Laurence, 1962). Bemisia tabaci has become an important insect to monitor due to its role as a vector for viral disease. Bemisia tabaci transmits ToCV in a semi-persistent way (Wintermantel and Wisler, 2006; Fortes et al., 2020), while it transmits TYLCV in a persistent way (Rubinstein and Czosnek, 1997; Yan et al., 2018). In addition, these two viruses cannot be transmitted by mechanical (sap) inoculation (Gilbertson et al., 2015), which makes B. tabaci even more important as the gateway for virus infection on tomato plants.
Co-infection by two viruses on a single plant has been shown to increase virus load/titer and cause more severe symptoms to manifest on plants compared to individual viral infection as demonstrated by the study of the southern rice black-streaked dwarf virus (SRBSDV) and rice ragged stunt virus (RRSV) co-infection on rice plants . Li et al. (2017) also suggested that higher virus titers in the infected rice plants can lead to higher virus acquisition efficiency by the insect vectors, such as white-backed planthopper and brown planthopper. Other examples from the co-infection study of potato leafroll virus (PLRV) and potato virus Y (PVY) on potato plants also demonstrated that Myzus persicae and Macrosiphum euphorbiae (Homoptera: Aphididae), which are the vectors for both of the potato disease viruses, have a more efficient transmission rate when feeding on the co-infected plants than on PVY-infected plants alone (Srinivasan and Alvarez, 2007). These results indicate that synergism can improve the transmission efficiency of insect vectors and enhance the pathogenicity of the virus (Murphy and Bowen, 2006;Malik et al., 2010). Thus, it was suspected that tomato plants co-infected with ToCV and TYCLV may enhance the transmission rates of ToCV.
Among the control strategies of whitefly-transmitted viruses, RNA interference (RNAi) is one of the most important virus management methods. RNAi is a phenomenon in which small non-coding RNA (sncRNA) produced by long double-stranded RNA (dsRNA) induces efficient and specific degradation of homologous mRNAs (Mallick and Ghosh, 2012). RNAi is mainly used to block gene expression at the post-transcriptional level, inhibit translation, or promote heterochromatin formation, resulting in the inability to synthesize proteins and "gene silencing" or reduced expression levels (Czech and Hannon, 2010). RNAi technology has been widely used in agricultural control, especially for insect gene function (Fire et al., 1998).
In this study, to find out the key factors that can affect the transmission of ToCV by B. tabaci MED, we sequenced the transcriptome of B. tabaci MED that fed for 48 h on tomato plants co-infected by ToCV and TYLCV and infected only with ToCV. We found several differential genes; among them, we found a high expression of the cathepsin B gene in B. tabaci MED that fed on co-infected plants, which promoted the transmission of ToCV. Cathepsin belongs to cysteine proteases, which have a conserved active three-dimensional pocket composed of histidine, asparagine, and cysteine residues (Gasteiger et al., 2003;Hu et al., 2014). Cathepsin is known to regulate the infection and transmission of the virus. For example, cathepsin B can inhibit the acquisition of PLRV by aphids (Pinheiro et al., 2017).
We hypothesized that the relative expression of cathepsin B would be reduced after silencing cathepsin B, which would reduce ToCV transmission by B. tabaci MED. Several experiments were conducted to investigate the effects of cathepsin B on the transmission of ToCV in B. tabaci MED infected by co-infection and single infection. They are (1) comparing the ToCV accumulation in B. tabaci MED between infection groups; (2) comparing the results of the differentially expressed genes from transcriptome sequencing of B. tabaci MED between infection groups to detect changes of related transcription factors in B. tabaci MED; (3) comparing the relative expression and enzyme activity of cathepsin B in B. tabaci MED between infection groups; and (4) determining the virus acquisition and transmission efficiency of B. tabaci MED between infection groups after treatment with a cathepsin activity inhibitor and silencing cathepsin B. Our results will enable us to further understand the mechanism of virus transmission affected by the co-infection of ToCV and TYLCV and to prevent virus transmission effectively.
Plant infection confirmation
The confirmation that the tomato plants were co-infected with ToCV and TYLCV and its visual details is shown in Figure 1. The non-infected tomato plants did not manifest any symptoms ( Figures 1A,B) compared to the symptoms in the co-infected plants ( Figure 1E), which had yellowed and curled leaves ( Figure 1F). On the contrary, the ToCV-infected tomato plants showed symptoms of chlorosis and yellowing on the leaves ( Figure 1I), while the veins were still green ( Figure 1J).
The RT-PCR and PCR tests showed that the non-infected plants were not infected with ToCV ( Figure 1C) and TYLCV ( Figure 1D), respectively. The RT-PCR and PCR tests also confirmed that the tomato plants become co-infected with ToCV and TYLCV after whitefly inoculation, as indicated by the similarity between the target band and the Beijing tomato ToCV isolate (KC887999.1) ( Figure 1G) and the Shanghai TYLCV isolates ( Figure 1H), which both reached 99%. The target DNA fragment of ToCV was 466 bp, and the target DNA fragment of TYLCV was 1,606 bp. These co-infected plants were used in the subsequent experiments.
For ToCV confirmation, the RT-PCR confirmed that tomato plants were infected with ToCV, but not TYLCV 30 days after whitefly inoculation, as indicated by the similarity between the target band and the Beijing tomato ToCV isolate (KC887999.1) that reached 99.0% ( Figure 1K) and the absence of TYLCV ( Figure 1L). The target DNA fragment of ToCV was 466 bp. These ToCV-infected plants were used in the subsequent experiments.
ToCV accumulation in Bemisia tabaci MED and tomato plants
At 48 h, the acquisition rate of ToCV in B. tabaci MED fed on ToCV-infected tomato plants was 20% lower than that in B. tabaci MED fed on co-infected tomato plants (Figure 2A; F(1,100) = 4.315, p < 0.01). The ToCV accumulation in B. tabaci MED fed on ToCV-infected tomato plants after 48 h was 1.59 × 10^7 copies/μL, whereas the ToCV accumulation in B. tabaci MED fed on co-infected tomato plants was 6.79 × 10^7 copies/μL, i.e., about four times higher than the former (Figure 2B; F(1,30) = 6.546, p < 0.05).
With increasing numbers of whiteflies, the transmission rate of ToCV gradually increased. The transmission rate achieved by 10 whiteflies fed on co-infected tomato plants reached 55%, which was 20% higher than that of B. tabaci MED fed on ToCV-infected plants alone. The transmission rate of 25 whiteflies fed on co-infected tomato plants was close to 100%, while 50 whiteflies were needed to reach 100% when fed on ToCV-infected tomato plants (Figure 2C; F(1,10) = 48.4, p < 0.001). ToCV accumulation in tomato plants inoculated with singly ToCV-infected whiteflies was 1.40 × 10^9 copies/μL at 30 days after inoculation, and ToCV accumulation in plants inoculated with co-infected whiteflies reached 7.68 × 10^9 copies/μL, i.e., approximately five times higher than the former (Figure 2D; F(1,10) = 20.279, p < 0.01).
Transcriptome sequencing
2.3.1. Quality evaluation of the sequencing
The quality evaluation of the sequencing output data from each sample is shown in Supplementary Table S1. The Q30 value of genes in the combined infected groups and the single infected groups was approximately 94%; the GC content was approximately 40.5%.
DEG analysis
The FPKM distribution of the genes in the co-infection group and the single infection group is shown in Supplementary Figure S1. The DEGs of the two groups were identified by comparing two sets of data. As shown in Figure 3A, a total of 1,410 genes were differentially expressed when whiteflies were introduced to co-infected tomato plants for 48 h AAP, compared to whiteflies on ToCV-infected tomato plants for 48 h AAP, of which 506 were upregulated genes and 904 were downregulated genes.
KEGG analysis
The results showed that 1,292 DEGs had annotations indicating that these genes belonged to 139 pathways. As shown in Figure 4, the differentially expressed genes between the co-infected and ToCV-infected plants were mainly enriched in pathways involving lysosome metabolism. Compared to whiteflies on ToCV-infected tomato plants, the lysosomal pathway of whiteflies on co-infected tomato plants had the highest number of DEGs (49). Among them, cathepsins in the lysosomal pathway were significantly different.
Relative expression and enzyme activity of cathepsin B in Bemisia tabaci MED between infection groups
At 6-96 h AAP, the relative expression and enzyme activity of cathepsin B of B. tabaci MED fed on co-infected tomato plants increased compared to B. tabaci MED fed on ToCV-infected tomato plants. Compared to B. tabaci MED fed on ToCV-infected tomato plants, the relative expression of cathepsin B in B. tabaci MED fed on co-infected tomato plants differed significantly at 12, 24, 48, 72, and 96 h AAP. The relative expression of cathepsin B in B. tabaci MED fed on co-infected tomato plants was 50% higher than that in B. tabaci MED fed on ToCV-infected tomato plants at 48 h AAP (Figure 3C; F(1,28) = 16.530, p < 0.01). The enzyme activity of cathepsin B of B. tabaci MED fed on ToCV-infected tomato plants was 20% lower than that of B. tabaci MED fed on co-infected tomato plants at 48, 72, and 96 h AAP (Figure 3D; F(1,28) = 0.617, p < 0.05).
Functional verification of relative gene expression and enzyme activity
2.5.1. Effect of enzyme inhibition treatment on the transmission of ToCV by Bemisia tabaci MED
With increasing concentrations of the enzyme inhibitor E-64, the activity of cathepsin B in B. tabaci MED decreased while mortality gradually increased. When B. tabaci MED was fed a 100 μmol/L feeding solution, the activity of cathepsin B decreased by 50% (Figure 5A; F(1,28) = 47.659, p < 0.001) and the mortality of B. tabaci MED reached 30% (Figure 5B; F(4,100) = 45.157, p < 0.001).
The virus acquisition study showed that B. tabaci MED whose cathepsin was inhibited had a significantly lower rate of ToCV acquisition after 48 h AAP than the control, the control rate being 15% higher (Figure 6A; F(1,100) = 0.741, p < 0.001). Furthermore, the ToCV acquisition by cathepsin-inhibited B. tabaci MED was also lower than that of the control with normal cathepsin, the latter being about two times higher (Figure 6B; F(1,26) = 9.036, p < 0.01).
For the transmission efficiency of ToCV, the study found that cathepsin-inhibited B. tabaci MED had significantly lower transmission efficiency than the control, the control being 10% higher (Figure 6C; F(1,10) = 1.000, p < 0.01). The ToCV transmission by cathepsin-inhibited B. tabaci MED was also lower than that of the control, the latter being approximately two times higher (Figure 6D; F(1,10) = 0.293, p < 0.001).
Effect of CathB dsRNA treatment on the transmission of ToCV by Bemisia tabaci MED
The target DNA fragment of cathepsin B was 470 bp, and the target DNA fragment of GFP was 598 bp (Supplementary Figure S2).
Compared with the GFP dsRNA treatment, the acquisition of ToCV by B. tabaci MED fed with CathB dsRNA was reduced. For the transmission efficiency of ToCV, B. tabaci MED fed with CathB dsRNA had significantly lower transmission efficiency than GFP dsRNA-treated insects, the latter being 20% higher (Figure 8C; F(1,10) = 1.869, p < 0.01). The ToCV transmission by B. tabaci MED fed with CathB dsRNA was lower than that of GFP dsRNA-treated insects, the latter being approximately 1.5 times higher (Figure 8D; F(1,10) = 0.868, p < 0.001).
Discussion
TYLCV is a DNA virus with the persistent transmission, while ToCV is an RNA virus with semi-persistent transmission by the whitefly B. tabaci. The interaction mechanism between the two different transmission modes of the virus and B. tabaci might be different. Through transcriptome analysis of B. tabaci MED that fed on co-infected plants from two viruses with different infection modes and a single infection, it was found that when ToCV and TYLCV were present at the same time, the cathepsin B gene of B. tabaci MED was significantly upregulated. Meanwhile, we found that the presence of TYLCV promoted the ability of B. tabaci to acquire and transmit ToCV with higher efficiency, which indicated that the cathepsin B gene is likely to be one of the key factors regulating the transmission of ToCV by B. tabaci.
In recent years, tomato plants have shown an upward trend in the co-infection of TYLCV and ToCV (Ding et al., 2019). Potato virus Y acted as an excitation virus to promote the replication of other heterologous viruses (Goodman and Ross, 1974; Pruss et al., 1997). Co-infection of TSWV and ToCV can promote the replication of ToCV in tomato plants, thereby accelerating the death of the host plant (García-Cano et al., 2006). We found that there was an increase in the ToCV viral titer and, as a consequence, a higher acquisition efficiency of ToCV on co-infected plants than on singly infected plants. Thus, we believe that this is a case of synergistic interaction. Li et al. (2021) found that tomato plants mixed-infected with ToCV and TYLCV had a high disease severity index, and that viral accumulation in mixed-infected plants was greater than in singly infected plants (Li et al., 2021). We hypothesize that the ability of whiteflies to acquire and transmit ToCV significantly increased after the co-infection of TYLCV and ToCV through a whitefly protein interacting with ToCV proteins. The silencing suppressor P1/HC-Pro of potato virus A (PVA) significantly increased the accumulation of PLRV in the plant by suppressing the plant defense mechanism after the co-infection of PVA and potato leafroll virus (PLRV). TYLCV-infected whiteflies can facilitate virus transmission by inhibiting the plant jasmonic acid defense pathway and activating the production of the volatile neophytadiene (Shi et al., 2019a). We hypothesize that whiteflies carrying both TYLCV and ToCV could inhibit the defense pathway in tomato plants and thus promote ToCV transmission.
Cathepsins are important enzymes in physiological processes such as protein degradation, cell apoptosis, and signal transduction (Turk et al., 2001; Rossi et al., 2004; Turk et al., 2012), which can regulate the infection and transmission of viruses (Pinheiro et al., 2017). Kaur et al. (2017) found multiple significantly upregulated cathepsin B genes by transcriptome sequencing of ToCV-infected B. tabaci.
Figure 4. KEGG enrichment diagram. The horizontal coordinate is the gene number, i.e., the number of genes of interest annotated in the entry, and the vertical coordinate is each pathway entry. The color of the bars represents the p-value of the hypergeometric test.
Cathepsin B can inhibit the acquisition of PLRV by aphids (Pinheiro et al., 2017). PLRV is a persistently transmitted virus that is mainly retained in the midgut of aphids (Terra et al., 2019). ToCV is a semi-persistently transmitted virus and is mainly retained in the foregut of B. tabaci (Wang et al., 2019). Studies have shown that cathepsin exists in the salivary glands, saliva, intestines, and honeydew of hemipteran green bugs and aphids (Lisón et al., 2006; Medina et al., 2015). Cathepsin in B. tabaci may have a function similar to that of cathepsin in aphids during the process of ToCV transmission. Gnirß et al. (2012) showed that inhibitors of cathepsins B and L reduced Zaire ebolavirus glycoprotein (GP)-driven infection of 293T cells. Cathepsin inhibitors may similarly act on the ToCV CP or CPm protein to reduce ToCV transmission.
The ancestors of Hemiptera insects lost serine peptidase (SP) when they adapted to feed on low-protein plant sap (Terra, 2001;Jongsma and Beekwilder, 2016;Henriques et al., 2017). When they returned to the protein diet, cathepsin was recruited to replace their lost serine peptidase to digest proteins in plant sap (Jongsma and Beekwilder, 2016). Cathepsin B genes were massively amplified in the aphid lineages, and many genes showed intestinal-specific overexpression (Lomate and Bonning, 2016). Therefore, cathepsin not only plays an immune defense function in Hemiptera insects but also acts as a digestive enzyme to digest proteins eaten by insects (Rahbé et al., 2003). After treatment with a cathepsin inhibitor, the activity of cathepsin reduced and digestive enzymes also reduced in whiteflies, resulting in less frequent feeding with tomato plants by whiteflies, thus the ability of B. tabaci to acquire and transmit ToCV indirectly decreased.
The expression and activity of cathepsin B in a variety of malignant tissues are significantly higher than in adjacent normal tissues (Fröhlich et al., 2001). Cathepsins play an important role in the invasion of the body by viruses such as Ebola virus (EBOV) and severe acute respiratory syndrome coronavirus (SARS-CoV): EBOV depends on the activation of cathepsins B and L to invade cells (Gnirß et al., 2012), and SARS-CoV requires cathepsins B and L to activate the viral envelope spike (S) protein to enter human cells (Simmons et al., 2005). We found that higher expression and activity of cathepsin B led to an increase in ToCV transmission efficiency, and we verified the hypothesis that reducing cathepsin B expression reduces ToCV transmission by B. tabaci.
(Figure caption fragment: B. tabaci MED with E-64 treatment. CK, control; E-64, cathepsin inhibitor. Data are denoted as mean ± SE. ***p < 0.001; **p < 0.01; *p < 0.05.)
Tomato plant cultivation and whitefly rearing
The tomato variety used was Solanum lycopersicum Mill. Cv. Zuanhongmeina, which was cultivated in a greenhouse with an average temperature of 26 ± 1°C, a relative humidity of 70 ± 5%, and a photoperiod of 16:8 l:D cycle. No pesticides were applied to plants, and no other insects were present during plant cultivation other than the introduced B. tabaci MED during the experiment. B. tabaci MED was originally collected from infested poinsettias (Euphorbia pulcherrima Wild. ex Klotz.) in Beijing, China, in 2009, and was raised in a greenhouse with an average temperature of 26 ± 2°C and a relative humidity of 60 ± 10% (Shi et al., 2019b). The biotypes were identified every 2 months based on the detection of the mitochondrial cytochrome oxidase I gene (mtCOI; Supplementary Table S2) based on CAPS (cleavage amplified polymorphic sequence) technology (Chu et al., 2010;Tang et al., 2017). The products were sequenced (Bioengineering Co., Ltd., Shanghai, China), and the results were submitted to the National Center for Biotechnology Information (NCBI) for blast comparison. CAPS-PCR technology and gene sequencing indicated that the insects were B. tabaci MED.
The tomato plants were divided into two infection groups: one was co-inoculated with ToCV and TYLCV (co-infected group), and the other was inoculated with ToCV alone (ToCV-infected group). The tomato plants were injected with 0.5 ml of the infectious ToCV cDNA clone or the TYLCV DNA clone at the three-true-leaf stage to achieve infection. The infectious cDNA clone of ToCV was provided by Prof. Tao Zhou (China Agricultural University), and the infectious DNA clone of TYLCV was provided by Prof. Xueping Zhou (Chinese Academy of Agricultural Sciences). To verify the presence of the viruses, the co-infected and single-infected tomato plants were assayed by PCR with the specific primers TYLCV-F/TYLCV-R and by RT-PCR with the specific primers ToCV-3F/ToCV-3R (Supplementary Table S2).
ToCV accumulation in adult Bemisia tabaci MED females and tomato plants
A minimum of 200 non-infected, newly emerged adult whitefly females were starved for 2 h and then split between ToCV-infected and co-infected tomato plants with similar ToCV contents (100 newly emerged adult B. tabaci females per group) and left to feed on the infected plants for 48 h. After 48 h, all B. tabaci MED (about 100 per group) were collected, and the acquisition efficiency of ToCV was measured using RT-PCR with the specific primers ToCV-3F/ToCV-3R. At the same time, 30 newly emerged adult B. tabaci MED females were collected, and their ToCV accumulation was quantified using RT-qPCR. The total RNA of newly emerged adult B. tabaci females was extracted with TRI reagent (Life Technologies Co., Ltd., Beijing, China), and 500 ng of total RNA was used per reaction. The HiScript® II Q RT SuperMix for qPCR kit was used for reverse transcription (Vazyme Biology Co., Ltd., Nanjing, China), and qPCR was conducted with the ChamQ Universal SYBR qPCR Master Mix kit (Vazyme Biology Co., Ltd., Nanjing, China).
Conversely, to detect ToCV transmission from newly emerged adult B. tabaci MED females to tomato plants, 5, 10, 25, and 50 newly emerged adult B. tabaci MED females that had fed on ToCV-infected or co-infected plants were transferred with clip-cages to non-infected tomato plants with 3-4 true leaves. After another 48 h, the newly emerged adult B. tabaci MED females were removed. The top leaves of the tomato plants were collected after 30 days to detect ToCV transmission and accumulation. The total RNA of the tomato plants was extracted with TRI reagent (Life Technologies Co., Ltd., Beijing, China), and 500 ng of total RNA was used per reaction. The HiScript® II Q RT SuperMix for qPCR kit was used for reverse transcription (Vazyme Biology Co., Ltd., Nanjing, China), and ToCV accumulation in tomato plants was quantified with the ChamQ Universal SYBR qPCR Master Mix kit (Vazyme Biology Co., Ltd., Nanjing, China). For ToCV accumulation in both B. tabaci and tomato plants, each treatment was repeated five times.
Adult Bemisia tabaci MED female virus acquisition and transcriptome sequencing
For virus acquisition, non-infected, newly emerged adult B. tabaci MED females were placed in clip-cages (50 per cage) and starved for 2 h. The clip-cages were then attached to co-infected or ToCV-infected tomato plants for 48 h; this period is the acquisition access period (AAP). After 48 h, 300 newly emerged adult B. tabaci MED females were collected from the infected leaves into an RNase-free centrifuge tube, flash-frozen in liquid nitrogen for 30 s, and stored at −80°C. There were three biological replicates for each treatment. The six samples were used for transcriptome sequencing, which was performed following the established procedure (Ding et al., 2019; Levin et al., 2020; Servicebio Technology Co. Ltd., Wuhan, China).
Library construction and sequencing
A transcriptome library was constructed using the paired-end RNA-Seq method (Levin et al., 2020). A total of 1 μg of RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, USA) following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. After the insert size was verified, the effective concentration of each library was accurately quantified using RT-qPCR (an effective concentration > 2 nM indicated a qualified library). The different libraries were then pooled according to the required effective concentrations and target data output, and sequencing against the reference genome was performed (reference genome 1).
To ensure the quality of the analysis, the raw reads were filtered by removing reads containing adapters, reads containing more than 10% N (N indicates that the base could not be determined), and low-quality reads (reads in which bases with Qphred ≤ 10 account for more than 50% of the total read length). The resulting clean reads were used for subsequent analysis.
DEG analysis
The edgeR and EBSeq software packages were used to analyze gene expression levels in each sample. The number of genes at different expression levels and the FPKM value (the expected number of fragments per kilobase of transcript sequence per million base pairs sequenced) of each gene were determined. Genes were considered expressed at FPKM > 1.5. The corrected p-value (FDR) was used to screen the DEGs (FDR < 0.05). Gene expression levels under the different experimental conditions were compared using the FPKM distributions of all genes.
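As a minimal illustration of the screening just described (genes considered expressed at FPKM > 1.5 and called differentially expressed at FDR < 0.05), the following Python sketch filters a small, entirely hypothetical expression table; the column names and numbers are placeholders, not data from this study.

```python
# Hypothetical DEG screening sketch: FPKM > 1.5 in at least one group and FDR < 0.05.
import pandas as pd

table = pd.DataFrame({
    "gene": ["LOC109042327", "geneB", "geneC"],   # geneB/geneC are made-up identifiers
    "fpkm_tocv": [2.4, 0.9, 5.1],                 # ToCV-infected group (placeholder FPKM)
    "fpkm_coinf": [8.7, 1.2, 5.3],                # co-infected group (placeholder FPKM)
    "log2fc": [1.86, 0.42, 0.06],                 # placeholder fold changes
    "fdr": [0.003, 0.21, 0.64],                   # placeholder corrected p-values
})

expressed = (table["fpkm_tocv"] > 1.5) | (table["fpkm_coinf"] > 1.5)
degs = table[expressed & (table["fdr"] < 0.05)]
print(degs[["gene", "log2fc", "fdr"]])
```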
KEGG analysis
To further clarify the biological functions and pathways of the DEGs after B. tabaci MED fed on ToCV and TYLCV co-infected tomato plants, the selected DEGs were subjected to KEGG enrichment analysis.
RT-qPCR verification
To verify the accuracy of the sequencing results, seven upregulated DEGs and seven downregulated DEGs were selected for RT-qPCR detection to verify the gene expression patterns obtained from the RNA-seq data. The TRIzol method (Life Technologies Co., Ltd., Beijing, China; Fiallo-Olivé et al., 2014) was used to extract the total RNA of 30 co-infected or 30 ToCV-infected B. tabaci MED. The input was uniformly set at 500 ng, and RNA quality was strictly controlled (A260/280 between 1.8 and 2.1). First-strand cDNA was synthesized with the HiScript II 1st Strand cDNA Synthesis Kit (Vazyme Biology Co., Ltd., Nanjing, China; Fiallo-Olivé et al., 2014; Pentzold et al., 2017). The primers listed in Supplementary Table S2 were used for RT-qPCR to obtain the Ct values of each target gene and the internal reference genes. The relative expression of each gene was then calculated using the 2^(−ΔΔCt) method (Livak and Schmittgen, 2001). The experiment was repeated three times.
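For reference, the 2^(−ΔΔCt) calculation cited above can be written as a short Python helper; the Ct values in the example are hypothetical placeholders rather than measurements from this study.

```python
# Sketch of the 2^(-delta delta Ct) relative expression method (Livak and Schmittgen, 2001).

def relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in the treatment group relative to the control group."""
    delta_ct_treat = ct_target_treat - ct_ref_treat   # normalize to the reference gene (treatment)
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # normalize to the reference gene (control)
    delta_delta_ct = delta_ct_treat - delta_ct_ctrl   # compare treatment against control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: cathepsin B in co-infected vs. ToCV-infected whiteflies
print(relative_expression(22.1, 18.0, 23.6, 18.2))    # ~2.5-fold upregulation
```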
The relative expression and enzyme activity of cathepsin B in adult Bemisia tabaci Med females between infection groups
To verify whether the candidate genes are key factors in the lysosomal pathway in regulating the transmission of ToCV by newly emerged adult B. tabaci MED females, a significantly upregulated candidate gene was screened: the cathepsin B gene (LOC109042327). The relative expression and enzyme activity of the gene were determined to explore their effects on the transmission of ToCV by adult B. tabaci MED females.
Newly emerged adult B. tabaci MED females (500) were collected and starved in a clip-cage for 2 h. They were then transferred to each infection group for acquisition access periods (AAPs) of 0, 6, 12, 24, 48, 72, and 96 h. At the end of each AAP, 30 newly emerged adult B. tabaci MED females were collected from each infection group, and the relative expression of cathepsin B was measured in females that had acquired the viruses from single-infected or co-infected plants. Fluorescence quantitative PCR detection was carried out following the ChamQ Universal SYBR qPCR Master Mix kit instructions (Vazyme Biology Co., Ltd., Nanjing, China). The experiment was repeated three times for each group.
The BCA method was used to determine the protein concentration of newly emerged adult B. tabaci MED females following the BCA protein concentration kit instructions (Solebo Biological Co. Ltd., Beijing, China). The activity of cathepsin B in newly emerged adult B. tabaci MED females was determined according to the instructions of the insect cathepsin B ELISA kit (ZCIBIO Technology Co., Ltd., Shanghai, China). The experiment was repeated three times for each group. Following the instructions for Proteinase Inhibitor E-64 (Sigma-Aldrich Trading Co., Ltd., Shanghai, China), the cathepsin inhibitor E-64 was prepared in 15% sucrose solution (with water as solvent) to obtain 0, 10, 50, 100, and 150 μmol/L feeding solutions. A total of 50 co-infected, newly emerged adult B. tabaci MED females (48 h AAP) were fed each concentration of feeding solution and placed in an incubator with a 14:10 h (L:D) cycle and 80% humidity. Each concentration treatment was repeated three times. After 2 days, the mortality of the treated B. tabaci MED was counted, and the activity of cathepsin B was determined to find the optimal feeding concentration.
To study the effect of enzyme inhibition on virus acquisition by newly emerged adult B. tabaci MED females, non-infected whiteflies were fed a 20-μL artificial diet solution made of 15% sucrose and 100 μmol/L E-64 cathepsin inhibitor. After the two-day enzyme inhibition treatment, newly emerged adult B. tabaci MED females were collected and starved in a clip-cage for 2 h. They were then transferred to the co-infection group for a 48-h acquisition access period (AAP). After the AAP, all B. tabaci MED were collected, and the acquisition efficiency of ToCV was measured using RT-PCR with the specific primers ToCV-3F/ToCV-3R. The experiment was repeated five times. At the same time, 30 B. tabaci MED were collected, and their ToCV accumulation was measured by RT-qPCR. A 20-μL artificial diet without the E-64 cathepsin inhibitor was used as a control, and the experiment was repeated five times.
To verify the effect of cathepsin inhibition on transmission efficiency, 50 co-infected B. tabaci MED that had completed the 48-h AAP were transferred with clip-cages to non-infected tomato plants with 3-4 true leaves, and the newly emerged adult B. tabaci MED were removed 48 h later. After 30 days, the ToCV transmission rate of the tomato plants was determined by RT-PCR, and the ToCV accumulation of the infected tomato plants was measured by RT-qPCR. Tomato plants exposed to newly emerged adult B. tabaci MED females previously fed the artificial diet without E-64 served as the control. Each treatment was repeated five times, with 10 tomato plants used per treatment.
Effect of CathB dsRNA treatment on the transmission of ToCV by Bemisia tabaci MED
Target DNA fragments were cloned based on the DNA sequence of the cathepsin B gene (LOC109042327); agarose gel electrophoresis confirmed a 470-bp band for CathB and a 598-bp band for GFP. The CathB and GFP DNA fragments were used for dsRNA synthesis with the T7 RiboMAX Express RNAi System (Promega, Madison, WI, USA) to obtain CathB dsRNA and GFP dsRNA. A 15% sucrose solution (with water as solvent) was used to prepare 400 ng/μL CathB dsRNA and GFP dsRNA feeding solutions. A total of 50 non-infected, newly emerged adult B. tabaci MED females were fed 200 μL of the CathB dsRNA or GFP dsRNA feeding solution (400 ng/μL) and placed in an incubator with a 14:10 h (L:D) cycle and 80% humidity. After 2 days, the mortality of B. tabaci MED was counted, and the relative expression of cathepsin B was determined. The experiment was repeated five times.
To study the effect of CathB dsRNA treatment on virus acquisition by newly emerged adult B. tabaci MED females, non-infected whiteflies were fed a 200-μL artificial diet solution made of 15% sucrose and 400 ng/μL CathB dsRNA or GFP dsRNA. After the 2-day treatment, newly emerged adult B. tabaci MED females were collected and starved in a clip-cage for 2 h. They were then transferred to the co-infection group for a 48-h AAP. After the AAP, all B. tabaci MED were collected, and the acquisition efficiency of ToCV was measured using RT-PCR with the specific primers ToCV-3F/ToCV-3R. The experiment was repeated five times. At the same time, 30 newly emerged adult B. tabaci MED females from the co-infection group were collected, and their ToCV accumulation was measured by RT-qPCR. The experiment was repeated five times.
To verify the effect of CathB dsRNA on transmission efficiency, 50 newly emerged adult B. tabaci MED females were fed a 200-μL artificial diet solution made of 15% sucrose and 400 ng/μL CathB dsRNA or GFP dsRNA, completed a 48-h AAP, and were then transferred with clip-cages to non-infected tomato plants with 3-4 true leaves; the B. tabaci MED were removed 48 h later. After 30 days, the ToCV transmission rate of the tomato plants was determined by RT-PCR, and the ToCV accumulation of the infected tomato plants was measured by RT-qPCR. Each treatment was repeated five times, with 10 tomato plants used per treatment.
Data analysis
All data analyses were performed using SPSS Statistics 21.0 (SPSS Inc., Chicago, IL, USA). The enzyme activity of AGLU in whiteflies fed different concentrations of E-64 was analyzed using a one-way ANOVA followed by Tukey's test at p < 0.05. Independent-samples t-tests were used to compare the acquisition and transmission of ToCV by B. tabaci, the relative expression and enzyme activity of cathepsin B, and the effects of CathB dsRNA, GFP dsRNA, and E-64 on the acquisition and transmission of ToCV by B. tabaci.
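The comparisons described above can be reproduced with standard statistical libraries; the sketch below uses SciPy for a one-way ANOVA across the E-64 concentrations and an independent-samples t-test between two groups (a Tukey post hoc test would follow the ANOVA). All data arrays are hypothetical placeholders.

```python
# Hypothetical data; scipy.stats provides the ANOVA and t-test used in the analysis.
from scipy import stats

# enzyme activity at 0, 10, 50, 100, 150 umol/L E-64 (three placeholder replicates each)
groups = [[12.1, 11.8, 12.4], [10.2, 9.9, 10.5], [8.1, 7.8, 8.4],
          [6.0, 6.3, 5.8], [5.9, 6.1, 5.7]]
print(stats.f_oneway(*groups))          # one-way ANOVA across concentrations

ck = [0.81, 0.78, 0.84, 0.80, 0.79]     # control acquisition rates (placeholder)
e64 = [0.52, 0.49, 0.55, 0.50, 0.53]    # E-64-treated acquisition rates (placeholder)
print(stats.ttest_ind(ck, e64))         # independent-samples t-test
```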
Conclusion
In this study, we verified the hypothesis that the relative expression and enzyme activity of cathepsin B were increased after co-infection, which helped to promote ToCV transmission by B. tabaci MED. After the decrease in the activity of cathepsin in B. tabaci, its ability to acquire and transmit ToCV was significantly reduced. The silencing of cathepsin B decreased the expression of cathepsin B to reduce the acquisition and transmission of ToCV. This is of profound significance for the prevention and control of diseases and insect pests in the future.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors. The raw data supporting the conclusions of this article will be made available by the authors without undue reservation. Publicly available datasets were analyzed in this study; these data can be found at NCBI under the accession PRJNA904418.
Author contributions
X-BS, YL, and D-YZ conceived and designed the experiments. D-Y-HL and J-YL performed the experiments. D-Y-HL analyzed the data. J-BC, YW, Z-HZ, ZZ, L-MZ, and X-QT contributed reagents/materials/analysis tools. D-Y-HL, X-BS, AF, and X-GZ wrote the article. All authors contributed to the article and approved the submitted version.
"Biology"
] |
An Enhanced Indoor Three-Dimensional Localization System with Sensor Fusion Based on Ultra-Wideband Ranging and Dual Barometer Altimetry
Accurate three-dimensional (3D) localization within indoor environments is crucial for enhancing item-based application services, yet current systems often struggle with localization accuracy and height estimation. This study introduces an advanced 3D localization system that integrates updated ultra-wideband (UWB) sensors and dual barometric pressure (BMP) sensors. Utilizing three fixed UWB anchors, the system employs geometric modeling and Kalman filtering for precise tag 3D spatial localization. Building on our previous research on indoor height measurement with dual BMP sensors, the proposed system demonstrates significant improvements in data processing speed and stability. Our enhancements include a new geometric localization model and an optimized Kalman filtering algorithm, which are validated by a high-precision motion capture system. The results show that the localization error is significantly reduced, with height accuracy of approximately ±0.05 m, and the Root Mean Square Error (RMSE) of the 3D localization system reaches 0.0740 m. The system offers expanded locatable space and faster data output rates, delivering reliable performance that supports advanced applications requiring detailed 3D indoor localization.
Introduction
Indoor localization technology is essential for enabling precise personnel tracking and efficient robot navigation in complex environments [1-3]. Its applications significantly improve efficiency and convenience [4]. Indoor localization can be divided into two categories according to the size of the localization area. The first category addresses larger areas, such as halls or multi-story buildings [5,6]. This scenario often requires multiple anchor points as references [7], or multi-sensor fusion [8]. The localization requirements are meter-level planar localization and floor identification. The second category focuses on high-precision localization within a slightly smaller indoor area. Some existing technologies have already achieved indoor two-dimensional (2D) localization, for example, visual features [9], Light Detection and Ranging (LiDAR) [10,11], and wireless localization [12]. However, with the advancement of indoor three-dimensional (3D) localization sensor technology, emerging services like indoor drone control [2,13], virtual reality (VR) [14,15], or augmented reality (AR) [16] experiences demand higher location accuracy. They also require more stringent height estimation and more frequent updates of information [17,18]. Additionally, while exploring the potential of these technologies, it is important to consider the cost factor [19] to ensure that technological solutions have the potential for widespread application. Based on our previous research [20], this study proposes an enhanced indoor 3D localization system, which fuses ultra-wideband (UWB) sensor ranging and barometric pressure (BMP) sensor height measurement.
Commonly used wireless sensors for ranging and localization include Wi-Fi, Bluetooth, and Radio Frequency Identification (RFID) [21].They estimate the target's location by measuring signal strength or time differences [1], with meter-level accuracy.Although they satisfy commercial and industrial needs to a certain extent, they still face challenges in positioning accuracy and stability in complex indoor environments.Additionally, visual positioning systems and LiDAR offer higher accuracy but their implementation costs and demands on computational resources limit their potential for widespread application [9,10].
In the realm of indoor localization, UWB technology stands out for its high precision in distance measurement by transmitting ultra-short pulse signals [22]. This method achieves centimeter-level ranging accuracy, surpassing traditional Wi-Fi and Bluetooth technologies in both accuracy and interference resistance [3,23,24]. The target's 3D location can be calculated from the distance measurements between the target and each UWB base station. However, despite the superior distance measurement capabilities of UWB, existing research indicates persistent errors in height estimation [13], underscoring the need for continued research to optimize localization algorithms and enhance system performance.
In contrast, barometric pressure (BMP) sensors enhance height measurement accuracy [25]. The earliest mercury barometer was invented in the 17th century by the physicist Evangelista Torricelli [26]. Thanks to ongoing technological advancements, modern electronic barometers [27], which convert barometric pressure into electrical signals, are widely used. These BMP sensors can precisely measure barometric pressure and output digital signals, thereby estimating the corresponding height. By measuring atmospheric pressure changes, BMP sensors provide critical data for adjusting the vertical positioning in indoor localization systems, which is often flawed in UWB-only setups. However, the barometric pressure value at a given location can vary over time [6]. Outdoors, the shifting of high- and low-pressure weather systems can alter barometric pressure [28]. Strong convective weather in particular accelerates changes in barometric pressure, leading to increased height drift errors [29]. Additionally, changes in air temperature cause air to expand or contract, resulting in decreased or increased barometric pressure. In indoor environments, factors such as the building's sealing, ventilation conditions, and the temperature difference between indoors and outdoors predominantly influence barometric pressure [30]. The barometric pressure in well-sealed indoor environments is relatively stable. In contrast, the opening and closing of doors and windows, along with the ventilation system, can cause fluctuations in indoor barometric pressure, which limits a single BMP sensor's performance for height estimation in indoor environments.
Current indoor localization systems often fail to deliver accurate height measurements, a gap our system addresses by integrating BMP sensors known for their reliable barometric pressure measurements. This integration ensures that the indoor localization system improves not only vertical but also horizontal localization precision. Based on our previous research [20], we have upgraded the hardware and software. By observing the trend of barometric pressure changes at different indoor locations over 30 min using BMP sensors, we confirmed once again that the drift of barometric pressure in the same indoor space tends to be consistent. In this study, we introduce an enhanced indoor 3D localization system. The barometric pressure values at the tag are transmitted to the main controller via Wi-Fi signals. The relative height between the tag and anchor is then calculated using the derived formulas. The system's main controller employs a dual-core processor with Wi-Fi capability, enhancing the rate and stability of data transfer and processing. The height measurements obtained from the dual BMP sensors are used to assist the UWB sensors in spatial ranging. The distance measurements from the UWB sensors are refined and corrected using a fitting equation. The localization system includes three anchors at the same height, where UWB sensors on each anchor measure the distances to the tag. The tag's planar position is estimated through geometric modeling and the centroid method of triangles. Finally, Kalman filtering is applied to obtain the most accurate estimate of the tag's location, resulting in a smoother and more precise localization trajectory. Compared to previous research, the new hardware support and software architecture optimization have enabled the system to obtain sensor information more quickly and stably. The dispersed anchor setup allows for spatial localization over a larger area, and the redesigned geometric localization model and Kalman filtering algorithm have further improved localization accuracy.
Relative to similar localization strategies [13,31], which typically exhibit height estimation errors exceeding 0.2 m, our dual BMP sensors-based method significantly improves accuracy in height estimation, thus enhancing the system's 3D localization precision.This work's main contributions are as follows:
Hardware System Design
The hardware system framework is depicted in Figure 1.The target (Tag) contains one UWB sensor and one BMP sensor.The indoor localization system features three anchors (Anchor (A), Anchor (B), Anchor (C)), each equipped with three UWB sensors and one BMP sensor to pinpoint the target's (Tag) location.The main controller, ESP32 (manufactured by Espressif Systems, Shanghai, China) is responsible for receiving data from the sensors, calculating the tag's location, and sending critical data to an external computer for data collection.The distances between the tag and each anchor are estimated by UWB technology.The ranging results are collected by the sub-controller of Anchor (A) and transmitted to the main controller via I2C communication.The BMP sensor on the tag measures the barometric pressure in real time, which is collected and processed by an ESP8266 controller (manufactured by Espressif Systems, China), and then sent to the main controller by Wi-Fi wireless communication.Furthermore, thanks to the dual-core processor of the main controller (ESP32), the Wi-Fi signal-reading process and the location calculation process do not interfere with each other, greatly enhancing the stability of the localization system.Figure 2 shows the tag's hardware design.The tag primarily consists of a controller (ESP8266), a BMP sensor, and a UWB sensor, all powered by a shared 5 V battery.The controller (ESP8266) and the BMP sensor are soldered onto a printed circuit board (PCB), with their I2C ports connected by the internal circuit.Finally, these components are assembled in a 3D-printed plastic frame.Additionally, a marker for the motion capture system (MCS) is also mounted on the top of the frame, which can be used for experimental validation.The anchors of the localization system are mounted on three tripods, which are powered by either a computer or batteries as shown in Figure 3.The UWB sensors in each anchor need to be adjusted to the same height using the tripods.Also, the proposed localization system is easy to set up in new indoor environments due to the portability of the tripods.And the specific parameters of the hardware devices in the designed system are detailed in Table 1.The barometric pressure varies with changes in altitude and can drift over time.BMP sensors estimate height by measuring barometric pressure values.In a previous study [20], we proposed a method for estimating indoor target height using dual BMP sensors.In this new study, we have revisited the observation of barometric pressure variations at fixed indoor locations, and conducted re-validation in the updated localization system.
Observation of Indoor Barometric Pressure Measurements
In an office indoor environment, we collected barometric pressure data using two BMP sensors. The BMP sensor of Anchor (A) remained stationary, and initially the tag's BMP sensor was positioned close to Anchor (A) at the same height. A few seconds after initiating the timer, the tag was moved to another location and remained stationary, with a horizontal distance of 2 m and a height drop to −0.5 m. This observation lasted for 30 min (i.e., 1800 s). Due to the significant noise in the raw measurement data from the BMP sensors, a weighted average filter was used to minimize the noise. The results of this observation are displayed in Figure 4. The observation results show that there is a continuous drift in the indoor barometric pressure values over thirty minutes. After filtering, the data noise from the BMP sensors was reduced. However, even if the two sensors are of the same model, the barometric pressure values they measure at the same height will still show a minor difference P_diff = P_Anchor(0) − P_Tag(0), where P_Anchor(0) is the initial barometric pressure value of Anchor (A) and P_Tag(0) is the initial barometric pressure value of the tag. To unify this deviation, the final output barometric pressure value of the tag was adjusted by adding P_diff, i.e., P′_Tag(t) = P_Tag(t) + P_diff, where P_Tag(t) is the barometric pressure value of the tag measured at the current moment. After the tag's location was lowered, its barometric pressure value increased. During the observed 30 min, the trends of barometric drift at the two stationary positions were almost identical.
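The offset correction described above can be sketched in a few lines of Python. The exponentially weighted filter and all pressure values below are assumptions made for illustration; the text only states that a weighted average filter was used.

```python
# Sketch of BMP noise filtering and the P_diff offset correction between the two sensors.

def weighted_filter(prev, raw, alpha=0.2):
    """Simple exponentially weighted average to damp BMP noise (alpha is illustrative)."""
    return (1 - alpha) * prev + alpha * raw

p_anchor_0 = 100812.0          # initial Anchor (A) pressure in Pa (hypothetical)
p_tag_0 = 100806.5             # initial tag pressure at the same height (hypothetical)
p_diff = p_anchor_0 - p_tag_0  # constant offset between the two sensors

def corrected_tag_pressure(p_tag_t):
    """Align the tag reading with the anchor sensor: P'_Tag(t) = P_Tag(t) + P_diff."""
    return p_tag_t + p_diff

filtered = weighted_filter(prev=100810.0, raw=100815.3)
print(corrected_tag_pressure(filtered))
```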
Relative Height Estimation Based on Dual BMP Sensors
After obtaining the final barometric pressure data from the BMP sensors (P′_Tag(t) and P_Anchor(t)), the height values of the tag and the anchor (H_Tag(t) and H_Anchor(t)) can be determined by the barometric height calculation [32] in Equations (2)-(4), where the reference barometric pressure P_0 is equal to Anchor (A)'s initial value P_Anchor(0) and T is the temperature value in °C. The height data obtained in this way are presented in Figure 5, where the height values analyzed at the stationary location drift with changes in barometric pressure. After the tag descended to −0.5 m, the height results calculated by Equation (4) were also approximately −0.5 m. This confirms that BMP sensors possess a reasonable level of accuracy for measuring height changes over short periods in indoor environments. However, after thirty minutes of observation, the height drift estimated by a single BMP sensor reached nearly 1.5 m at its maximum. Consequently, the results from a single sensor demonstrate considerable uncertainty after several minutes. Because the two BMP sensors exhibit similar barometric drift trends in the indoor environment, we propose a method using dual BMP sensors to measure the indoor tag height H_dual(t) more accurately, as illustrated in Equations (5) and (6): the tag's apparent height (from the ratio P′_Tag(t)/P_Anchor(0)) and the anchor's apparent height (from the ratio P_Anchor(t)/P_Anchor(0)) are both computed against the same reference P_Anchor(0), and their difference gives the relative height. In the indoor environment, the temperatures of the tag and the anchor are similar, so the temperature parameter is uniformly set to K_temp, which has a value of 44,330. The relative height calculated this way is shown in red in Figure 5. After 30 min, the estimated tag height drifted by about 0.2 m, which is two-fifteenths of the error from a single sensor. This result has practical value for measuring tag height in indoor environments.
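Since Equations (2)-(6) are not reproduced here, the following sketch uses the standard barometric altitude formula with the K_temp = 44,330 constant mentioned in the text; the 1/5.255 exponent is an assumption, so treat this as an illustration of the dual-sensor idea rather than the paper's exact formulas.

```python
# Dual-BMP relative height sketch: both apparent heights use the same reference P_Anchor(0),
# so the common indoor pressure drift cancels in the difference.
K_TEMP = 44330.0
EXPONENT = 1.0 / 5.255          # assumed exponent of the standard barometric formula

def baro_height(p, p0):
    """Apparent height (m) of a pressure reading p relative to the reference pressure p0 (Pa)."""
    return K_TEMP * (1.0 - (p / p0) ** EXPONENT)

def dual_bmp_height(p_tag_t, p_anchor_t, p_anchor_0):
    """H_dual(t): tag height relative to the anchor plane."""
    return baro_height(p_tag_t, p_anchor_0) - baro_height(p_anchor_t, p_anchor_0)

print(dual_bmp_height(100818.0, 100812.0, 100812.0))   # hypothetical readings, roughly -0.5 m
```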
In the designed localization system, the filtered barometric pressure value from the tag is transmitted to the main controller via Wi-Fi. The time spent on Wi-Fi communication can impact the stability of the main process. Thus, this study leverages the dual-core processing capability of the controller (ESP32), dedicating one core specifically to handling Wi-Fi data transmission. Consequently, the data output of the localization system is accelerated and stabilized. Meanwhile, if the localization duration extends, the height estimated by the dual BMP sensors may still experience some drift. Therefore, the designed localization system includes a barometric pressure recalibration function: when the ranging part of the UWB sensor detects that the tag is very close to Anchor (A), the barometric pressure difference P_diff is updated from the current readings of the two sensors (P_diff = P_Anchor(t) − P_Tag(t)). Meanwhile, the height estimation is further optimized in the designed 3D localization algorithm.
Distance Measurement Optimization for UWB Sensors
Patch-type UWB sensors, limited by their physical structure, may have ranging affected by orientation [33].In this study, UWB sensors based on the new generation DW3000 chip (manufactured by Qorvo, Greensboro, NC, USA) are used for ranging.They also feature 2 dBi gain antennas to enhance the UWB sensors' omnidirectional ranging capabilities.The ranging principle involves recording the time it takes to transmit and receive extremely short pulse signals, which is used to calculate the distance between two devices [22].
Similar to BMP sensors, UWB sensors are also subject to measurement noise.Noise data or sudden erroneous extreme values can compromise the accuracy of the localization system.Therefore, filters are applied to refine the raw data output from the UWB sensors, obtaining more stable distance measurements L Filter .However, due to the hardware limitations of UWB sensors, filtered UWB measurements still show some errors compared to the actual distances.For example, the actual distance between two UWB sensors is 3 m, but the filtered output from the sensor devices is 3.18 m.Therefore, more measured and actual values need to be sampled to calibrate the UWB sensor [34,35].
Considering the range of indoor measurements, we sampled distance values at multiple fixed points from 0.1 m to 10 m. As illustrated in Figure 6, these measurement results were used to fit a calibration equation of the form L_UWB(t) = a · L_Filter(t) + b using MATLAB's Curve Fitting Toolbox, where the fitting parameter a is 0.9954, the parameter b is −16.02, and the confidence bound for the fitting coefficients is 95%. When the filtered sensor measurement L_Filter(t) is input in real time, the system outputs the calibrated value L_UWB(t). The final distance estimate obtained after calibration is closer to the real distance.
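A least-squares fit of the same linear form can be sketched in Python as below; the paired sample values are hypothetical and only illustrate the calibration step performed with the Curve Fitting Toolbox.

```python
# Linear UWB range calibration sketch: fit L_UWB = a * L_Filter + b from paired samples.
import numpy as np

l_filter = np.array([0.27, 1.18, 3.18, 5.21, 10.17])   # filtered UWB readings in m (hypothetical)
l_true = np.array([0.10, 1.00, 3.00, 5.00, 10.00])     # reference distances in m

a, b = np.polyfit(l_filter, l_true, deg=1)             # least-squares line fit

def calibrate(l_filter_t):
    """Return the calibrated distance L_UWB(t) for a filtered measurement."""
    return a * l_filter_t + b

print(a, b, calibrate(3.18))                            # corrected value close to 3.00 m
```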
Indoor 3D Localization Method
For 3D localization of the tag, a minimum of four reference anchors is typically required, with additional anchors used to enhance localization accuracy [36].In this study, we use three UWB sensors at the same height as the localization anchor points to establish a geometric localization model for tag 3D localization.Furthermore, BMP sensors are utilized to determine the tag's height relative to a reference plane, as well as enhancing the height estimation's accuracy.The designed localization system is approached as in Algorithm 1.
Algorithm 1 Indoor 3D localization method.
Input: distance values measured by the UWB sensors (L_UWB_A(t), L_UWB_B(t), L_UWB_C(t)) and barometric pressure values measured by the BMP sensors (P_Anchor(t), P_Tag(t)). Output: the coordinates of the tag's 3D indoor location (Tag_x(t), Tag_y(t), Tag_z(t)).
Geometric Localization Model
Anchors for localization are distributed in an indoor space. The UWB sensors on the three anchors are adjusted to the same height H_Tripod using tripods. In this horizontal plane, Anchor (A) is positioned midway between Anchor (B) and Anchor (C). As depicted in Figure 7, the geometric model is based on these three anchors. The tag location is denoted as point P, and the locations of the three anchors are denoted as points A, B, and C. The point O is the midpoint of line segment BC. The localization system is therefore defined with the x-axis in the direction of OA, the y-axis in the direction of BC, and the z-axis facing vertically upwards. The lines AP, BP, and CP represent the true distances of the tag from each anchor. The projection of the tag point P onto the anchors' plane is the point K, and the coordinates (P_x, P_y, P_z) of point P need to be computed. The UWB sensors at each anchor measure the distances (L_UWB_A(t), L_UWB_B(t), L_UWB_C(t)) to the tag point P in real time; at a given moment these measurements are denoted ÂP, B̂P, and ĈP, respectively. To calculate the tag's coordinates (P_x(t), P_y(t)) on the X-axis and Y-axis, the UWB measurements taken in the real 3D environment must be projected onto the anchors' plane. Equation (9) employs the Pythagorean theorem to calculate the projected distance values from the UWB measurements, integrating the tag's height estimated by the dual BMP sensors as the segment PK in the geometric model (e.g., ÂK = √(ÂP² − PK²)). The projected lengths correspond to the line segments ÂK, B̂K, and ĈK, respectively. Within the plane of the anchors, circles are drawn with each anchor as the center and its projected distance as the radius; for instance, a circle is drawn with Anchor (C) as the center and ĈK as the radius. Ideally, the three circles should intersect at point K. However, since the lengths ÂP, B̂P, and ĈP estimated by the UWB sensors are approximations, the circles might intersect at multiple points, as depicted in Figure 8. The three intersections P_AB, P_AC, and P_BC that are closest to each other are selected to form a triangle, and its centroid is used as the estimate of point K. To determine the centroid's position, each pair of circles needs to provide one vertex for the final triangle. Additionally, owing to UWB data noise, the measured results for ÂP, B̂P, and ĈP may leave some circles without intersections. In this case, the vertex between the two circles needs to be redefined. Therefore, this study addresses the potential geometric scenarios separately. As an example, the intersection of circles A and C is divided into two cases; the results of the other two pairs of circles are then combined to find the triangle's centroid.
Case 1: Two circles intersect at two points.
There are two intersections when two circles intersect.As shown in Figure 9, the intersection points P 1 and P 2 to be found are symmetrical about the line segment AC.The area of △ACP 1 is calculated using Heron's formula, and the lengths of P 1 G and CG are determined as shown in Equations ( 10)-( 12).
Then, the coordinates of point G can be obtained by the principle of similar triangles, i.e., by calculating the ratio of the lengths of CG to AC.Therefore, based on the point G, the coordinates of the points P 1 and P 2 in this plane are solved from the perpendicular relation and the length of P 1 G.Then, the point P 2 inside the circle B is chosen as the coordinates of P AC .Using the same method, P BC and P BC are determined through geometric calculation.
Case 2: Two circles are tangent or non-intersecting. As shown in Figure 10, the inclusion relationship between circle A and circle C is determined by the relationship between the lengths ÂP, ĈP, and AC. When the two circles are tangent, the shared tangent point can be used as the vertex P_AC of the required triangle. When the two circles have no intersection point, either the two circles do not contain each other, or one circle contains the other. In these cases, the point P_AC is determined from the ratio of ÂP to ĈP. The tangent and non-intersecting cases are calculated as in Equations (13) and (14), where (A_x, A_y) represents the coordinates of point A, (C_x, C_y) represents the coordinates of point C, and (P_AC_x, P_AC_y) are the coordinates of point P_AC. The three scenarios in the piecewise function correspond to the following: the two circles do not encompass each other; circle A contains circle C; and circle C contains circle A. When the condition in the equation holds with equality, it represents the scenario where the two circles are tangent.
Similarly, geometrically resolving circles A and B and circles B and C yields a total of three intersection points: P_AB, P_AC, and P_BC. Figure 11 shows a possible scenario in which circles A and C do not intersect but circle B intersects circles A and C. For any other possible scenario, the three pairs of circles can likewise be solved based on Case 1 and Case 2, forming a triangle from which the centroid K is computed.
Finally, by the centroid method, the location (P_x, P_y) of the point P projected onto the anchors' plane is estimated. The tag's height H_dual(t) estimated by the dual BMP sensors is then used again, and the tag's 3D coordinates are denoted as: P_x = (P_AB_x + P_AC_x + P_BC_x)/3, P_y = (P_AB_y + P_AC_y + P_BC_y)/3, P_z = H_dual(t). (15)
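The projection, pairwise circle intersection, and centroid steps can be condensed into the following Python sketch. The tie-breaking rule (keep the intersection whose distance to the remaining anchor best matches that anchor's circle) and the non-intersecting fallback (split the line between centers in proportion to the radii) are simplifications of Cases 1 and 2 above, not the paper's exact piecewise equations.

```python
# Hedged sketch of the geometric localization model (three anchors, projected UWB ranges).
import math

def project(d_3d, dz):
    """In-plane distance from a 3D UWB range and the tag's height offset dz (Pythagoras)."""
    return math.sqrt(max(d_3d ** 2 - dz ** 2, 0.0))

def pair_point(c1, r1, c2, r2, c3, r3):
    """One representative vertex for the circle pair (c1, r1), (c2, r2)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        t = r1 / (r1 + r2)                     # fallback: split the center line by the radii
        return (c1[0] + t * dx, c1[1] + t * dy)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    px, py = c1[0] + a * dx / d, c1[1] + a * dy / d
    cands = [(px + h * dy / d, py - h * dx / d), (px - h * dy / d, py + h * dx / d)]
    return min(cands, key=lambda p: abs(math.hypot(p[0] - c3[0], p[1] - c3[1]) - r3))

def localize(anchors, ranges, h_dual):
    """anchors/ranges: dicts keyed by 'A', 'B', 'C'; h_dual: tag height from the dual BMP sensors."""
    r = {k: project(ranges[k], h_dual) for k in anchors}     # Equation (9)-style projection
    A, B, C = anchors["A"], anchors["B"], anchors["C"]
    p_ab = pair_point(A, r["A"], B, r["B"], C, r["C"])
    p_ac = pair_point(A, r["A"], C, r["C"], B, r["B"])
    p_bc = pair_point(B, r["B"], C, r["C"], A, r["A"])
    px = (p_ab[0] + p_ac[0] + p_bc[0]) / 3.0                 # centroid, Equation (15)
    py = (p_ab[1] + p_ac[1] + p_bc[1]) / 3.0
    return (px, py, h_dual)

# Hypothetical layout; the ranges correspond to a tag at about (0.8, 0.3, -0.5) in this frame.
print(localize({"A": (1.5, 0.0), "B": (0.0, -1.0), "C": (0.0, 1.0)},
               {"A": 0.911, "B": 1.606, "C": 1.175}, h_dual=-0.5))
```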
Optimization of Location Estimation Based on Kalman Filtering
Due to the fluctuations in the data measured by the sensors, the tag's location calculated using the geometric model may deviate from the actual value.The Kalman filtering algorithm is an optimization estimation algorithm that is very effective in dealing with linear dynamic systems containing Gaussian noise.To optimize the location estimation, this study employs the Kalman filtering method to predict and update the optimal coordinates of the tag.
First, the time node information of the running loop in the localization system can be used to estimate the tag's velocity v (t−1) at the last moment (t − 1): where the tag's velocity v (t−1) in the three dimensions consists of the components [v x(t−1) , v y(t−1) , v z(t−1) ] ′ .The tag's height P z(t−1) is equal to the height estimated by the dual BMP sensors H dual(t−1) .∆t denotes the change, with the change in locations represented by [∆P x(t−1) , ∆P y(t−1) , ∆P z(t−1) ] ′ .Assume that the labels have similar velocities in a very short period of time.Then, the predicted location X (t|t−1) at the current moment is as follows: where the subscript (t|t − 1) denotes the estimation of the (t) moment based on the (t − 1) moment.The X (t−1|t−1) is the optimal solution for the location estimated at the previous moment, [P x(t−1) , P y(t−1) , P z(t−1) ] ′ .And the control input U (t) is the predicted location increment.Then, the update part of the Kalman filter for the localization data is as follows: Herein, P represents the prediction error covariance, K (t) is the Kalman gain, and X(t|t) is the state estimation.A is the state transition matrix, H is the observation matrix, and I is a 3 × 3 identity matrix.Z (t) is the actual measurement at the current moment, and its data are the tag's coordinate data calculated by the geometric localization model in Equation (15).Referring to the tests of UWB and BMP sensors in Section 3, the process noise covariance matrix Q in the Kalman filter is set to diag([1 × 10 −4 , 1 × 10 −4 , 1 × 10 −3 ]), and the mea- surement noise covariance matrix R is set to diag([4 × 10 −4 , 4 × 10 −4 , 5 × 10 −3 ]).Finally, the positioning system's output at the current moment is X(t|t) , i.e., the estimated location of the tag [Tag x(t) , Tag y(t) , Tag z(t) ] ′ .
In addition, at the start of the localization system operation, the Kalman filter cannot function accurately immediately due to the lack of initial measurement data for prediction.Therefore, at the first moment, the filter does not perform computations, and its output is directly set to the input positioning data.The initial error covariance matrix is set as a diagonal matrix diag[1, 1, 1], and from the second moment onwards, the filter begins to operate formally.
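A compact version of this filter is sketched below: the state is the 3D location, the velocity-based increment acts as the control input, and the geometric localization result is the measurement; Q, R, and the initial covariance follow the values quoted above. The class and variable names are our own.

```python
# Position-only Kalman filter sketch with A = H = I (identity matrices).
import numpy as np

Q = np.diag([1e-4, 1e-4, 1e-3])          # process noise covariance from the text
R = np.diag([4e-4, 4e-4, 5e-3])          # measurement noise covariance from the text
I = np.eye(3)

class PositionKF:
    def __init__(self, first_measurement):
        self.x = np.asarray(first_measurement, dtype=float)  # first output = first input
        self.P = np.eye(3)                                    # initial covariance diag[1, 1, 1]

    def step(self, z, velocity, dt):
        u = np.asarray(velocity, dtype=float) * dt            # control input U(t)
        x_pred = self.x + u                                   # predicted location X(t|t-1)
        P_pred = self.P + Q
        K = P_pred @ np.linalg.inv(P_pred + R)                # Kalman gain
        self.x = x_pred + K @ (np.asarray(z, dtype=float) - x_pred)
        self.P = (I - K) @ P_pred
        return self.x

kf = PositionKF([0.0, 0.0, 0.0])
print(kf.step(z=[0.05, 0.02, -0.01], velocity=[0.1, 0.0, 0.0], dt=0.027))   # dt ~ 1/37 Hz
```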
This section introduced the proposed indoor 3D localization method utilizing sensor fusion.UWB sensors at three anchors calculated distances to the tag, while dual BMP sensors provided height estimates, enabling the mapping of UWB sensors' range measurements onto a 2D plane.The geometric localization model, based on these measurements, integrated various relationships and compensated for errors.The tag's location on the anchor's plane was determined using the centroid method, and Kalman filtering was applied to enhance the accuracy of the 3D location estimates.The scheme will be validated experimentally in the following section.
Experimental Setup
The experimental validations were located in indoor environments.One setting was in a laboratory equipped with a motion capture system, and the other was a hall with more space.The experimental setup for the two scenarios is shown in Figure 12.The UWB sensors in the individual anchors were adjusted to the same height H Tripod by means of a tripod, and the parameters of the anchors' positions are listed in Table 2.
In the laboratory environment, a motion capture system (produced by OptiTrack) was used to capture the tag's reference positions, with the marker fixed on the tag. This system covers a measurable area of 3 m (length) × 2.5 m (width) × 2 m (height) with millimeter accuracy. Results from the motion capture system serve as reference locations for the tag and are used to evaluate the localization system's accuracy. Additionally, to verify the maximum locatable range of the proposed system, a static tag localization experiment was carried out in a spacious hall setting. Moreover, Figure 12 shows the base coordinate systems of both the proposed localization system and the motion capture system. The base coordinate system of the localization system {O} is as described in the geometric localization model of Section 4.1. The base coordinate system of the motion capture system {MCS} is established through a device containing three MCS markers, positioned directly below {O} and parallel to the ground. As the frames of the two systems reside in different coordinate systems, registration of the two frames is required before data can be compared. Each frame of the motion capture system is transformed into the proposed localization system as P′_M,i = R^O_MCS P_M,i + d, where P_M,i is the i-th reference location in the {MCS} coordinate system, which is transformed to the coordinate P′_M,i in the {O} coordinate system by the rotation matrix R^O_MCS and the translation vector d. The specific values of R^O_MCS and d are obtained from the location information in the schematic. The result of the transformation is (z_PM,i, x_PM,i, y_PM,i − H_Tripod). It should be noted that there is a registration error in the two-frame registration process. The measurement point P_O of the localization system is the tag's UWB antenna, and the measurement point P_M of the motion capture system is the MCS marker. Due to the deviation of about 2 cm between the measurement points, the attitude change when the tag moves may slightly affect the registration accuracy.
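The frame registration of Equation (24) amounts to one rotation and one translation per point; the sketch below reproduces the axis permutation implied by the quoted result (z, x, y − H_Tripod). The H_Tripod value is illustrative, not taken from Table 2.

```python
# Transform a motion-capture point P_M,i into the localization frame {O}: P'_M,i = R @ P_M,i + d.
import numpy as np

H_TRIPOD = 1.3                                  # anchor plane height in m (illustrative value)
R_O_MCS = np.array([[0.0, 0.0, 1.0],            # maps (x, y, z)_MCS to (z, x, y) in {O}
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
d = np.array([0.0, 0.0, -H_TRIPOD])

def to_localization_frame(p_mcs):
    return R_O_MCS @ np.asarray(p_mcs, dtype=float) + d

print(to_localization_frame([0.4, 1.1, 2.0]))   # -> [2.0, 0.4, 1.1 - H_TRIPOD]
```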
In addition, the Root Mean Square Error (RMSE) is a common measure of the difference between estimated and reference values, defined in Equation (25) as RMSE = √((1/n) Σᵢ Err_i²), where Err_i denotes the Euclidean distance between the i-th estimated location and its corresponding reference location and n is the number of samples. A smaller RMSE value indicates a smaller difference between the estimated and reference values, signifying higher localization accuracy. After registering the frames of the localization system to the frames of the motion capture system, the specific definition of Err_i for the different error evaluation objects is as follows:
• Errors on the individual coordinate axes (Err_x,i, Err_y,i, Err_z,i): Err_x,i = |x_est,i − x_ref,i|, Err_y,i = |y_est,i − y_ref,i|, Err_z,i = |z_est,i − z_ref,i|.
• Two-dimensional localization error on the X-Y plane (Err_2D,i): Err_2D,i = √((x_est,i − x_ref,i)² + (y_est,i − y_ref,i)²).
• Three-dimensional localization error across the X-Y-Z axes (Err_3D,i): Err_3D,i = √((x_est,i − x_ref,i)² + (y_est,i − y_ref,i)² + (z_est,i − z_ref,i)²).
Here, x_ref,i, y_ref,i, and z_ref,i are the tag's reference location data collected by the motion capture system, and x_est,i, y_est,i, and z_est,i are the estimated location data of the proposed localization system.
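The error metrics and the RMSE of Equation (25) are straightforward to compute once both trajectories are in the same frame; the following sketch assumes two (N, 3) arrays of registered positions.

```python
# Per-axis, 2D, and 3D localization errors plus their RMSE values.
import numpy as np

def rmse(errors):
    errors = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(errors ** 2)))

def evaluate(est, ref):
    """est, ref: (N, 3) arrays of estimated and reference tag locations in frame {O}."""
    diff = np.asarray(est, dtype=float) - np.asarray(ref, dtype=float)
    err_axes = np.abs(diff)                          # Err_x,i, Err_y,i, Err_z,i
    err_2d = np.linalg.norm(diff[:, :2], axis=1)     # Err_2D,i
    err_3d = np.linalg.norm(diff, axis=1)            # Err_3D,i
    return {"rmse_x": rmse(err_axes[:, 0]), "rmse_y": rmse(err_axes[:, 1]),
            "rmse_z": rmse(err_axes[:, 2]), "rmse_2d": rmse(err_2d), "rmse_3d": rmse(err_3d)}
```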
Indoor 3D Localization Experiment
The experiment aims to verify the accuracy of the indoor 3D localization system, particularly the accuracy of the height estimation.For this purpose, the tag's moving reference trajectory was set as two rectangles (1.4 m × 1.6 m) with different heights.The reference locations of the tag were recorded by the high-precision motion capture system.The tag's trajectory, as shown in Figure 13, began at the point labeled 'Start', moved around a horizontal rectangle, then vertically up about 0.5 m.After completing another horizontal loop around the rectangle, it concluded at the point labeled 'End'.The tag's reference trajectory is depicted as the 'Reference' in the figure, with the geometric localization results shown in blue circles.The final Kalman filtering results produced by the localization system are indicated by the red line.The system's output rate was 37 Hz, processing nearly 2600 data sets in 70.5 s.To better differentiate the accuracy of localization in each dimension, Figure 14 displays the comparison of the tag's positions on the X-axis, Y-axis, and Z-axis.The gray background in the figure indicates the phase where the tag was ascending, flanked by rectangular path sections.It is evident that the system's localization results closely align with the reference trajectory.On the X and Y axes, there are relatively larger errors at the corners of the rectangular path.The optimization by the Kalman filter significantly enhances the geometric localization results.
The analysis of the results from the indoor 3D localization experiment is presented in Table 3.For the final output data of the localization system, the RMSE was about 0.041 m for the X-axis and Y-axis, and 0.028 m for the Z-axis.When the errors across different axes were superimposed, the 2D and 3D localization errors were also increased.Additionally, the cumulative distribution function (CDF) shown in Figure 15 demonstrates the specific error distribution for the 3D localization results.The accuracy of the tag's height estimation is crucial for 3D localization.In the Z-axis part, the results of the height error estimated by the dual barometric pressure sensors are as shown in Figure 16.Before filtering, the height error of the geometric localization results was about ±0.1 m.After filtering, the error was reduced to approximately ±0.05 m.
Locatable Range Verification Experiment
To verify the maximum locatable range of the proposed system, we conducted experiments in a more spacious hall. The three anchors were placed in the same planar positions as in the experimental setup, and the anchor height H_Tripod was set to 1.3 m. As shown in Figure 17, the stationary reference points for the test were set at the four vertices of a 6 m (length) × 6 m (width) × 2 m (height) region, and their positions were defined in the coordinate system of the localization system. Table 4 presents the RMSE comparisons of the 2D and 3D errors for the four points, calculated against their respective reference points. Figure 18 shows the detailed distribution of the 3D localization error results, where the blue boxes are the geometric localization results and the purple boxes are the Kalman filtering results. The analysis data and boxplots indicate that the Kalman filtering optimized the geometric localization results for all four points. Notably, the error variability range narrowed, the median error decreased, and the average 3D localization RMSE improved from 0.0860 m to 0.0740 m. These results confirm the system's locatability within a 6 m (length) × 6 m (width) × 2 m (height) range.
Discussion
In this section, we compare the proposed system with our previous study and other related studies, respectively.
Comparison with Our Previous Study
This study successfully implemented indoor 3D localization, with the proposed system being evaluated through two experiments.The first experiment displayed the localization trajectories of dynamic tags within a limited range, while the second demonstrated localization estimates of static tags over extended distances.The 2D RMSE results for both experiments were nearly identical, approximately 0.058 m.For the 3D experimental results, the RMSE at longer distances was 0.074 m, which was an increase of 0.01 m compared to the experimental results obtained near the anchor.
Compared to our previous study [20], this research developed an enhanced indoor 3D localization system, with significant upgrades and updates to both hardware and software:
1. In terms of hardware, the UWB sensor was upgraded from a patch type to an antenna type. To account for the propagation direction and range of the antenna signal, a short antenna with 2 dBi gain was selected to enhance the indoor omnidirectional ranging capability of the UWB sensors.
2. The main controller was upgraded from the ESP8266 to the dual-core ESP32 chip. By rationally allocating tasks through the software, one core was dedicated to Wi-Fi data reading, making the data output of the localization system faster and more stable. The localization output rate reached 37 Hz, a nine-fold increase.
3. The dispersed arrangement of the anchors allowed the locatable area of the new localization framework to expand. The verified locatable range of the system was 6 m (length) × 6 m (width) × 2 m (height), approximately three times larger than that of the previous localization device.
4. In particular, the RMSE of the 3D localization system reached 0.074 m, improving the localization accuracy by 40.7% compared to our previous study.
Comparison with Other Related Studies
Although a high-precision motion capture system was employed during the experimental validation, it uses fixed motion capture cameras that are not easily moved and are expensive.Our localization system features quick anchor setup, and the estimated tag trajectories closely match the reference values, offering more economic value in some application scenarios.
In the height estimation, localization using three UWB sensors makes it difficult to distinguish between positive and negative tag heights.A study on indoor 3D drone localization increased the accuracy of height estimation by adding a UWB sensor at a different height [13].However, this study reports that an error of approximately 0.2 m still exists in the height estimation.Moreover, as bandwidth utilization increases, a lower sensor data output rate (5 Hz) significantly impacts the accuracy of the localization system.Ma's research designed a 3D localization system for indoor mobile robots using four UWB anchors at different heights [31], compensating for the signal interference from patch-type UWB sensors, with decimeter-level accuracy meeting the application requirements.In areas with small height differences, a single BMP sensor shows insignificant pressure variations [6], and the accuracy is also greatly reduced by the effect of barometric drift.However, our proposed method using dual BMP sensors excels in height estimation, achieving an accuracy of ±0.05 m.
Tables 5 and 6 compare the localization performance of more different studies by RMSE results.In 2D planar localization, our approach achieves over twice the accuracy compared to methods that utilize additional UWB sensors [13,37,38].Zigbee-based localization methods are slightly less precise [39], and LiDAR faces challenges with repetitive localization accuracy [40].
In indoor 3D localization, a substantial number of sensors are required to support localization in larger buildings, where significant barometric changes can be used to estimate floor levels [6]. For room-sized environments, an indoor localization method that integrates SLAM and UWB technologies demonstrates notable accuracy [41]. Yoon et al. developed a system that combines an IMU and UWB, achieving high-accuracy localization in smaller areas for entertainment scenarios [42]. This study achieved a 60% performance improvement over similar research with a comparable measurement scope [13]. While the localization accuracy provided by four anchors at varying heights meets the needs of the intended scenarios [31], inaccuracies in any UWB range measurement can significantly increase the error in height estimation. To tackle this issue, our system incorporates dual BMP sensors to improve the precision of the height estimation. By utilizing the geometric localization model and Kalman filtering, the system's indoor 3D localization RMSE is optimized to 0.074 m. However, the proposed system is not without flaws. Continuous thick obstructions between UWB sensors can impact the accuracy of some distance measurements, leading to reduced overall localization accuracy. The dual-BMP height estimation has been validated indoors, but the relative barometric pressure measurements can be unstable near air-conditioning vents, where local airflow is decoupled from the overall indoor barometric pressure. Meanwhile, employing additional reference anchors would furnish the localization system with more measurement data, but more non-linear factors must then be considered in real-world localization applications. We plan to further optimize and explore these aspects in subsequent studies. Furthermore, limited by the size of the battery, future improvements could involve using smaller lithium batteries and integrating all components on the same PCB to further reduce the size of the tag. Alternatively, a smartwatch that integrates UWB sensors, BMP sensors, and Wi-Fi transceivers could serve as alternative hardware for the tag.
Conclusions
This study developed an enhanced indoor 3D localization system utilizing UWB and BMP sensors.The system features dispersed anchors as reference points within an indoor environment.The anchors were set on tripods and could be easily arranged in new environments.Filters reduced measurement noise for both types of sensors.BMP sensors measured barometric pressure at various indoor heights.The barometric pressure value at the tag was sent to the main controller through a Wi-Fi enabled microcontroller.The tag's relative height was estimated by comparing it with the barometric pressure value at the anchor point, and the error of the estimated height result was about ±5cm.This height value is also used to project the UWB sensor's measurements onto the anchor's plane, which helps reduce errors in 2D localization.UWB sensors at the three anchors calculated distances to the tag.The established geometric localization model considered various geometric relations and compensated for potential errors.The tag's projection location on the anchor's plane was determined using the centroid method.Finally, Kalman filtering optimized the location estimation.
We validated the localization performance and locatable range of the proposed system through indoor 3D localization experiments.The system has fast and stable output.In particular, the height estimation scheme with dual barometers estimated the height results with an accuracy of about ±0.05 m.The RMSE of the 2D localization reached 0.0585 m, and the RMSE of the 3D localization reached 0.0740 m.Compared to indoor localization systems in similar environments, our system has a larger measurable range and higher localization accuracy.
In future research, we plan to use more anchors to provide measurement data for the indoor 3D localization system and to consider more non-linear factors to optimize the system's localization performance, with specific application scenarios such as indoor drone localization.
Figure 1. The hardware system framework.
Figure 4. Barometric pressure values measured by BMP sensors and filtering results in 30 min.
Figure 5. Estimated height values and relative height values based on dual BMP sensors in 30 min.
Figure 6. The calibration equation fitted based on the sampled measurement values and actual distance values.
Figure 7. Geometric model based on the three anchors.
Figure 8. Geometric localization model. The geometric calculation starts by determining whether these circles intersect, based on the locations of points A, B, and C, and the lengths of AP, BP, and CP. When each pair of circles has two intersection points, a total of six intersection points are generated.
Figure 9. Geometric localization model of circle A and circle C intersecting at two points.
Figure 10. Geometric localization model of circle A and circle C without intersections.
Figure 11. A possible scenario of the geometric localization model.
Figure 12. The schematic of the experimental setup.
Figure 13. Results of the tag's trajectories in the indoor 3D localization experiment.
Figure 14. Comparison of the tag's coordinates in three dimensions.
Figure 15. CDF results for the 3D localization errors in the localization experiment.
Figure 16. Results of the height errors in the localization experiment.
Figure 17. Results of the tag's locations in the locatable range validation experiment.
Figure 18. Boxplot results for the 3D localization errors in the locatable range validation experiment.
• Proposed and validated a method for estimating tag height based on dual BMP sensors, effectively compensating for most indoor barometric pressure deviations, and providing a more accurate height estimation than achievable with a single BMP sensor. For the challenge of indoor height estimation, our accuracy is approximately ±0.05 m, with an RMSE of 0.0282 m.
• Developed a hardware framework that enhances the system to be more efficient and stable, with a localization output rate reaching 37 Hz, a nine-fold increase compared to earlier designs.
• Our portable localization system covers a larger measurable range. The proposed geometric localization model and the Kalman filtering technique are empirically validated, showing a 2D localization RMSE of 0.0585 m and a 3D localization RMSE of 0.0740 m. Compared to indoor localization systems with a similar number of anchors, ours offers an extended measurable range and superior accuracy.
Table 1. The parameters of the devices used in the hardware system.
3. Spatial Distance Measurement
3.1. Height Measurement with Dual BMP Sensors
Algorithm excerpt: 1: if (L_UWB_A(t) < a predefined close distance value for calibration) then ... 4: P′_Tag(t) ← P_Tag(t) + P_diff; 5: H_dual(t) ← dual BMP sensor height calculation with Equation (6); 6: P_x = (1/3)(P_ABx + P_ACx + P_BCx), P_y = (1/3)(P_ABy + P_ACy + P_BCy), P_z = H_dual.
Table 2. The parameters of the experimental setup.
Table 3. Analysis of the indoor 3D localization experimental results.
Table 4. Analysis of the locatable range validation experimental results.
Table 5. Comparison of 2D localization accuracy of different methods.
Table 6. Comparison of 3D localization accuracy of different methods.
"Engineering",
"Computer Science",
"Environmental Science"
] |
Recent Developments in the Theory and Applicability of Swarm Search
Swarm intelligence (SI) is a collective behaviour exhibited by groups of simple agents, such as ants, bees, and birds, which can achieve complex tasks that would be difficult or impossible for a single individual [...].
Overview
Swarm intelligence (SI) is a collective behaviour exhibited by groups of simple agents, such as ants, bees, and birds, which can achieve complex tasks that would be difficult or impossible for a single individual. The collective behaviour of these organisms is characterized by decentralized decision making, self-organization, adaptive responses to environmental changes, and emergent properties that are not present in individual organisms. SI algorithms emulate these features to solve complex optimization, control, classification, clustering, routing, and prediction problems in diverse domains, such as engineering, robotics, biology, economics, social sciences, and humanities [1].
SI algorithms can be classified into two main categories: swarm-based algorithms and swarm-inspired algorithms [2]. Swarm-based algorithms involve the simulation of a population of individuals (agents) that interact with each other and their environment to achieve a collective goal. Examples of swarm-based algorithms include ant colony optimization (ACO) [3], particle swarm optimization (PSO) [4], artificial bee colony (ABC) [5], and firefly algorithm (FA) [6]. Swarm-inspired algorithms, on the other hand, extract specific mechanisms or principles from natural swarms and incorporate them into conventional optimization or machine learning algorithms. Examples of swarm-inspired algorithms include artificial immune systems (AIS) [7], bacterial foraging optimization (BFO) [8], and grey wolf optimizer (GWO) [9].
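As a concrete reference point for readers less familiar with these methods, a minimal global-best PSO loop is sketched below. This is the textbook formulation rather than any specific variant cited above, and the inertia and acceleration coefficients are common default choices, not values prescribed by this Special Issue.

```python
# Minimal particle swarm optimization (PSO) sketch for a 2-D test function.
# Standard "global best" PSO; parameter values are typical textbook defaults.
import random

def sphere(x):
    """Objective to minimize: f(x) = sum(x_i^2), optimum at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso(sphere))   # best position near (0, 0) and a value close to zero
```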
The success of SI algorithms is attributed to their ability to efficiently explore a large search space, converge to optimal or near-optimal solutions, and handle multiple objectives or constraints simultaneously. The collective intelligence of the swarm enables the sharing and exchange of information, the exploitation of promising regions, and the avoidance of suboptimal regions. Furthermore, the decentralized and distributed nature of the swarm allows for scalability, robustness, fault-tolerance, and adaptivity to dynamic or uncertain environments [10].
Despite their advantages, SI algorithms face several challenges and limitations, such as premature convergence, scalability issues, sensitivity to parameter settings, lack of theoretical guarantees, and difficulty in interpreting or explaining the obtained results. Researchers have proposed various approaches to overcome these challenges, such as hybridization with other optimization or machine learning techniques, dynamic adaptation of parameters, incorporation of domain knowledge, and rigorous analysis of convergence properties.
Applications
The advancement of technology has spurred a growing demand for multi-agent and swarm robotics solutions to address an ever-expanding range of complex and diverse challenges. With the emergence of distributed systems, it has become increasingly clear that relying solely on a single robot may not be the optimal approach for many application domains. Instead, teams of robots are being called upon to work in a coordinated and intelligent fashion, leveraging the power of redundancy to achieve greater efficiency and reliability.
The benefits of multi-agent systems stem from their ability to harness the collective intelligence of multiple entities, allowing them to tackle complex tasks that would be beyond the capability of a single robot. This approach provides the flexibility to scale up or down the number of robots based on the task at hand, while also providing redundancy to ensure mission success even in the face of individual robot failures. Moreover, multi-agent systems can leverage complementary skills and diverse perspectives, leading to improved problem-solving capabilities and more robust decision making.
Swarm robotics takes the concept of multi-agent systems a step further by drawing inspiration from the collective behaviour of natural swarms, such as ants, bees, and birds. Swarm robotics seeks to emulate the self-organizing and adaptive behaviour of swarms in order to create distributed systems that can operate autonomously and efficiently. By leveraging simple local interactions between agents, swarm robotics can achieve complex global behaviours, such as exploration, foraging, or assembly, without the need for centralized control or explicit communication. The emergence of swarm robotics opens up exciting new possibilities for applications in fields such as search and rescue, environmental monitoring, and precision agriculture.
In [11], a detailed description of swarm-robotics application domains is presented, demonstrating how large-scale decentralized systems of autonomous robotic agents can be significantly more effective than a single robot in many areas. However, when designing such systems, it should be noted that simply increasing the number of robots assigned to a task does not necessarily improve the system's performance: multiple robots must cooperate intelligently to avoid disturbing each other's activity and to achieve efficiency.
In nature, "simple-minded" animals such as ants, bees or birds cooperate to achieve common goals and exhibit amazing feats of collaborative work. It seems that these animals are "programmed" to interact locally in such a way that the desired global behaviour is likely to emerge even if some individuals of the colony die or fail to carry out their task for other reasons. A similar approach may be considered for coordinating a group of robots without a central supervisor, by using only local interactions between the robots. When this decentralized approach is used, much of the communication overhead (typical of centralized systems) is saved, the hardware of the robots can be fairly simple, and better modularity is achieved. A properly designed system should be readily scalable, achieving reliability through redundancy.
There are several key advantages to the use of such intelligent swarm robotics. First, such systems inherently enjoy the benefit of parallelism. In task-decomposable application domains, robot teams can accomplish a given task more quickly than a single robot, by dividing the task into sub-tasks and executing them concurrently. In certain cases, a single robot may simply be unable to accomplish the task on its own (e.g., to carry a large and heavy object).
Second, decentralized systems tend to be, by their very nature, much more robust than centralized systems (or systems comprised of a single but very complex unit). Generally speaking, a team of robots may provide a more robust solution by introducing redundancy and by eliminating any single point of failure. Considering the alternative of using a single sophisticated robot, we should note that even the most complex and reliable robot may suffer an unexpected malfunction that prevents it from completing its task. When using a multi-agent system, on the other hand, even if a large number of the agents stop working for some reason, the entire group will often still be able to complete its task, although perhaps more slowly. For example, for exploring a hazardous region (such as a minefield or the surface of Mars), the benefit of redundancy and robustness offered by a multi-agent system is quite obvious, and it is in this context that Rodney Brooks wrote the famous "Fast, Cheap and Out of Control" report [12].
Another advantage of the decentralized swarm approach is the ability to dynamically reallocate sub-tasks between the swarm's units, thus adapting to unexpected changes in the environment. Furthermore, since the system is decentralized, it can respond relatively quickly to such changes, due to the benefit of locality: the ability to swiftly respond to changes without the need to notify a hierarchical "chain of command". Note that as the swarm becomes larger, this advantage becomes increasingly important.
In addition to enabling a quick response to changes, the decentralized nature of such systems also improves their scalability. The scalability of multi-agent systems derives from relying on the "emergence" of task completion through protocols with inherently low communication and computation overhead implemented by the agents. As the tasks assigned nowadays to multi-agent-based systems become increasingly complex, so does the importance of the high scalability of these systems.
Finally, by using heterogeneous swarms, even more efficient systems could be designed, thanks to the utilization of different types of agents whose physical properties enable them to perform much more efficiently in certain special tasks.
Unfortunately, the mathematical and geometrical theory of such multi-agent systems is far from being satisfactory, as pointed out in [104-107] and many other papers.
Our interest is focused on developing the mathematical tools necessary to design and analyse such systems. For example, in [108] it was shown that a number of agents can arrange themselves equidistantly in a row via a sequence of linear adjustments, based on a simple "local" interaction. The convergence of the configuration to the desired one is exponentially fast. A different way of cooperation between agents, inspired by the behaviour of ant colonies, is described in [109]. There it was proven that a sequence of ants engaged in deterministic chain pursuit will find the shortest (i.e., straight) path from the ant hill to the food source, using only local interactions. In [110], the behaviour of a group of agents on Z^2 was investigated, where each ant-like agent pursued its predecessor, according to a discrete biased-random-walk model of pursuit on the integer grid. The average paths of such a sequence of a(ge)nts engaged in a chain of probabilistic pursuit were shown to converge to the "straight line" between the origin and destination, and this too happens exponentially fast.
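A toy version of the equidistant-row behaviour mentioned above can be simulated in a few lines. The midpoint-averaging rule and step size below are illustrative assumptions and are not claimed to reproduce the exact local interaction analysed in [108], but they show the same qualitative outcome: fast convergence to an evenly spaced configuration.

```python
# Toy illustration of the "equidistant row" idea: agents on a line between two
# fixed endpoints repeatedly move halfway toward the midpoint of their two
# neighbours. The configuration converges (exponentially fast in practice) to
# an evenly spaced row. The update rule and step size are illustrative only.
import random

positions = sorted([0.0] + [random.uniform(0, 10) for _ in range(8)] + [10.0])

for _ in range(500):
    new = positions[:]
    for i in range(1, len(positions) - 1):        # the two endpoints stay fixed
        midpoint = 0.5 * (positions[i - 1] + positions[i + 1])
        new[i] = positions[i] + 0.5 * (midpoint - positions[i])
    positions = new

print([round(p, 3) for p in positions])           # ~0.0, 1.111, 2.222, ..., 10.0
```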
An in-depth analysis of the effect of certain geometric properties on the search efficiency of a collaborative swarm of autonomous drones appears in [111,112], whereas an example of a set of analytic complexity bounds for this problem can be found in [113,114]. A work that analysed the effect of a stochastic framework for the same problem is presented in [115].
Decentralized Intelligence Architectures and the Swarm Paradigm
A key principle in the notion of swarms, or multi-agent robotics, is the simplicity of the individual agent. The notion of "simplicity" here means that the agents should be significantly simpler than a "single sophisticated system", which could be constructed for the same purpose. As a result, the capabilities and resources of such simple agents are assumed to be very limited, with respect to the following aspects:
• Memory resources: basic agents should be assumed to contain only O(1) memory resources (i.e., the size of memory is independent of the size of the problem or the number of agents). This usually imposes many interesting limitations on the agents. For example, agents can remember only a limited history of their activities so far. Thus, protocols designed for agents with such limited memory resources are usually very simple and attempt to solve a given problem by relying on some (necessarily local) basic patterns arising in the environment; the task is completed by a repetition of these patterns by a large number of agents.
• Sensing capabilities: defined according to the specific nature of the problem. For example, for agents moving along a 100 × 100 grid, a reasonable sensing radius may be 3 or 4, but certainly not 40.
• Computational resources: although agents are assumed to employ only limited computational resources, a formal definition of this constraint is hard to give. In general, polynomial-time algorithms may be used, provided that the amount of memory the agents have is sufficient.
• Communication is very limited: the issue of communication in multi-agent systems has been extensively studied in recent years. Distinctions between implicit and explicit communication are usually made, in which implicit communication occurs as a side effect of other actions, or "through the world" (see, for example, [116]), whereas explicit communication is a specific act intended solely to convey information to other robots on the team. Explicit communication can be performed in several ways, such as short-range point-to-point communication, a global broadcast, or some sort of distributed shared memory. Such memory is often referred to as a pheromone, used to convey small amounts of information between the agents [22,117-119]. This approach is inspired by the coordination and communication methods used by many social insects: studies on ants (e.g., [120,121]) show that the pheromone-based search strategies used by ants in foraging for food in unknown terrains tend to be very efficient. Additional information can be found in the relevant NASA survey focusing on "intelligent swarms" comprised of multiple "stupid satellites" [122,123], or in the survey conducted by the US Naval Research Center [124]. The lack of explicit communication poses a challenge for various special configuration sets, such as symmetric environments [111].
In the spirit of designing a system which uses agents that are as simple as possible, we aim for the agents to have as few communication capabilities as possible. With respect to the taxonomy of multi-agent systems discussed in [125], we would be interested in using agents of the types COM-NONE or, if necessary, COM-NEAR with respect to their communication distances, and BAND-MOTION, BAND-LOW, or even BAND-NONE (if possible) with respect to their communication bandwidth. Therefore, although a certain amount of implicit communication can hardly be avoided (due to the simple fact that, by changing the environment, the agents are constantly generating some kind of implicit information), explicit communication should be strongly limited or avoided altogether in order to fit our paradigm (note that in many works in this field this is not the case, and communication, as well as memory resources, are often used in order to create complex cooperative systems).
In summary, while designing intelligent swarm systems we must assume (and often even aspire to) having available individual agents that are myopic, mute, senile, and rather stupid.
Limitations
While SI has been applied successfully in many fields, including optimization, robotics, and networking, it also has limitations that need to be taken into account. One of the main limitations of SI is its sensitivity to initial conditions and parameter settings. Small changes in the initial configuration or the parameters of the swarm can have a significant impact on its behaviour and performance, leading to suboptimal solutions or even failure to converge. This problem is exacerbated in large-scale systems, where the number of variables and interactions increases exponentially [10].
Another limitation of SI is its vulnerability to perturbations and disturbances. Swarms are designed to be robust and resilient to individual failures or disruptions, but they can be vulnerable to systemic disturbances [126], such as environmental changes, resource depletion, or external attacks. These disturbances can destabilize the swarm beyond its self-emergent macroscopic regularities [127], leading to disintegration, divergence, or oscillations.
Real-world examples of these limitations include the behaviour of ant colonies in changing environments. Ants use SI to forage for food and build nests, but they are also susceptible to disturbances such as climate change or human intervention. In some cases, ant colonies can collapse or become maladapted to their environment due to the loss of critical resources or the disruption of communication channels.
Another limitation of SI is related to the trade-off between exploration and exploitation. Swarms can achieve impressive results by exploring a large search space and exploiting the best solutions found. However, there is a risk of getting stuck in local optima or suboptimal regions of the search space, especially if the swarm lacks diversity or adaptability [128]. In some cases, the swarm may require a balance between exploration and exploitation to achieve the best results, which can be challenging to achieve in practice [54].
A related limitation is the scalability of SI [113]. While swarms can scale up to thousands or millions of agents, the computational and communication overheads can become prohibitive in large-scale systems. The swarm may require efficient algorithms for coordination, decision making, and resource allocation, which can be difficult to design and optimize. Such limitations may take form, for example, when SI is used in traffic management systems. Swarms of autonomous vehicles or drones can optimize traffic flow and reduce congestion by coordinating their movements and avoiding collisions [83]. However, these systems require efficient algorithms for path planning, decision making, and communication, as well as robust mechanisms for handling uncertainties and unexpected events.
Another example is the application of SI in social networks. Swarms of agents can learn and adapt to social dynamics by interacting with each other and with the environment [129]. However, these systems are also susceptible to biases, echo chambers, and polarization, which can affect their ability to explore new ideas and perspectives [130].
Swarm Search with Communication
While decentralized swarms have been the main focus of swarm-based search algorithms due to their scalability and simplicity, there are also several promising works that utilize synchronization or communication among the agents. These parallel swarms often employ communication to enhance the efficiency of the search process, such as parallel ant colony optimization, parallel particle swarm optimization, and other parallel metaheuristic approaches.
Parallel ant colony optimization (PACO) [131] is an example of a parallel swarm algorithm that utilizes communication among agents. PACO algorithms allow multiple agents to cooperate by sharing pheromone information, which helps in quickly identifying the optimal solution. For instance, PACO has been used in multi-robot coverage problems, where a group of robots are required to explore an unknown environment while avoiding collisions with each other. By sharing pheromone information, the robots can quickly converge to a solution, even in complex and large environments [22,132].
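The flavour of pheromone-mediated cooperation can be conveyed by a schematic coverage example: several robots share a pheromone grid, always step onto the least-marked neighbouring cell, and deposit pheromone as they go. This is a deliberately simplified illustration, not the PACO variant of [131] or the coverage algorithms of [22,132]; the grid size, robot count, and deposit rule are assumptions.

```python
# Schematic multi-robot coverage with a shared pheromone grid (illustrative,
# not a specific published algorithm). Each robot moves to the least-marked
# neighbouring cell and deposits pheromone; the shared map is the only
# "communication" between robots.
import random

W, H, STEPS = 12, 12, 400
pheromone = [[0.0] * W for _ in range(H)]
robots = [(0, 0), (H - 1, W - 1), (0, W - 1)]
visited = set(robots)

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W:
            yield nr, nc

for _ in range(STEPS):
    new_robots = []
    for r, c in robots:
        pheromone[r][c] += 1.0                     # mark the current cell
        options = list(neighbours(r, c))
        least = min(pheromone[nr][nc] for nr, nc in options)
        best = [cell for cell in options if pheromone[cell[0]][cell[1]] == least]
        nxt = random.choice(best)                  # break ties randomly
        new_robots.append(nxt)
        visited.add(nxt)
    robots = new_robots

print(f"covered {len(visited)} of {W * H} cells")
```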
Parallel particle swarm optimization (PPSO) [133] is another example of a parallel swarm algorithm that uses communication among agents. PPSO is a variant of particle swarm optimization (PSO) that allows multiple agents to communicate with each other to improve the search process. For instance, PPSO has been used to optimize complex systems such as power grids, where the agents need to communicate to efficiently manage the distributed resources [134].
In cases where decentralized swarms may not be sufficient, parallel swarms can be beneficial. For example, in situations where the problem space is complex, the search space is vast, and the search process is time-critical, parallel swarm algorithms can offer a significant advantage over decentralized swarms. In such scenarios, communication among agents can help to identify the optimal solution more quickly and efficiently.
However, one of the main drawbacks of parallel swarm algorithms is the increased complexity of the communication mechanisms, which may require significant computational resources [135]. Additionally, communication can also lead to increased synchronization overhead, which may impact the scalability of the algorithm. Thus, in cases where the problem space is relatively simple, decentralized swarm algorithms may still be a better choice.
While decentralized swarms remain the main focus of swarm-based search algorithms, parallel swarm algorithms that utilize communication among agents have shown significant promise in enhancing the efficiency of the search process. These algorithms have been used in various applications, such as multi-robot coverage problems and power grid optimization. However, the increased complexity of communication mechanisms and synchronization overhead should also be considered when deciding on the appropriate approach for a given problem.
Opportunities and Future Research
SI and swarm systems have received considerable attention in recent years due to their potential for solving complex problems in various fields, such as robotics, optimization, and network design. As a result, there are numerous opportunities for future research in this area.
One promising avenue for future research is the development of more sophisticated algorithms and models for SI. While current approaches have shown promise, there is still much to be done in terms of improving the efficiency and adaptability of swarm systems [136]. Researchers may explore new ways to optimize the communication and coordination of swarm agents, or develop new approaches for dealing with the inherent uncertainty and complexity of real-world environments [137].
Another important area for future research is the application of SI to real-world problems. While there have been many successful demonstrations of swarm systems in laboratory settings, there is a need for more research on how to apply these systems to real-world problems. This may involve working with industry partners to develop practical solutions that can be deployed in the field, or collaborating with government agencies to address societal challenges such as disaster response or urban planning [138,139].
In addition to these technical challenges, there are also important ethical and social considerations to be addressed. As swarm systems become more advanced and pervasive, there may be concerns around issues such as privacy, security, and control. Researchers may need to explore new ways to address these concerns, such as developing transparent and accountable algorithms, or working with policymakers to establish appropriate regulations and standards [140].
Overall, there are numerous opportunities for future research in SI and swarm systems. By continuing to explore these systems and their potential applications, researchers can help to unlock new solutions to complex problems and contribute to the advancement of science and technology.
Conclusions
The study of SI has revealed that even seemingly simple organisms, such as ants, can exhibit complex and sophisticated collective behaviours when allowed to work together in a synergistic manner. This insight has led researchers to investigate the potential for applying this approach to artificial intelligence and robotics, with promising results.
In this Special Issue, a number of research studies have been presented that demonstrate the power of SI in producing complex and adaptive behaviours. By studying the ways in which ants and other social insects cooperate and communicate with one another, researchers have been able to develop algorithms and models that can be applied to a wide range of problems.
One of the key insights from these studies is that individual agents within a swarm do not necessarily need to be highly intelligent or even aware of the larger goals of the group. Rather, by following simple rules and responding to local cues, they can collectively produce intelligent and adaptive behaviours that emerge at the swarm level.
This approach has numerous potential applications, from optimizing traffic flow to coordinating the movements of swarms of robots in search and rescue operations. By harnessing the power of SI, researchers are exploring new ways to tackle complex problems that would be difficult or impossible for any individual agent to solve alone.
Overall, the research presented in this Special Issue provides compelling evidence that even the simplest organisms can exhibit remarkable intelligence and adaptability when working together in a synergistic manner. By taking inspiration from nature, researchers are opening up exciting new avenues for developing advanced technologies that can benefit society in countless ways.
In summary, let us cite a statement made by a scientist after watching an ant making his laborious way across a wind-and-wave-moulded beach [141]: "An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the environment in which it finds itself." Such a point of view, as well as the results of the research presented in this Special Issue, lead us to believe that even simple, ant-like beings, when allowed to synergically collaborate, can yield a complicated, adaptive and quite efficient macroscopic behaviour, in the intelligent swarm-level scope.
"Computer Science",
"Biology"
] |
Climate Policy in Household Sector
Compared to the industry sector, progress on energy conservation in the household sector has been very slow. This is because the household sector is more diverse than the industrial sector, and regulatory enforcement is much more difficult. The government can stop firms' operations if their environmental burden is too heavy but cannot stop households' activities. Therefore, the government needs to find energy conservation policies that are supported by the public. Like other countries, the Japanese government has introduced various energy conservation measures to reduce the energy usage of households over the past several decades. It has introduced energy efficiency standards for energy-consuming durables and provided subsidies to promote energy-efficient products in recent years. At the same time, it has raised the price of energy in order to provide households with an appropriate incentive to conserve. In addition, it has promoted renewable energy usage in the household sector. Facing climate change, the Japanese government has not introduced energy conservation measures systematically but rather on an ad hoc basis. In this chapter, we review energy conservation measures implemented in the household sector in Japan. We then make policy recommendations to enhance the effectiveness of energy conservation measures in the household sector.
Introduction
Households use energy for transport and housing. Excluding fuel consumption for passenger vehicles, household energy consumption accounts for about one-fifth of global energy consumption. However, the share exceeds one-third if that fuel consumption is included (International Energy Agency (IEA) 2016). For the past several decades, countries have implemented various energy saving measures to reduce household energy consumption. However, energy saving in the household sector has not been as successful as in other sectors. For example, in EU countries, while industry energy consumption decreased by 16.4% from 2005 to 2016, household energy consumption decreased only by 8.0% (European Environment Agency 2019). This trend is also visible in Japan: industrial energy consumption decreased by 17.9% from 1990 to 2017, while household energy consumption increased by 42.0% (National Institute for Environmental Studies 2019).
The Japanese government's mid-term target is a 26.0% reduction in greenhouse gas (GHG) emissions from their 2013 level by the year 2030, and to reduce household GHG emissions by 39.3% during this period (Ministry of the Environment 2020). Although household energy consumption began decreasing in 2012, the reduction over the past five years is only 12.3%, which is obviously too slow to achieve the mid-term target.
Japan has had another difficult energy policy problem since 2011: the Fukushima accident increased awareness of the risks of nuclear power, while decreasing its public support. The share of nuclear power in Japan's electricity supply before the accident was about 30%, and it decreased to 1% in 2017 (Agency for Natural Resources and Energy 2017). Although the government states that the desirable share of nuclear power in 2030 is approximately 20-22%, there is strong objection to this plan (nippon.com 2015). On the other hand, Japan lags other developed nations in introducing renewable energy (see Chap. 4).
Slow progress in energy conservation measures and the difficulty in shifting toward alternative energy highlight the importance of household energy conservation measures. The objectives of this chapter are to investigate Japanese household energy conservation measures, and to propose policies to achieve the 2030 target.
This chapter is structured as follows. In the next section, we examine energy usage among Japanese households. We review major energy conservation measures implemented in Japan and summarize their distinguishing features in Sect. 3. We conclude with policy recommendations to enhance the effectiveness of household energy conservation measures.
Characteristics of Japanese Households: International Comparison
Japan's energy consumption is characterized by a high share in the industrial sector and a low share in the household sector. Although its share has declined in recent years, the industrial sector still had the greatest share, 46%, in 2016. In contrast, the share of households in Japan's total energy consumption was 14% (Agency for Natural Resources and Energy 2017). This share is much lower than that of the EU, 26% (Eurostat 2016), or the US, 21% (US Energy Information Administration (EIA) 2018). Although US household energy consumption differs widely among the states, the annual energy consumption of the average American household was 81.3 GJ (EIA 2015). Similarly, although there is wide variation in energy consumption between countries, that of the average EU household was 54.0 GJ, according to Eurostat (2016). In contrast, the average Japanese household consumed only 33.5 GJ (Ministry of the Environment of Japan 2016). The energy consumption per household in Japan is about the level of Spain and Bulgaria.
Japanese shares of electricity, natural gas, propane gas, and kerosene in total energy sources were 52%, 19%, 11%, and 18%, respectively (Survey on Carbon Dioxide Emission from Households (SCDEH) by the Ministry of the Environment (2016)). In contrast, those in the US were 47%, 44%, 4%, and 5%, respectively (Residential Energy Consumption Survey (RECS) by the US Energy Information Administration (2015)). In the EU, natural gas accounted for 36% of household energy consumption, electricity 24%, renewables 18%, and petroleum products 11%, according to Eurostat (2016). Pertaining to CO2 emissions, the shares of electricity, natural gas, propane gas, and kerosene are 70%, 13%, 5%, and 12%, respectively, in Japan. These statistics indicate that Japanese households rely heavily on electricity. Table 1 compares energy use purposes across several countries and indicates that Japanese households use less energy for space heating, but more for lighting and appliances. It is interesting to note that Japanese households also use more energy
Historical Change in Household Energy Consumption
The National Survey of Family Income and Expenditure (NSFE) by the Statistical Bureau of Japan (1980-2014) is a nationwide cross-sectional survey initiated in 1959 and conducted every five years. It collects data on households' socioeconomic characteristics, such as income/expenditure, savings/liabilities, and ownership of durables, as well as housing information such as dwelling characteristics and site area. Using household micro data from the NSFE, we report the change in household energy consumption from 1989 to 2014 below. The NSFE data pose two major drawbacks. First, the data do not report actual energy consumption, but only the average monthly expenditure. We calculated the average monthly energy consumption from the monthly energy expenditure, which introduces measurement errors, since the price of energy varies across regions and depends on the type of contract held by the household. Second, the NSFE's sampling period is limited to between September and November, which corresponds to the fall season and requires less energy for room temperature control. Therefore, estimates based on the NSFE data may underestimate household energy consumption.
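The expenditure-to-consumption conversion described above can be illustrated as follows; the unit price and the sample bill are hypothetical placeholders rather than NSFE figures, which is precisely why the conversion introduces measurement error.

```python
# Hedged sketch of converting a monthly electricity bill into energy use.
# The unit price and the sample expenditure are hypothetical placeholders,
# not NSFE figures; actual prices vary by region and contract type.
KWH_TO_GJ = 0.0036          # 1 kWh = 3.6 MJ

def electricity_consumption_gj(monthly_expenditure_jpy, price_jpy_per_kwh=25.0):
    """Approximate monthly electricity use implied by an expenditure figure."""
    kwh = monthly_expenditure_jpy / price_jpy_per_kwh
    return kwh * KWH_TO_GJ

# Example: a 10,000 JPY bill at an assumed 25 JPY/kWh ~ 400 kWh ~ 1.44 GJ.
print(round(electricity_consumption_gj(10_000), 2))
```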
Although it is preferable to analyze the annual data to take account of the seasonal variation in energy consumption, we focus on energy usage in autumn due to the above-mentioned data limitation. Figure 1 shows the change in monthly energy consumption of Japanese households from 1989 to 2014. Electricity consumption increased until 2004 while natural gas and kerosene consumption decreased steadily; consequently, the overall energy consumption decreased from 4.70 to 3.61 GJ.
The share of energy sources varies between regions. Warmer urban regions use electricity mainly, while cold suburban regions use kerosene more intensively. More specifically, the share of kerosene in Hokkaido, the coldest prefecture in Japan, was 60.4% in 2014, while in Tokyo it was 19.6%; and the share of electricity in Hokkaido was 24.7% while in Tokyo it was 40.6% (NSFE 2014).
Electric Appliance Ownership
As explained so far, Japanese households depend on electricity for much of their energy consumption, and they do so through home electric appliances. Here, we report how the ownership of home electric appliances has changed among Japanese households since the 1980s. Given that approximately 60% of the electricity is consumed for ... .3% own a TV, and 89.12% own an AC. The penetration of these electric appliances is complete, and Japanese households have increased the number of TVs and ACs over the past 30 years. The 2014 NSFE reports that 79.87% of households own multiple TVs and 79.87% own multiple ACs.
The 2014 NSFE asked respondents whether they were using light-emitting diodes (LEDs), which are more energy efficient than conventional fluorescent lamps, and found that only 31.42% of households had installed LEDs, suggesting a significant energy saving potential.
The ownership of home electric appliances is associated with households' characteristics. Table 2 shows the number of ACs/LEDs/TVs/REFs used in the average Japanese household. Single-person households tend to own fewer appliances than multi-person households. For example, the average multi-person household owns 2.74 ACs while the average single-person household owns only 1.83 ACs. However, multi-person households are more likely to install LEDs than single-person households, suggesting that multi-person households may be more energy saving. Pertaining to the relationship between appliance ownership and housing characteristics, Table 2 indicates that households living in a detached house tend to own more appliances than those living in apartments, while the former use LEDs more frequently than the latter.
In the Tokyo metropolitan area, the number of households increased from about 4.29 million to about 6.70 million between 1980 and 2015. However, the ownership of ACs per 100 households increased from 95 units in 1982 to 301 units in 2015 (BETMG 2018). The growth rate of AC ownership is therefore substantially higher than that of households. This is because Japanese households began purchasing additional air conditioners in order to make spending time at home more comfortable. This comparison of growth rates suggests that reducing energy consumption is not an easy task, even in a society with a declining population.
Electric Appliance Usage
Household appliance ownership is not directly associated with energy consumption, and it is necessary to know how intensively households use appliances in order to understand household energy consumption. Here, we report the intensity of appliance use from the SCDEH (2016). Table 3 indicates a large variation in the time of TV and AC use across households. The median time of TV use is around 4-8 h. However, about 8% of households do not watch TV on weekdays, and about 7.5% of households keep TVs on for considerably longer. Table 3 also compares the time of TV and AC use across regions. It shows that people living in Tokyo and Osaka (the second largest city) use both TVs and ACs more intensively than those in other regions. The average household in Japan owns 2.32 ACs, while the average household in Tokyo owns 2.84 ACs and the average household in Osaka owns 2.91 ACs. These data suggest that households living in large cities own more ACs and use them more heavily.
Energy Price and Carbon Pricing
Japan imports almost all of its energy from abroad, and thus energy prices have been set at a high level for both household and industrial uses. Considering that further energy price increases would lower international competitiveness and negatively impact economic growth, introducing a carbon tax in Japan was long debated; after two decades, Japan finally introduced the carbon tax in October 2012 to mitigate global warming. Carbon pricing is now considered one of the most cost-effective measures to reduce CO2 emissions, especially under the long-term target of de-carbonization. In this sub-section, we compare energy prices between Japan and other countries, focusing especially on the relative size of carbon taxes in the household sector. We focus on electricity, natural gas, and kerosene, which comprise almost 90% of Japanese household energy usage (see Sect. 2.1).
Energy price data in Table 4 were collected from Energy Prices and Taxes of the IEA (2018). The table indicates that energy prices in Japan are higher than in other countries: the prices of natural gas and electricity in Japan are 107.4 USD/MWh and 226.6 USD/MWh, respectively, whereas the average prices in OECD countries are 53.9 USD/MWh for natural gas and 166 USD/MWh for electricity. Table 4 also indicates the ratio of energy taxes to energy prices, and shows that the tax shares in Japan are lower than those in France or Germany. The tax shares for natural gas in Japan, France, and Germany are 7.4%, 24.5%, and 24.3%, respectively, and for electricity 8.9%, 36.2%, and 54.5%, respectively. By removing the tax payment, we can calculate each country's prior-tax base energy prices. The base price of natural gas for Japan is 99.5 USD/MWh, while for France and Germany it is 59.3 USD/MWh and 56.6 USD/MWh, respectively. Similarly, the base prices of electricity in Japan, France, and Germany are 206.4 USD/MWh, 119.5 USD/MWh, and 156.3 USD/MWh, respectively. The price differences between Japan and the other countries are substantial at the base price level. Pertaining to kerosene, its price in Japan is lower than in the average OECD country. The prior-tax base price in Japan is about 619.3 USD/1000 L, which is the second highest among the five countries listed in Table 4.
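The prior-tax base prices quoted above follow directly from the listed retail prices and tax shares (base price = retail price × (1 − tax share)); a quick check using the Japanese figures reproduces the reported values.

```python
# Quick check of the prior-tax "base price" figures quoted above:
# base price = retail price * (1 - tax share). The inputs are the values
# reported in the text (Table 4); rounding explains small differences.
prices = {                      # retail price (USD/MWh), tax share of price
    "Japan natural gas": (107.4, 0.074),
    "Japan electricity": (226.6, 0.089),
}

for item, (price, tax_share) in prices.items():
    base = price * (1 - tax_share)
    print(f"{item}: {base:.1f} USD/MWh")   # ~99.5 and ~206.4, matching the text
```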
Thus, the base energy prices are relatively high in Japan but carbon taxes are relatively small. Indeed, the effective carbon price of residential and commercial use in Japan was 5 EUR/ton, while that of UK, Germany, and France was 23 EUR/ton, 26 EUR/ton, and 19 EUR/ton, respectively (Ministry of the Environment 2018).
Policy Measures to Improve Appliance Energy Efficiency
Energy consumption per service is reduced via energy efficiency improvements. Households might choose an energy-efficient product even without any policy intervention, since they can save money, and manufacturers will develop energy-efficient products to increase demand for their products. However, it is often difficult to achieve the energy efficiency improvements that society needs by simply relying on households' voluntary product selection and manufacturers' voluntary investment, and thus the government has introduced policies that force improvements in the energy efficiency of durable consumer goods. Policies for improving the energy efficiency of home appliances have been widely implemented elsewhere, and the Japanese government has adopted similar strategies.
The government introduced the Top Runner Program to improve the efficiency of energy-consuming durables in 1998. It set the energy efficiency of the most efficient products on the market as the energy efficiency standard and requested manufacturers to achieve it before a specified target year. Although only 11 items were covered at the beginning of the program, seven items were added in 2002, two in 2009, and five in 2013. Presently, a total of 31 items are subject to the Top Runner Program, which has resulted in significant improvements in the energy efficiency of energy-consuming durables. The energy efficiency of several electric appliances has improved by twice the target or more. For example, the energy efficiency of REFs improved by 43% from 2005 to 2010, while the target was 21%. Similarly, the energy efficiency of TVs improved by 73.6% from 2008 to 2012, while the target was 37%.
Households cannot assess the energy efficiency of products at the time of purchase. In 1995, to effectively inform consumers of product energy efficiency, the Japanese government introduced the Energy Star Program jointly with the US. In 2000, the Japanese government introduced the Energy Saving Labeling Program based on the Japanese Industrial Standards. A green mark is placed if a product achieves the top runner standard, while an orange mark indicates that it does not. Manufacturers further provide consumers with detailed information, including an energy-saving mark with the target year, the achievement rate of the energy-saving standard, and the annual electricity consumption.
A strength of these programs is that they do not require significant consumer effort. The Top Runner Program improves the energy efficiency of the products sold, and the Energy Saving Labeling Program enables consumers to choose an energy-efficient product at the time of product replacement by reporting its energy saving benefit. To further promote the selection of energy-efficient products by consumers, the Japanese government started the Unified Energy Saving Labeling Program in 2006, under which the government requests retailers to indicate the energy efficiency of products with a number of stars, as well as the annual estimated electricity bills from using the products. Consumers need less cognitive skill to identify the energy efficiency of products, since they can do so by simply counting the number of stars. Presently, six varieties of home electric appliances, including ACs, REFs, and TVs, are covered under this program.
CO2 emissions per household reached approximately 4520 kg CO2 in 2016, of which about 50.9% was due to electricity. Pertaining to electricity usage, the shares of usage from REFs, lighting, TVs, and ACs were 14.2%, 13.4%, 8.9%, and 7.4%, respectively, in 2009 (Ministry of the Environment 2019). These data suggest that improvements in the energy efficiency of electric appliances are closely related to the reduction of CO2 emissions from households. However, households tend not to choose an energy-efficient durable even if they are informed of detailed information about product energy efficiency (Allcott 2011; Jaffe and Stavins 1994). Moreover, not all households react equally to such programs: for example, wealthy households with many family members are more likely to purchase an inefficient REF (Wang et al. 2019), and households living in rented houses are less likely to choose LED lamps (Onuma and Matsumoto 2019). Since the energy efficiency of appliances has been greatly improved through the implementation of the programs mentioned above, the next challenge is how to encourage households to purchase energy-efficient appliances.
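Combining the figures above gives a rough sense of the appliance-level stakes. Note that this mixes the 2016 emission total with 2009 usage shares, so it is only a back-of-the-envelope illustration rather than a figure from the underlying surveys.

```python
# Rough back-of-the-envelope estimate combining the figures quoted above:
# 2016 emissions per household (4520 kg CO2, 50.9% from electricity) and the
# 2009 electricity-use shares by appliance. Mixing survey years is an
# assumption made only to illustrate orders of magnitude.
total_co2_kg = 4520
electricity_co2 = total_co2_kg * 0.509          # ~2301 kg CO2 from electricity

shares_2009 = {"refrigerator": 0.142, "lighting": 0.134, "TV": 0.089, "AC": 0.074}
for appliance, share in shares_2009.items():
    print(f"{appliance}: ~{electricity_co2 * share:.0f} kg CO2 per year")
```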
Policy Measures to Improve Housing Energy Efficiency
Households can reduce energy consumption by installing energy-efficient durables. Similarly, households can reduce energy consumption by improving the energy efficiency of their houses. Although both the purchase of energy-efficient durables and the renovation of old houses are energy-saving investments, previous studies have found that households respond differently to these two types of investments. Ramos et al. (2016) and Trotta (2018) confirm that environmental attitude can explain the purchase of energy-efficient appliances, but not home renovation. This finding suggests that a different policy is needed for improving the energy efficiency of houses than for other energy-consuming durables.
In order to improve the energy efficiency of houses, the Japanese government has introduced various measures, including subsidies and a long-term tax reduction, with the most ambitious measure being the subsidy for net zero energy houses (ZEHs). These are houses whose annual net primary energy consumption is around zero (or less). Under the ZEH program, houses are constructed to save as much energy as possible while maintaining a comfortable living environment. In the fourth Energy Basic Plan introduced in 2014, the Japanese government targeted making more than half of newly constructed detached houses ZEHs by 2020, and making the average newly constructed house a ZEH by 2030 (Agency for Natural Resources and Energy 2014).
In recent years, a series of subsidy programs has been introduced to promote ZEHs. The first, the "ZEH support program", started in 2015, targeting newly constructed detached houses with a reduction rate of primary energy consumption of more than 20% as well as high thermal insulation performance. In the first program period, 1.3 million JPY was provided to households constructing a ZEH, with 1.5 million JPY for households in cold regions. In 2016, 6146 subsidies were issued; among these houses, the average reduction rate of primary energy consumption reached 120.7% including solar power and 43.9% excluding solar power. With the success of the first ZEH program, the government continued it but reduced the amount of the subsidies: 1.25 million JPY in 2016, 0.75 million JPY in 2017, and 0.7 million JPY in 2018 and 2019, although the number of issued subsidies increased to 7100 in 2018 (Sustainable Open Innovation Initiative, SII 2019).
The Japanese government introduced "ZEH + program" in 2018 and "ZEH + R program" in 2019. The ZEH + program requires ZEHs' average reduction rate of primary energy consumption to be 25%. The ZEH + R program asks sufficient energy provision during a power failure as well as the resilience strengthening option, in addition to the requirement of the ZEH+ program. The subsidy amount of the ZEH+ program is 1.15 million JPY/house, and that of ZEH + R program is 1.25 million JPY/house. The number of subsidies provided under three types of programs (ZEH, ZEH + , and ZEH + R) were 9172 in 2018 and 7345 in 2019.
Subsidies for companies began in 2018 with the "Detached-sale ZEH program". Aiming to support building companies constructing ZEHs, the program provides 0.7 million JPY (or 1.15 million JPY) per house to the building company (SII 2018). In 2018, the first subsidy program targeted at housing complexes (including apartments), the "High building ZEH-M program", started, whereby ZEH apartment projects with six floors or more can obtain a subsidy of two-thirds of the total subsidized cost.
In addition, the ZEH builder mark and the ZEH planner mark have been implemented to increase the recognition of ZEHs among households as well as building companies. However, despite such efforts, only 15.3% of newly constructed detached houses were ZEHs in 2017 (Agency for Natural Resources and Energy 2019), far below the 2020 target of 50%. Given the high cost of housing construction, it seems difficult to achieve the target solely through the ZEH subsidy programs: with the average price of a new house in Japan at 34 million JPY (Japan Housing Finance Agency 2019), the subsidy amounts to less than 4% of the construction cost.
Support for Solar Panel Installation
Solar panels, an important source of renewable energy, have been widely used in the household sector. In Japan, the first solar panel for residential use was installed in 1993. Given the expensive price of solar panels, the Japanese government introduced a subsidy program in 1994; the subsidy amounted to 50% of the installation cost. Nevertheless, solar panels remained unpopular, with only 3.14% of Japanese households having installed them by 2005 (NSFE 2014).
The promotion of solar panels in the household sector was proposed again when the Action Plan for Creating a Low-Carbon Society was formulated in 2008 (Ministry of the Environment 2008), and the subsidy program was revived in 2009. Owing to this new program, the installation cost of solar-panel systems was lowered substantially. When the program was introduced in 2009, households purchasing a solar-panel system with a unit price of less than 700,000 JPY could initially receive a subsidy of 70,000 JPY/kW. However, the amount of the subsidy kept decreasing, reaching 15,000 JPY/kW when the program ended in 2013. This subsidy targeted households that purchased a relatively low-price solar-panel system. For example, in 2012, the subsidy for a system priced lower than 475,000 JPY was 35,000 JPY/kW, while that for a system priced lower than 550,000 JPY was only 30,000 JPY/kW (Eco life 2019).
In addition to the subsidy program, the government started a 10-year Feed-in Tariff (FIT) in 2009, promising that the surplus electricity produced by residential solar panels would be purchased at a fixed price. Figure 2 indicates the differences in annual energy consumption (the sum of electricity, natural gas, propane gas, and kerosene consumption) between solar-panel households and non-solar-panel households (SCDEH 2016); the energy produced by solar panels is not included. Figure 2 shows that solar-panel households use less energy than households without solar panels, a propensity more palpable among multi-person households and households living in detached houses. The annual energy consumption of non-solar-panel households is 43.08 GJ, while that of solar-panel households is only 30.8 GJ. For households living in detached houses, the annual energy consumption of solar-panel households is 30.04 GJ, and that of non-solar-panel households is about 45.17 GJ.
Conclusion
In this chapter, we reported the characteristics of the energy consumption of Japanese households and then reviewed the policy measures implemented in Japan for residential energy conservation. Like other developed countries, the Japanese government has introduced various programs to improve the energy efficiency of energy-consuming durables. Among them, the most effective policy is probably the Top Runner Program: the energy efficiency of appliances has greatly improved over the last several decades. According to a survey by the Ministry of Economy, Trade and Industry (METI) (2007), during the period 1997-2004 the energy efficiencies of televisions (TVs), air conditioners (ACs), and refrigerators (REFs) improved by 25.7%, 67.8%, and 55.2%, respectively. Nevertheless, much of this energy saving effect has been lost due to increases in appliance stock and size (Inoue and Matsumoto 2019). This fact indicates that it is difficult to reduce residential electricity consumption merely through technological innovation.
Even if it is reliably reported that energy investment is beneficial, many households will not invest in energy efficiency. In recent years, many studies have been conducted worldwide in order to find effective programs to induce households to choose energy-efficient durables. Although many interesting findings have been reported in recent studies, the effectiveness of incentive programs is expected to vary across countries. Thus, it is necessary to find effective programs for Japanese households. However, at present, it is not well known what types of households do not invest in energy efficiency and what type of information households are likely to respond to. Further research is clearly needed.
Although various subsidy programs have been introduced over the last several decades, those programs primarily focus on the purchase of new products. Such subsidy programs can be effective for durables with a short replacement cycle, but less effective for durables whose replacement cycle is long. Moreover, given that the amount of the subsidy is small compared to the purchase price, the subsidy program for energy-efficient houses currently seems less successful (Matsumoto 2016). Given that improvements in household energy efficiency will substantially impact carbon mitigation, it is important to find more effective programs for promoting the penetration of energy-efficient houses. Although a system to display the total energy performance of houses has been introduced in Japan (Housing Performance Evaluation and Display Association 2019), its usage is low, and it will (as in other developed nations) be necessary to popularize it in the future.
A palpable weakness of the subsidy programs is regressivity: almost all subsidy programs, including those for solar power and new appliances, support the purchase of durables, but households obviously must purchase them to receive subsidies. The households using such subsidy programs lived in detached houses where solar panels could be installed, or had sufficient savings to replace electric appliances during the specified subsidy period. Therefore, in past subsidy programs the poor effectively supported the rich, enabling them to use energy services at low cost. Such regressive policies are unlikely to retain public support. A publicly acceptable policy must account not only for energy consumption but also for the purpose of energy consumption.
Japan introduced a carbon tax in October 2012 to mitigate global warming; it was simply added on top of the existing energy taxes (Chap. 1). As mentioned before, the tax rate is currently low but is expected to increase in the near future. The distinguishing feature of this new carbon tax is that it is applied uniformly on a CO2 basis regardless of the purpose of energy use, whereas the conventional energy taxes were adjusted according to the purpose of energy use. Although the new carbon tax mitigates carbon effectively, it is regressive. In particular, it weighs most heavily on low-income households living in cold regions. Thus, the government should introduce redistribution policies when it increases the carbon tax.
The rapid spread of renewable energies is essential for significant energy savings in the household sector. Households with a strong interest in environmental problems installed renewable energy systems first, and households with sufficient financial assets subsequently installed them using subsidies. However, system penetration is still low, and more households will need to use renewable energy equipment in the future. Even if various policy options for renewable energies are introduced, it will be difficult to achieve the energy conservation target. It is therefore necessary to investigate the purposes of energy use in order to judge whether a household is using energy essential for daily life or is wasting energy. Without that knowledge, it is impossible to estimate how much energy can be reduced.
Jiaxing Wang is a doctoral student at Aoyama Gakuin University, majoring in economics. She earned a Master's Degree in Economics from Aoyama Gakuin University in 2018. She comes from Shanghai, China, and earned her Bachelor's Degree in Management at Shanghai Normal University in 2015. In addition, she studied as an exchange student at Yokohama City University during 2013-2014.
Her research interest lies in environmental economics, with a particular focus on energy problems in the household sector. She studies the mechanisms of household energy consumption and consumers' valuation of energy-saving practices.
Shigeru Matsumoto joined the Aoyama Gakuin University faculty in 2008. He studied on a Heiwa Nakajima Foundation Scholarship at North Carolina State University, where he earned his Ph.D. in economics. He also holds a Master's in Environmental Science from Tsukuba University. Before coming to Aoyama Gakuin University, he spent seven years on the faculty of Kansai University.
His research interest lies in applied welfare economics, with a particular focus on consumer behavior analysis. In recent years, he has studied households' pro-environmental behaviors, such as recycling and energy-saving practices, as well as consumers' valuation of food attributes such as organic farming. http://shigeruykr.wixsite.com/happy-environment.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 6,874 | 2020-09-18T00:00:00.000 | [
"Economics"
] |
PROBLEMS OF DEVELOPMENT OF THE AGRICULTURAL INSURANCE MARKET IN UKRAINE
insurance products. The study is original in nature, as the authors consider the issue of agricultural insurance in a comprehensive manner and draw attention to its importance for the sustainable development of agriculture. The results of the study can be applied in decision-making practice for the development of agricultural insurance in Ukraine.
Statement of the problem in a general form and its connection with important scientific or practical tasks. Agricultural insurance is an important component for Ukraine, as the agricultural sector is one of the largest sectors of the national economy. Ensuring its stable and uninterrupted operation is a very important factor for the economic stability of the country as a whole. Agricultural insurance helps to reduce the risks of growing agricultural products, such as droughts, heavy rains, frosts, pests, etc., which can lead to significant losses for agricultural enterprises and the country as a whole.
Analysis of the latest studies and publications on which the authors rely and which consider this problem and approaches to its solution. The problem of agricultural risk insurance is an important topic for many scientists and economists. Well-known scientists who have conducted research in this area include V. Mirgorodska [1], S. Pokhylko [1], T. Petruk [2], Y. Krakivskyi [3], Y. Aleskerova [4], O. Hutsalenko [4], and H. Marich [5]; these scientists studied the concept of agricultural insurance, its features, characteristics, and types. Comparisons of foreign and domestic agricultural insurance were carried out by such scientists as Clobodianiuk O. V. [11], Zhuravka O. S. [12], Saiko A. A. [12], Tanklevska N. S. [13], Yarmolenko V. V. [13], Prokopchuk O. T. [14], and others.
Highlighting previously unsolved parts of the general problem to which the article is devoted. The study of agricultural risk insurance is an important area for the economy and agriculture in Ukraine and other countries. Research in this area helps to assess the effectiveness of risk insurance in agriculture and to support decision-making on it, to reduce crop losses, and to ensure the stability of agricultural income.
Formulation of the goals of the article (statement of the task). The main objectives of the article are to determine the current state of the agricultural insurance market in Ukraine (volumes, types of insurance and the level of participation of agricultural enterprises in insurance programs), given the challenges of the full-scale aggression of the Russian Federation against Ukraine, to study the mechanisms of agricultural insurance support in different countries and identify positive examples for Ukraine, and to analyze the program of state support for agricultural insurance currently being implemented by the Ukrainian government, as well as to outline the prospects for the development of the Ukrainian market.
Presentation of the main research material. Agricultural insurance in Ukraine is an important component of the sustainable development of the agricultural sector, as it helps to reduce the risks associated with growing plants and animals, and to preserve income or reduce losses in the event of negative consequences of natural phenomena, diseases or other circumstances.
Analyzing the concepts from Table 1, we can conclude that agricultural insurance is a type of insurance that protects agricultural products, machinery, and animals from risks and provides for compensation for losses.
In Ukraine, 10 insurance companies are currently engaged in classical agricultural insurance, although more than 40 companies have licenses to provide this type of insurance.
Analyzing Table 2, we can conclude that all Ukrainian insurance companies, except for two, Etalon and Guardian, reduced their agricultural insurance payments in 2022 compared to previous years. This may be due to a decrease in the number of companies insuring their property because of the unstable economic and political situation.
PZU Insurance Company has the largest volume of payments, but it decreased by 68% in 2022 compared to 2021 and by 57% compared to 2020. UNIVERSALNA also reduced its insurance payments by 31% in 2022 compared to 2021, but increased them by 111% compared to 2020.
Insurance companies Krayina, ARX, Ingo and ORANTA also reduced their insurance payments in 2022 compared to previous years. Guardian increased its insurance payments by 13% in 2022 compared to 2021, but reduced them by 92% compared to 2020. ORANTA experienced a 26% decrease in its insurance payments compared to 2021, and BROKBUSINESS does not have data for 2021, but experienced an 83% decrease compared to 2020. UPSK has data only for 2021 and 2020 and reduced its insurance payments by 34% in 2021 compared to 2020.
Based on the table data, we have constructed a diagram (Fig. 1); it shows that most insurance companies reduced their agricultural insurance payments in 2022 compared to previous years.

Table 1. Definitions of agricultural insurance.
Agricultural insurance is defined as: a special type of property insurance under which agricultural producers (enterprises and individuals) acting as insureds, by purchasing an insurance policy from an insurer, ensure protection of their activities against specific industry risks (loss of crops and plantings of perennial plantations) with a low level of predictability.
O. M. Ostapenko — an objective category that allows regulating the relations between insurers and agricultural producers regarding the settlement of issues of compensation for losses to each other.
Y. S. Krakivskyi — a mechanism for managing all agricultural risks that balances the interests of all parties or participants.
Y. V. Aleskerova — a set of economic relations between specific economic entities, where one party is insurers and the other party is insureds (agricultural producers) who, for a specified fee, transfer their potential risks that may occur in agricultural production in order to receive compensation in the event of an insured event.
Y. V. Samoilik — a type of civil law relationship to protect the property interests of individuals and legal entities involved in agricultural production in the event of certain events (insured events) specified in the insurance contract or applicable law.
О. M. Lobova — a system of economic relations between specific economic entities, where, on the one hand, there are insurers (financial and credit institutions) and, on the other, insureds (agricultural enterprises, tenants, peasant (farm) households), which for a certain fee transfer their risks of property and financial losses in agricultural activities in order to receive compensation upon the occurrence of an insured event.
Source: compiled by the authors based on [1-5].

In this paper, we have analyzed the activities of four companies: «PZU», «Universalna», «Krayina» and «UPSK».
The largest volume of insurance premiums is provided by PZU, and its services include: 1. Crop insurance for the wintering period. It provides insurance of expenses incurred by the farm for sowing and growing winter crops.
The cost of insurance is determined on the basis of the basic tariff, taking into account adjustment coefficients. The insurance premium is payable in a lump sum or by installments, but no later than December 1 of the current year.
The program allows you to protect crops against the most dangerous winter risks that most often threaten plants. Crop death is defined as the destruction of 50% (or more) of plants relative to their original density.
2. Future crop insurance for the spring-summer period. It provides insurance of the future harvest of winter and spring crops. The insurance covers the non-receipt of or shortfall in the crop harvest as a result of the occurrence of insured risks. The sum insured is determined as the value of the future harvest, obtained by multiplying the insured yield by the area of crops and the price per metric unit of the respective agricultural product.
3. Insurance of perennial plantations. The insurance program allows you to protect perennial plantations: fruit and berry crops and vineyards from death. The sum insured is determined in the amount of the appraised or book value of the plantations. The subject of insurance is orchards and vineyards.
4. Animal insurance. The program allows compensation for material losses associated with the death and forced slaughter of livestock as a result of natural disasters, accidents, and infectious diseases. The insured risks in this type of insurance are fire, smoke, soot, corrosive gas, lightning, explosion, natural disasters, etc. The insurance rate depends on the type of animal, the size of the selected deductible, and can range from 0.1 to 5.0 percent of the sum insured [7].
Universalna Insurance Company ranks second in terms of insurance payments; it specializes in insuring agricultural crops such as wheat, barley, rye, triticale, and winter rape. The insured event under the insurance contract is the complete or partial loss, destruction or damage to crops that resulted in the failure to receive or shortfall in the crop yield in the event of direct exposure to insurance risks. Under this type of insurance, the sum insured is determined by multiplying the level of insured yield (trigger) by the area of crops taken for insurance and the price of a metric unit of the crop. When signing the insurance contract, the farmer can choose the level of insurance coverage, prices for future products, and the percentage of reimbursement of expenses in the winter period. Insurance rates are calculated based on statistical data on the unprofitability of agricultural production over the past 20 years in Ukraine [8].
Insurance company «Krayina» ranks third in terms of insurance payments. Its activities are carried out in the following areas: insurance of agricultural machinery, insurance of perennial plantations, insurance of farm animals and poultry, as well as crops. All agricultural production risks accepted for insurance are ceded to Swiss Re (72%, Switzerland) and Hannover Re (28%, Germany) under a bond agreement without loss limitation (no Cap Loss) with a liability volume of up to UAH 2 billion. Insurance contracts can cover: overwintering crops, including protection against spring frosts until May; the future harvest against both complex and named risks; the future harvest according to a yield index; and others. Tariff rates vary by region [9].
Source: compiled by the author on the basis of [6].
UPSK Insurance Company provides agricultural insurance under the following programs: 1. Program of comprehensive insurance of crops for the period of wintering.
2. Comprehensive insurance of future crops harvest.
3. Insurance of future crops against hail and fire. Insurance rates for each of the programs are determined depending on the crop, the region of cultivation, the selected deductible, the list of risks, the size of the crops transferred for insurance, etc. Insureds can be agricultural producers engaged in the cultivation of winter and spring crops.
The company carries out reinsurance with a non-resident reinsurer with a high reliability rating and experience in providing reinsurance protection to the Ukrainian crop insurance market [10].
After analyzing the work of insurance companies, we can conclude that agricultural products are insured most often in Ukraine, while machinery and animals are insured less frequently. In general, the level of insurance tariffs is difficult to analyze because of the individual approach to each client, and the level of insurance indemnities is not indicated on the official websites of the companies.
The analysis of insurance companies' activities helped to identify the following main problems of agricultural insurance development in Ukraine: 1) farmers' distrust of insurance companies due to low quality of service and lack of transparency of insurance terms and conditions; 2) tariff rates for agricultural insurance services are unbalanced and incorrectly reflect the cost of possible losses in the future, as there is no centralized statistical information on insured crop areas and the amount of insurance premiums collected; 3) crop insurance is not used in Ukraine throughout the entire agricultural production cycle; 4) the lack of a mechanism to provide guarantees from the state regarding the distribution of insurance premiums among agricultural enterprises; 5) many agricultural enterprises do not have sufficient information about the benefits and opportunities of agricultural insurance; 6) the system of collecting statistical data and technologies for assessing damages in Ukrainian agriculture is imperfect, which complicates the process of determining the cost of insurance premiums; 7) high tariffs for agricultural insurance make many farmers refuse to take it out.
For comparison, let's look at the peculiarities of agricultural insurance in different countries of the world (Table 3).
Each country has its own agricultural insurance system with distinctive features. The United States and Poland provide government support to attract more farmers to insurance, thus reducing the cost of insurance services.
In the United States, the government provides subsidies to pay part of the insurance premiums, and in Poland, there is an Insurance Guarantee Fund to pay 100% of insurance indemnity under compulsory insurance contracts. In Canada, the state develops insurance programs and insurance products with state support, which reduces the cost of insurance products, but can be an additional burden on the country's budget. In Germany, the private sector prevails in the agricultural insurance system, but the state is involved in the effective development of the agricultural insurance system and providing farmers with access to it.
Thus, it can be concluded that each country has its own peculiarities of agricultural insurance, but in general, its goal is the same -to ensure financial stability of farmers and support sustainable agricultural development. Each country may use different mechanisms to support agricultural insurance, from subsidies to government programs and support for the development of the insurance market.
Based on the above features of agricultural insurance in different countries, the following elements of foreign agricultural insurance practice can be introduced in Ukraine: 1. State involvement through subsidies can reduce the cost of insurance services and make them more affordable for farmers. A program similar to the American model, where the state pays a portion of the insurance premiums actually paid by insureds, could be considered.
2. State insurance system: Development of state insurance programs and state-supported insurance products, as in Canada, could provide lower cost insurance products for farmers.
3. Introducing compulsory insurance, as in Poland, can provide protection against certain risks for farmers. 4. Establishment of an Insurance Guarantee Fund, similar to the Polish example, can ensure 100% payment of insurance indemnity under compulsory liability insurance contracts for persons engaged in agriculture.
Ukraine has mandatory state-supported crop insurance and voluntary insurance against risks associated with livestock farming. The state promotes the development of the insurance market, in particular by reducing tax rates for insurance companies that raise funds for the development of agricultural insurance.
Ukraine is currently implementing a program of state support for agricultural insurance. This means that each insurance company will be able to receive a certain compensation for paying insurance premiums in the agricultural sector.
According to the Resolution dated 09.12.2021 «On Approval of the Procedure for Providing State Support for Agricultural Insurance», state support may be provided to agricultural producers -legal entities and individual entrepreneurs. The compensation is provided in the amount of up to 60 percent of the insurance premium, but may not exceed 10 thousand minimum wages established as of January 1 of the relevant year.
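To make the compensation rule concrete, the short sketch below computes the state support for a hypothetical contract. The crop figures, tariff rate, and minimum wage used here are invented for the illustration and are not taken from the resolution.

# Hypothetical illustration of the state-support compensation rule described above:
# compensation = 60% of the insurance premium, capped at 10,000 minimum wages.

def sum_insured(insured_yield_t_per_ha: float, area_ha: float, price_per_t: float) -> float:
    """Sum insured = insured yield x area x price per metric unit (as for crop insurance)."""
    return insured_yield_t_per_ha * area_ha * price_per_t

def state_compensation(premium: float, minimum_wage: float,
                       share: float = 0.60, cap_in_min_wages: int = 10_000) -> float:
    """Compensation is `share` of the premium, but no more than `cap_in_min_wages` minimum wages."""
    return min(share * premium, cap_in_min_wages * minimum_wage)

s = sum_insured(insured_yield_t_per_ha=4.0, area_ha=500.0, price_per_t=6000.0)  # UAH, invented figures
premium = 0.035 * s                      # invented tariff rate of 3.5% of the sum insured
print(f"sum insured: {s:,.0f} UAH, premium: {premium:,.0f} UAH")
print(f"state compensation: {state_compensation(premium, minimum_wage=6700.0):,.0f} UAH")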
To receive compensation under the concluded insurance contracts, the insured shall submit to the authorized insurer: 1. an application in the form established by the Ministry of Agrarian Policy; 2. consent to the provision of information containing personal data by the authorized insurer to the Ministry of Agrarian Policy in the form determined by the authorized insurer; 3. a bank-certified copy of the payment order for the payment of insurance premiums; 4. a certificate of opening a current account issued by a bank.
Applications are submitted:
- before April 1 of the current year, for insurance contracts concluded from November 10 of the previous year to March 31 of the current year;
- until August 1 of the current year, for insurance contracts concluded from April 1 to July 31 of the current year;
- until November 10 of the current year, for insurance contracts concluded from August 1 to November 9 of the current year [15].
On February 7, 2023, the Ministry of Agrarian Policy and Food published a draft order «On a standardized insurance product for insuring winter grain crops with state support against agricultural insurance risks for the entire period of cultivation» [16]. The adoption of this order will determine the procedure and conditions for insuring winter grain crops with state support against agricultural insurance risks for the entire period of cultivation.
The draft document provides for the approval of, inter alia, the items listed after Table 3 below.

Table 3. Features of agricultural insurance in different countries of the world
USA
The American model involves the state in the insurance of farmers by providing subsidies to cover a portion of the insurance premiums actually paid by the insured. This reduces the cost of insurance services and makes them more attractive to consumers. The standard amount of state reimbursement of a portion of the insurance premiums paid by insureds is 50% of the cost of the insurance service.
Canada
Canada has a state agricultural insurance system. The state develops insurance programs and insurance products with state support, for the implementation of which it uses funds from the central and local budgets. The advantage of such a system is that the cost of insurance products is lower with the help of the state, but at the same time it is an additional burden for the country's budget.
Poland
Polish farmers are required to insure at least 50% of their crops and livestock. A Polish farmer must insure livestock against at least one of the following insurance risks: hurricane, flood, rain, hail, lightning, landslides, avalanches, and forced slaughter. The following livestock are subject to compulsory insurance: cattle, horses, sheep, goats, poultry, and pigs. The maximum sum insured is determined by an order of the Ministry of Agriculture and Rural Development. In Poland, the state ensures the functioning of the Insurance Guarantee Fund to pay 100% of insurance indemnity under compulsory liability insurance contracts for persons engaged in agriculture.
Germany
The private sector predominates in the agricultural insurance system, where no subsidies are paid from the state budget at all. However, the state is directly involved in the effective development of the agricultural insurance system: it actively manages the system by developing basic insurance products, introducing a number of restrictions and establishing the technical framework for the system. The country also has programs of situational payments from the state budget in the event of natural disasters. These programs are structured, and payments are made only if authorized by the European Union.
Spain
More than 70% of farmers, about 90% of crops and 70% of livestock are insured. On average, subsidization is at 53% of total premiums. Of these, 40-45% are subsidized by the central government and 10-15% by regional governments. Catastrophic losses are compensated primarily to farms that have insured their crops or animals. Crop insurance is an integral part of the national agricultural policy. All risks are reinsured through the state reinsurance company.
- Terms and conditions of the contract of insurance of winter grain crops with state support against agricultural insurance risks for the entire period of cultivation;
- the application form for insurance of winter grain crops with state support against agricultural insurance risks for the entire period of cultivation;
- forms of the List of crop plots of the insured crop;
- the form of the Inspection Report of the insured crops and instructions for its completion [16].
These changes in agricultural insurance should lead to an increase in farmers' confidence and an increase in the volume of insurance of agricultural products, livestock and machinery, as well as an increase in the number of companies engaged in agricultural insurance.
To solve the existing problems in Ukraine, the following measures should be taken:
1. Improve the quality of service of insurance companies and create convenient websites where the client can calculate the cost of insurance at any time.
2. Assess all risks associated with agricultural insurance and reassess insurance rates accordingly.
3. Develop special programs that would cover all stages of agricultural production.
4. The state should consider various methods of stimulating the development of agricultural insurance, which may include subsidizing a part of insurance premiums, creating special funds for compensation of losses, etc.
5. The state should create an environment in which there will be cooperation between insurance companies, agricultural organizations, banks and government agencies in order to create a favorable environment for the development of agricultural insurance.
It is also worth highlighting the prospects for the development of agricultural insurance in Ukraine. Below are some of them: 1. The draft Recovery Plan for Ukraine identifies lending and insurance as the main elements of state support for the agricultural sector for the period 2023-2025, which may result in new agricultural insurance products that will be even more accessible to farmers.
2. Application of the latest technologies in diagnosing risks related to agricultural insurance will help to develop effective insurance strategies.
3. The development of new insurance products could increase interest in insurance and help reduce risks.
Given the prospects for insurance development in the Ukrainian agricultural sector, it can be concluded that agricultural insurance has the potential for further development. The development of the insurance market can lead to an increase in the number of insurance products that meet the needs of farmers and have an affordable price. The use of new technologies and the development of new insurance products can help reduce risks and increase farmers' interest in insurance.
Thus, taking into account these factors, agricultural insurance can become an important tool for protecting farmers from the negative effects of various risks and help develop Ukraine's agricultural sector.
Conclusions from this study and prospects for further research in this direction. According to the above, the concept of agricultural insurance is a rather important component of the sustainable development of the agricultural sector. In Ukraine, agricultural insurance is currently provided by a small number of companies, including PZU, Universalna, Krayina, and UPSK. Their services are quite limited, which may be due to the low demand from farmers.
In foreign countries, agricultural insurance is a widespread practice that allows farmers to reduce the risks associated with weather-related losses, protect their investments, and ensure stability in production. In the countries studied in this paper, the goal of agricultural insurance is quite similar, namely to ensure financial stability, although each country has different mechanisms to support agricultural insurance, such as subsidies, development of insurance programs and insurance products with state support, introduction of compulsory agricultural insurance, and reinsurance of losses through the state reinsurance service.
State support for agrarians in Ukraine is currently being actively transformed. The Resolution «On Approval of the Procedure for Providing State Support for Agricultural Insurance» provides for reimbursement of part of the insurance premiums to insureds, which will ensure their financial stability in the context of the unstable Ukrainian economy. In addition to this resolution, the Ministry of Agrarian Policy also plans to develop a mechanism for winter crops insurance.
However, agri-insurance is currently quite unpopular and faces problems, including farmers' distrust of insurance companies, unbalanced tariffs, the lack of multi-risk insurance, and the absence of a guarantee mechanism from the state. Several steps need to be taken to address these problems. First, it is necessary to increase farmers' confidence in insurance companies by improving the quality of service and the transparency of insurance terms. Next, it is important to revise agricultural insurance tariffs to reflect actual risks and to ensure the availability of centralized statistical information. In addition, it is necessary to actively promote and support multi-risk insurance throughout the entire crop production cycle. The state can play an important role by providing financial support and guarantees and by creating a favorable environment for the development of agricultural insurance. It is also important to develop a loss assessment infrastructure and provide advisory support to farmers. The path to successful development of agri-insurance in Ukraine lies in a comprehensive approach, cooperation between different stakeholders, and government support.
"Economics"
] |
Interaction of Tungsten tips with Laguerre-Gaussian beams
The interaction of femtosecond laser pulses with metallic tips has been studied extensively, and such tips have proved to be a very good source of ultrashort electron pulses. We present our study of the interaction of Laguerre-Gaussian (LG) laser modes with tungsten tips. We report a change in the order of the interaction for LG beams; the difference in the order of interaction is attributed to ponderomotive shifts in the energy levels caused by the enhanced near-field intensity, an interpretation supported by numerical simulations.
Introduction
Laguerre-Gaussian (LG) modes are a solution to the paraxial Helmholtz equation with cylindrical symmetry. The most interesting properties of LG beams are a result of their topological phase structure. The phase of an LG beam increases continually counter-clockwise, on a cross-section of the beam, along a closed loop from 0 to 2πl, where l is an integer called the topological charge. Since a phase of 0 is equivalent to 2πl, a continuous distribution of phase is obtained, resembling the topological structure of a Möbius strip [1]. A phase singularity, also called a vortex, is formed at the centre of the beam, where the phase remains undefined. Due to this azimuthal phase variation, the wavefront appears twisted in shape. This twisting of the beam is identified with the orbital angular momentum (OAM) of the beam, and the pitch of the twist determines the magnitude of the OAM, which is related to the topological charge. It was shown that the projection of the OAM along the direction of propagation is equal to lℏ per photon averaged over the beam [2]. Thus, in LG modes, individual photons carry orbital angular momentum in addition to spin angular momentum.
In practice, such beams are obtained from Gaussian modes using computer generated holograms [3,4] or spiral phase plates [5,6]. Other methods involve astigmatic mode conversions where high order Hermite-Gaussian (HG) beams are passed through a pair of cylindrical lenses and the Gouy phase shift thus introduced results in the conversion of the HG mode to an LG mode [7]. Lower order LG modes were generated in the cavity of solid state lasers using nanoscale mode selection elements [8]. The detection and characterization of the mode of the resultant beam can be done using interferometric techniques [9,10] or by studying diffraction of the beam through different apertures [11,12].
LG beams are central to various applications in diverse fields. In the field of communication, LG beams were used to encode multiple bits in a single photon [13], with an inherent security feature that does not require any mathematical or quantum mechanical encryption. LG beams are used in imaging applications to achieve super-resolution up to the order of λ/25 in Stimulated Emission Depletion (STED) microscopy [14,15]. LG modes are well suited for gravitational wave detectors, as they are reported to reduce the influence of thermal noise in such systems [16,17]. Other applications include micromachining [18] and optical tweezing [19].
A considerable amount of research has been done on the properties of LG modes and the transfer of OAM to matter at various scales. For instance, it was shown that, in the case of photoexcitation and photoionization of atoms by OAM beams, the selection rules can differ from those of plane waves due to the transfer of OAM [20]. Properties of above-threshold ionization spectra produced by OAM beams were calculated, predicting the existence of photoelectrons emitted in the direction of laser propagation [21]. In high harmonic generation, it was shown that OAM is conserved in the process of generating higher harmonics of the fundamental beam [22]. In metal nanoparticles, the transfer of OAM to surface plasmons has been theoretically demonstrated, and the excited plasmon mode was shown to be determined by the total angular momentum (TAM) transferred in the excitation process [23]. For larger particles, the mechanical equivalence of spin and orbital angular momentum was established by the observation of cancellation (or addition) of spin angular momentum by orbital angular momentum to give the total angular momentum [24].
In this report, we present our study of the interaction of femtosecond LG beams with metallic tips. Electron emission and its dynamics from metallic tips illuminated by femtosecond laser pulses are well known [25,26]. When light is focused on metallic tips of sizes smaller than the wavelength of the incident light, the field near the apex of the tip rises to several times the incident field [27]. This enhanced near field leads to the emission of photoelectrons from the tip surface. In the case of vortex beams, this near-field enhancement can be different and may lead to different emission properties. A comparison of the electron emission rates is presented, and we show a change in the electron emission properties for LG modes.
Experimental Methods
A schematic representation of the experimental setup is shown in figure (1). A Ti:Sapphire laser delivering 25 fs, 800 nm pulses at a repetition rate of 1 kHz was used as the source in the experiment. A combination of a half-wave plate and a polarizer was used to control the intensity of the laser beam. A spiral phase plate was used to obtain the desired LG mode of topological charge l = 1. To detect the emitted electrons, a channeltron detector was placed parallel to the tip axis. The output signal from the detector was then fed to a digital oscilloscope triggered by a high-speed photodiode. The position of the tip with respect to the laser beam was constantly monitored using a CCD camera.
Results
The inset in figure (1) shows the Interferometric Autocorrelation Trace (IAT) with the tip used as the nonlinear element. The emission process was fast enough for the measurement of 35 fs pulses, ruling out thermal emission with characteristic timescales of 100 fs to 1 ps [28,29]. Figure (2) shows the electron emission rates as a function of the average power for Gaussian beams. The data depicted in a log-log plot form a straight line, and the values of the slopes are 2 and 1.6 for tip voltages of 0 V and 40 V, respectively. For the case of an LG beam carrying OAM (l = 1), figure (3), the slopes of the log-log plot are nearly equal to 1 for both 0 V and 40 V.
Figure 2: Electron emission rates for a Gaussian beam focused on the tip. In a log-log plot, the slope of the curve represents the exponentiating index in Γ = α_n I^n, where Γ is the emission rate, I the intensity of the radiation, n the number of photons absorbed, and α_n the cross-section of the interaction. The values of n are 2 (solid) and 1.6 (dashed).
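The order of the interaction quoted above is obtained as the slope of the count rate versus average power on a log-log scale. The sketch below illustrates this fit on synthetic data; the power range, noise level, and prefactor are invented for the example and do not reproduce the measured data.

# Illustrative extraction of the interaction order n from Gamma = alpha_n * I**n:
# on a log-log scale the data fall on a line whose slope is n.
import numpy as np

rng = np.random.default_rng(0)
power = np.linspace(5, 50, 12)                 # average power (mW), synthetic
true_n = 2.0
counts = 3e-3 * power**true_n * rng.lognormal(0.0, 0.05, power.size)  # synthetic count rate

slope, intercept = np.polyfit(np.log(power), np.log(counts), deg=1)
print(f"fitted order of interaction n = {slope:.2f}")   # close to 2 for a two-photon process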
Discussions
It is seen that for both Gaussian and LG beams, the yield at 40 V is higher than at 0 V. This is because applying a potential to the tip decreases the height of the potential barrier, giving rise to a higher ionization probability. The slope of the plot represents the number of photons absorbed in the process of ionization. As seen in numerical simulations, the intensity distribution indicates a higher field value at the tip apex for Gaussian beams. The ponderomotive energy due to this field distribution at the apex of the tip shifts the continuum [30] higher for Gaussian beams than for LG beams. As a result of this shift in the continuum, the order of the interaction for the Gaussian beam is higher than that of the LG beam.
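A rough way to see how a ponderomotive shift of the continuum changes the number of photons needed for emission is sketched below, using the standard estimate Up[eV] ≈ 9.33e-14 · I[W/cm²] · λ[µm]². The work function, wavelength, and intensity values are illustrative assumptions and are not the parameters of this experiment.

# Rough estimate: the ponderomotive energy Up raises the effective ionization
# threshold, so the minimum photon number is ceil((W + Up) / photon energy).
import math

h_eV_s = 4.135667e-15          # Planck constant in eV*s
c = 2.998e8                    # speed of light, m/s

def photon_energy_eV(wavelength_nm: float) -> float:
    return h_eV_s * c / (wavelength_nm * 1e-9)

def ponderomotive_eV(intensity_W_cm2: float, wavelength_um: float) -> float:
    # Standard formula Up[eV] ~ 9.33e-14 * I[W/cm^2] * lambda[um]^2
    return 9.33e-14 * intensity_W_cm2 * wavelength_um**2

def min_photons(work_function_eV: float, intensity_W_cm2: float, wavelength_nm: float) -> int:
    up = ponderomotive_eV(intensity_W_cm2, wavelength_nm / 1000.0)
    return math.ceil((work_function_eV + up) / photon_energy_eV(wavelength_nm))

# Illustrative numbers only (work function ~4.5 eV, 800 nm light):
for intensity in (1e11, 5e12):
    print(intensity, "->", min_photons(4.5, intensity, 800.0), "photons")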
Conclusion
In this work we have explored the interaction of metallic tips with LG laser modes. The order of interaction for Gaussian beams was seen to be higher than LG beams. It is a result of a larger field enhancement factor for Gaussian beams near the apex of the tip that lifts the continuum higher so that more photons are required for ionization. | 1,800.4 | 2021-01-01T00:00:00.000 | [
"Physics"
] |
Prediction of tissue-specific cis-regulatory modules using Bayesian networks and regression trees
Background: In vertebrates, a large part of gene transcriptional regulation is operated by cis-regulatory modules. These modules are believed to be regulating much of the tissue-specificity of gene expression. Results: We develop a Bayesian network approach for identifying cis-regulatory modules likely to regulate tissue-specific expression. The network integrates predicted transcription factor binding site information, transcription factor expression data, and target gene expression data. At its core is a regression tree modeling the effect of combinations of transcription factors bound to a module. A new unsupervised EM-like algorithm is developed to learn the parameters of the network, including the regression tree structure. Conclusion: Our approach is shown to accurately identify known human liver and erythroid-specific modules. When applied to the prediction of tissue-specific modules in 10 different tissues, the network predicts a number of important transcription factor combinations whose concerted binding is associated to specific expression.
Background
A cis-regulatory module (CRM) is a DNA region of a few hundred base pairs consisting of a cluster of transcription factor (TF) binding sites [1]. By binding CRMs, transcription factors either enhance or repress the transcription of one or more nearby genes. Coordinated binding of several transcription factors to the same CRM is often required for transcriptional activation, thus allowing a very specific regulatory control.
High-throughput experimental identification of CRMs remains inaccessible, especially for distal enhancers. Methods like genomic localization assays (also known as ChIP-chip) using whole-genome tiling arrays may soon improve the situation, but the cost of such extremely large arrays will limit their utilization. Because of this, several computational approaches have been developed for predicting cis-regulatory modules. Some attempt to identify regulatory modules with a particular function (e.g. muscle [2] or liver [3] specific CRMs, and many others [4][5][6]) by building or learning a model of the binding site content of such modules, based on a set of known modules. These methods generally obtain a reasonable specificity, but their applicability is limited to the few tissues, cell types, or conditions for which sufficiently many experimentally verified modules can be used for training. Others seek more generic signatures of cis-regulatory regions, like inter-species sequence conservation [7], sequence composition [8], or homotypic and heterotypic binding site clustering [9,10]. These methods are more widely applicable, but their predictions may be of lesser accuracy, because they do not rely on any prior knowledge. Furthermore, the predictions made by these algorithms are not accompanied by any annotation regarding the putative function of the modules. The PReMod database [11] contains more than 100,000 computationally predicted human CRMs, mostly consisting of putative distal enhancers.
By adjoining other types of information to the predicted module information, additional insights can be gained into the function of specific modules. For example, in yeast, Beer and Tavazoie have used gene expression data to train an algorithm to predict expression based on sequence information. In human, Blanchette et al. [12] and Pennacchio et al. [13] have used tissue-specific gene expression data from the GNF Atlas 2 [14] to identify certain transcription factors involved in tissue-specific regulation, and Pennacchio et al. [13] have further developed models to predict the tissue-specificity of regulatory modules based on their binding site content. In this paper, we propose a new approach to the detection of tissue-specific cis-regulatory modules. Our algorithm uses a Bayesian network to combine binding site predictions and tissue-specific expression data for both transcription factors and target genes. It identifies the transcription factors, and combinations thereof, whose presence bound to a module appears to result in tissue-specific expression. Our approach takes advantage of the facts that tissue-specific CRMs are likely (1) to be located next to genes expressed in that same tissue, (2) to contain many binding sites for TFs that are also expressed in that tissue, and (3) to contain binding sites whose presence in other modules also appears to be associated with tissue-specific expression. Our approach falls under the category of unsupervised learning, as it does not rely on any labeled training set or any type of prior knowledge regarding the TFs that may be important for a given tissue. Importantly, the Bayesian network contains at its core a regression tree to represent the dependence between the regulatory activity of a CRM and the set of TFs predicted to bind it. A new unsupervised Expectation-Maximization-like algorithm is developed to infer the parameters of the network, including the structure of the regression tree. Our approach is related to that of Segal et al. [15,16] but differs in that it takes advantage of available TF position weight matrices and TF expression data to allow tissue-specificity predictions. Moreover, based on the candidate modules predicted by PReMod, our approach is able to detect distal enhancers that are involved in tissue-specific expression.
We show that our method is able to accurately discriminate between known liver and erythroid-specific modules, even in the presence of a large fraction of modules with neither function, by discovering important combinations of transcription factors associated to these tissues. When applied to a larger set of putative modules and tissues, several known tissue-specific TFs were recovered, and many interesting new TF combinations were predicted to be linked to tissue-specific expression.
Methods
The goal of the method developed in this paper is to predict whether a given putative cis-regulatory module is responsible (at least in part) for the expression of a given gene in a particular tissue. Since the problem of predicting regulatory modules has already been studied extensively, we assume that a set of candidate CRMs has been identified in the genome under consideration and we focus on determining their tissue-specificity. We emphasize that many of these predicted CRMs are likely to be false-positives (i.e. they have no regulatory function whatsoever), and most are probably not specific to any tissue; our goal is to identify those that are. Given a putative CRM M m , a gene G, and a tissue (or cell type) T, we want to determine whether module M m up-regulates gene G in tissue T. (We focus only on the identification of enhancers, rather than repressors, because it is difficult to distinguish between repressed genes and genes that are not expressed due to the lack of activators.) To this end, we define a Bayesian network that is used to combine various types of evidence, including the putative transcription factor binding sites contained in M m , the expression levels of the set of transcription factors predicted to bind M m , and the expression level of gene G.
Importantly, and perhaps counter-intuitively, we train a single Bayesian network that will be applicable to predicting tissue-specific regulatory modules in all the tissues considered. This stems from the hypothesis that the enhancer activity of a module should depend only on its binding site content and on the expression levels of the transcription factors binding it, and not directly on the tissue considered. By allowing sharing regulatory mechanisms across tissues, we hope to improve our sensitivity to subtle regulatory mechanisms. One obvious drawback of this method is that unobserved entities like the presence or absence of tissue-specific transcriptional co-activators may affect the regulatory effect of a given module in different tissues even if the set of TFs bound to it does not change.
Typically, a Bayesian network consists of a set of observed variables, a set of unobserved variables, and an acyclic directed graph describing the direct dependencies between these. In this section, we first introduce the set of variables present in our network, which is depicted in Figure 1. We then describe the dependencies between these variables and the algorithms used to learn the parameters of the network.
Bayesian network variables
Let Φ = {Φ 1, ..., Φ |Φ|} be a set of transcription factors, let T be a set of tissue (or cell) types, let G be the set of all known human protein-coding genes, and let M be a set of predicted cis-regulatory modules. Since the notation describing the network requires many types of subscripts, we adopt the following convention: right-subscripts refer to transcription factor indices; right-superscripts refer to module indices; left-superscripts refer to tissue indices; left-subscripts refer to gene indices.

Figure 1. The Bayesian network used for predicting tissue-specific regulatory modules. See section 'Bayesian network variables' for a description of the variables, and section 'Bayesian network architecture' for a description of their dependencies.

We start by defining the observed variables for our network, shown in unshaded ovals in Figure 1. More detailed definitions pertaining to the specific data set analyzed in this paper will be found in Section 'Data sets'. The index variables range over the following domains: f over transcription factors in Φ, m over modules in M, t over tissues in T, and g over genes in G.
• A f m is the real-number predicted affinity of transcription factor Φ f for module M m. It should reflect our confidence that, provided factor Φ f is expressed, it will bind module M m. It is a function of the number and the quality of Φ f's predicted binding sites in M m.
• t F f is a boolean variable describing whether transcription factor Φ f is expressed in tissue t ∈ T.
• t g E is a boolean variable describing whether gene g ∈ G is expressed in tissue t ∈ T.
To model the relationships between the observed variables, it is necessary to introduce a set of hidden variables.
• t F* f is the actual state (active or inactive) of transcription factor Φ f in tissue t ∈ T. This state may not equal the observed expression level t F f because of post-transcriptional regulation (e.g. activation due to external stimuli for nuclear receptors) or errors in the measurements of mRNA abundance.
• t g E* is the actual transcriptional status (transcribed or not transcribed) of gene g ∈ G in tissue t ∈ T, which could be different from the observed mRNA abundance because of mRNA degradation or errors in the measurements of mRNA abundance.
• t B f m is a boolean variable indicating whether, in tissue t ∈ T, module M m is bound by sufficiently many copies of factor Φ f for this factor to achieve its function.
• The fact that a module is bound by a transcription factor does not necessarily translate into this module being regulatorily active. Indeed, the presence of other transcription factors may be required for the module to become active. We represent the regulatory activity of module M m in tissue t ∈ T by a boolean variable t R m, which takes the value 1 when the module M m actively (and positively) regulates its gene. This is the variable whose value is of the most interest for predicting tissue-specific regulatory modules.
We acknowledge that using binary variables to represent expression levels and regulatory activity is a very crude approximation. Although all these variables should in theory be continuous, the quantitative relations between transcription factor expression levels, their binding affinity to a module, and the contribution of that module to the expression of the target gene remain poorly understood, so a more qualitative approach is preferable. Furthermore, due to the computational complexity of network inference, such a simplification was necessary. In fact, by reducing the size of the parameter search space, this simplification might actually improve generalization from small data sets.
Bayesian network architecture
In a Bayesian network, dependencies between variables are modeled as directed edges connecting the cause to the effect. The conditional probability of a node given the value of its parent(s) is described by a set of parameters that are either fixed or learned from the data. When the variables at hand have a finite domain, these conditional probabilities can be represented by a conditional probability table (CPT).
Conditional distributions of E and F
The observed expression levels E and F depend on the true expression levels E* and F*, respectively. Since all variables are boolean, the conditional probability tables are the following:
Pr[t g E = 1 | t g E* = 0] = α E,  Pr[t g E = 0 | t g E* = 1] = β E,
Pr[t F f = 1 | t F* f = 0] = α F,  Pr[t F f = 0 | t F* f = 1] = β F.
Here, α E and β E are the probabilities of false-positive and false-negative in the discretized gene expression data, respectively. We assume that these parameters are shared among all genes, i.e. expression measurement errors are equally likely for all genes. Similarly, α F and β F are the probabilities that the discretized expression measurement for a given factor does not reflect its actual regulatory potency. Again, these parameters are shared among all transcription factors, although this might be inaccurate for factors like nuclear receptors, which require external signals for activation.
Conditional distribution of B
The probability of t B f m, the random variable that describes whether module M m is bound by factor Φ f in tissue t ∈ T, depends on whether the factor is expressed in that tissue, and on the affinity of the factor for that module. We assume that the parameters describing this conditional probability are the same for all m and t, so we drop some subscripts and superscripts and write Pr[B f | F* f, A f]. We model this conditional probability indirectly, by instead modeling Pr[A f | B f = 1], the distribution of binding site affinities for a module that is bound, using a normal distribution with parameters μ f and σ f that will be estimated during training. Since the mathematical derivation is tedious (but relatively simple), it is left in Appendix 1.
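The derivation that converts the affinity model into Pr[B f = 1 | A f, F* f] is given in Appendix 1 and is not reproduced here. The sketch below shows one way such a posterior could be computed by Bayes' rule; the background affinity distribution for unbound modules, the prior binding probability, and all numeric values are assumptions made for this illustration, not quantities taken from the paper.

# A hedged sketch of computing Pr[B_f = 1 | A_f, true TF state] by Bayes' rule.
# Assumptions (not from the paper): affinities of unbound modules follow a
# background normal distribution, there is a prior binding probability when the
# factor is active, and an inactive factor cannot bind.
from math import exp, pi, sqrt

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def posterior_bound(affinity: float, factor_active: bool,
                    mu_bound: float = 3.0, sigma_bound: float = 1.0,
                    mu_bg: float = 1.0, sigma_bg: float = 1.0,
                    prior_bound: float = 0.1) -> float:
    if not factor_active:
        return 0.0                      # an inactive factor cannot bind the module
    like_bound = normal_pdf(affinity, mu_bound, sigma_bound) * prior_bound
    like_unbound = normal_pdf(affinity, mu_bg, sigma_bg) * (1.0 - prior_bound)
    return like_bound / (like_bound + like_unbound)

print(posterior_bound(affinity=2.5, factor_active=True))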
Conditional distribution of R using regression trees
The most challenging set of conditional probabilities to represent is that of t R m, which depends on the values of t B 1 m, ..., t B |Φ| m. Again, we assume the parameters that describe this dependency are the same for all tissues t ∈ T and all modules M m, so we drop these indices. This assumption is equivalent to saying that the regulatory effect of the binding of a certain set of transcription factors does not depend on the module bound, the gene being regulated, or the tissue type.
How should we represent the probability that a module is regulatorily active, given the set of transcription factors bound to it, i.e. Pr[R | B 1, ..., B |Φ|]? Given that all variables are boolean, this conditional probability could be represented by a 2^|Φ| × 2 CPT containing 2^|Φ| parameters. In our application, where Φ contains several hundred transcription factors, this is obviously not practical, because (1) the CPT would be too large to store, and (2) we would need a huge amount of training data to learn the parameters. We thus use a more compact representation for this CPT, based on regression trees [17]. A regression tree is a rooted tree whose internal nodes are labeled with tests on the value of some variable B f. See Figure 2 for a small example. For boolean variables (our case here), each node N tests whether some variable takes the value true or false. Each leaf l of the tree is associated with a probability distribution Pr[R | l]. Let V(l) be the set of variable assignments obtained by following the path from the root to l.
Then, the regression tree defines a complete conditional probability distribution: Pr[R | B 1, ..., B |Φ|] = Pr[R | l], where l is the unique leaf whose path assignments V(l) are consistent with the values of B 1, ..., B |Φ|. When many of the B i's are irrelevant to R, this representation is much more compact than the standard CPT and can be estimated from less data. We will jointly refer to the tree topology, the node labelings, and the probability distributions at the leaves as the metaparameter Ψ. Inferring Ψ will be the most significant difficulty of this approach.
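To make the regression-tree representation concrete, the following minimal sketch shows how such a tree could encode Pr[R | B 1, ..., B |Φ|]: each internal node tests one bound-TF variable and each leaf stores a Bernoulli parameter. The tree structure, the TF names (HNF1, HNF4), and the probabilities are invented for illustration and are not the trees learned in this study.

# Minimal regression-tree CPD: internal nodes test one boolean B variable,
# leaves store Pr[R = 1]. Illustrative structure and parameters only.
from dataclasses import dataclass
from typing import Optional, Dict

@dataclass
class Node:
    factor: Optional[str] = None      # which B variable this node tests (None for a leaf)
    p_active: Optional[float] = None  # Pr[R = 1] stored at a leaf
    if_true: Optional["Node"] = None
    if_false: Optional["Node"] = None

def prob_active(node: Node, bound: Dict[str, bool]) -> float:
    """Follow the path determined by the bound-TF assignments down to a leaf."""
    while node.factor is not None:
        node = node.if_true if bound[node.factor] else node.if_false
    return node.p_active

# Example: R is likely active only if HNF1 is bound together with HNF4 (invented).
tree = Node(factor="HNF1",
            if_true=Node(factor="HNF4",
                         if_true=Node(p_active=0.9),
                         if_false=Node(p_active=0.2)),
            if_false=Node(p_active=0.05))

print(prob_active(tree, {"HNF1": True, "HNF4": True}))   # 0.9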
Conditional distribution of t g E*
The last set of dependencies is that of a gene's transcriptional activity on the regulatory activity of the neighboring regulatory modules. This raises the difficult question of determining which gene is being regulated by each module. This is relatively straight-forward when the module is located in the promoter region of a gene, but much less so when it is located 100 kb away from any gene. Here, for lack of more accurate information, we assume that a module M m only has the potential of regulating the gene g ∈ G whose transcription start site is the closest.
Figure 2. Example of a regression tree representing a small 2-variable conditional probability table.
Learning the network's parameters
Our Bayesian network contains a number of parameters whose values are not known a priori. We collectively refer to these parameters as Θ.
The network will be trained using the set of all pairs (module, tissue). Let A, E, and F be the sets of all TF affinity data, all gene expression data, and all TF expression data, respectively, over all tissues considered. A typical approach to estimating the network's parameters is to seek the value Θ* that maximizes the joint likelihood of the observed variables, i.e. Θ* = argmax_Θ Pr[A, E, F | Θ].
An Expectation-Maximization algorithm can be used to learn the parameters Θ of the Bayesian network [18], whereby a local maximum of the likelihood function is reached by alternately estimating the expected value of the hidden variables given the observed variables and the current estimate Θ 0, and then re-estimating the maximum likelihood values for the parameters Θ. However, since Θ contains the tree structure, we cannot apply the standard EM algorithm for learning Bayesian networks, as this algorithm relies on the ability to analytically derive a maximum likelihood estimate for the parameters (see however [18]). Instead, a new EM-like algorithm with regression tree learning is developed to infer the tree within the network.
Estimating posterior probabilities for hidden variables
Our first step is to calculate, for all hidden variables, the expectation (or equivalently, the probability of taking the value 1, since all hidden variables are binary) given the values of the observed variables. These posterior probabilities can be calculated using the formulas given in Appendix 2. The derivation of most of these formulas is fairly straight-forward, except for the calculations involving the regression tree. Computing the posterior probabilities that involve the regression tree, such as that of t R m, can be done efficiently thanks to the regression tree representation.
Maximum likelihood parameter estimation
Once the posterior probabilities of the hidden variables are computed, maximum likelihood estimators for the parameters of the network can be derived as given in Appendix 3. Again, the regression tree representing the dependence of R on B 1, ..., B |Φ| poses a significant challenge, as no efficient algorithm exists to choose the optimal tree topology. Instead, we developed a new tree learning algorithm, which adapts ideas from standard decision tree algorithms (e.g. C4.5 [19], J48 [20]). The problem at hand is novel and challenging, chiefly because the training instances are probabilistic rather than fully observed, as discussed in the next section.
Learning regression trees from probabilistic instances
Most decision tree learning algorithms are based on a greedy tree-growing approach trying to find the tree that minimizes the number of misclassifications [21]. Our tree learning algorithm is an adaptation of the standard approach, using information gain as the criterion to select which attribute to use to split a node. Consider a node N that is currently a leaf and that we are considering splitting based on some attribute B_i. The weight of a probabilistic instance at N is the probability of the path from the root to N under the instance's attribute probability distributions.
More precisely, we can define the weighted entropy at node N as the entropy of the weighted distribution of the regulatory-activity posteriors of the instances reaching N. Then, the information gain obtained by splitting a leaf N with attribute B_i to obtain two new leaves N' and N'' is defined as the weighted entropy at N minus the weight-averaged entropies of N' and N''. The attribute B_i with the largest weighted information gain is chosen as the label for N, and the corresponding children nodes N' and N'' are added. The tree grows this way until no pair of node and attribute yields a positive information gain. This is a very loose stopping criterion, and trees learned this way tend to be very large.
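One possible way to compute such a weighted information gain from probabilistic instances is sketched below (Python; a simplification in which each instance carries a weight and a posterior probability of being regulatorily active, and the gain is the drop in weighted entropy — an assumption, not the authors' exact formula).

import math

def weighted_entropy(instances):
    """instances: list of (weight, posterior Pr[R=1]) pairs reaching a node."""
    total = sum(w for w, _ in instances)
    if total == 0:
        return 0.0
    p = sum(w * r for w, r in instances) / total   # weighted Pr[R = 1]
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def information_gain(instances, attribute_values):
    """attribute_values[i] is the 0/1 value of candidate attribute B_i for
    instance i; returns the weighted gain of splitting on that attribute."""
    left = [inst for inst, v in zip(instances, attribute_values) if v == 0]
    right = [inst for inst, v in zip(instances, attribute_values) if v == 1]
    total = sum(w for w, _ in instances)
    gain = weighted_entropy(instances)
    for side in (left, right):
        side_w = sum(w for w, _ in side)
        gain -= (side_w / total) * weighted_entropy(side)
    return gain

# Toy example: four probabilistic instances split on a binary attribute.
data = [(1.0, 0.9), (0.5, 0.8), (1.0, 0.1), (0.7, 0.2)]
print(information_gain(data, [1, 1, 0, 0]))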
In order to avoid the problem of overfitting, a method called reduced-error pruning is used [21]. It uses a separate validation data set to prune the tree, and each split node in the tree is considered to be a candidate for pruning. When pruning a node, an operation called subtree replacement is performed, which involves removing the subtree rooted at that node and replacing it with a single leaf. Whether pruning is performed depends on the classification accuracy obtained by the unpruned tree and by the pruned tree over the validation set. Pruning will cause the accuracy over the training data set to decrease, but it may increase the accuracy over the test data set.
Results
Our approach was used to identify tissue-specific CRMs in human. First, we show, using a small set of experimentally verified tissue-specific CRMs, that our approach is able to discriminate between modules involved in different tissues. Then, we apply our method to a larger data set consisting of more than 6000 putative CRMs associated to genes specifically expressed in one of ten tissues, and show that interesting combinations of transcription factors can be linked to tissue-specific expression.
Data sets
We used a set of cis-regulatory modules predicted in the human genome by Blanchette et al. [12], based on a set of 481 position weight matrices from Transfac 7.2 [22]. The modules are available from the PReMod database [11]. Criteria used for the PReMod predictions include interspecies conservation of binding site predictions and homotypic clustering of binding sites. The complete data set consists of more than 100,000 predicted CRMs, but only subsets of those were used (see below). For each predicted module M_m, the predicted binding affinity is represented by the negative logarithm of the p-value of the binding site weighted density for factor Φ_f in module M_m, as reported in PReMod. Gene expression data came from the GNF Atlas 2 data set [14], downloaded from the UCSC Genome Browser [23]. A gene g in G was identified as "expressed" (i.e., its discretized expression was set to 1) if and only if its expression level was at least two standard deviations above its mean expression level over the 79 tissues for which data was available.
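As an illustration of this discretization rule (a sketch only; the array names are hypothetical), a gene is flagged as expressed in a tissue when its expression there exceeds its own mean across tissues by at least two standard deviations:

import numpy as np

def discretize_expression(expr):
    """expr: genes x tissues array of raw expression values.
    Returns a 0/1 array: 1 where a gene's expression in a tissue is at
    least two standard deviations above that gene's mean over all tissues."""
    mean = expr.mean(axis=1, keepdims=True)
    std = expr.std(axis=1, keepdims=True)
    return (expr >= mean + 2 * std).astype(int)

expr = np.random.lognormal(size=(5, 79))   # toy data: 5 genes, 79 tissues
print(discretize_expression(expr))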
Only 231 of the 481 Transfac PWMs were confidently linked to transcription factors for which GNF expression data is available. Only these 231 PWMs were considered in our analysis. Some transcription factors are actually linked to several different PWMs, but our approach actually seems to take advantage of this to improve the quality of the predictions (see below).
Validation experiments
We first use a set of experimentally verified tissue-specific CRMs, together with a set of negative control regions, to validate our algorithm. To further evaluate the performance of our approach, we compare our results with the results obtained with several simpler classifiers.
Validation data sets
To demonstrate the ability of our approach to identify tissue-specific regulatory modules, we used it to discriminate between known liver-specific CRMs, known erythroid-specific CRMs, and other modules not likely to be involved in these two cell types. Each validation data set was composed of five subsets: 1. knownLiver: 11 experimentally verified liver-specific modules [3].
2. knownErythroid: 22 experimentally verified erythroid-specific modules [24].
3. putativeLiver: A set of 31 PReMod modules located in the vicinity of the genes associated to the knownLiver modules. These modules are possibly involved in liver-specific regulation and are included only to help the Bayesian network learn the association between a module's binding site composition and the tissue-specificity of the target gene. 4. putativeErythroid: the analogous set of PReMod modules located in the vicinity of the genes associated to the knownErythroid modules. 5. negative: For each knownErythroid or knownLiver module with associated closest gene g, a set of r_neg (see below) PReMod modules associated to genes that are expressed in neither erythroid nor liver is randomly selected and artificially associated to gene g. These are modules that, if placed in the vicinity of gene g, would be unlikely to cause liver- or erythroid-specific expression.
The ratio r_neg of the number of negative modules to the number of known modules determines in part the difficulty of the classification task. Two types of validation data sets were thus created: in our 1X experiment (see below), we used r_neg = 1, whereas in our 2X data set, we used r_neg = 2.
Each 1X data set thus contains 143 modules, each of which was considered as possibly liver- or erythroid-specific.
Three simple classifiers
To assess the quality of our method, we compare it to three other, simpler approaches. The first classifier, called the expressionOnly classifier, simply predicts that any module located next to a gene that is expressed in a given tissue is a tissue-specific module for that tissue. That is, the binding site content of the module is ignored, and only the expression of the target gene is used to make the prediction.
The second simple classifier, called SupervisedNaiveBayes, is a classical supervised Naive Bayes approach that takes as input a simplified, observable version of the B variables, obtained by setting each of them to the product of the corresponding TF expression and affinity values (F · A), as well as the expression of the target gene, and is trained to distinguish between labeled positive and negative examples (see Appendix 4 for the complete details). Finally, the third simple classifier, called NaiveBayesInNet, is a version of our Bayesian network classifier in which the regression tree representing the conditional probability of R is replaced by a Naive Bayes classifier, but where the rest of the structure is preserved. See Appendix 5 for more details.
Validation results
One hundred different runs of our EM-like algorithm were done on the 1X and 2X datasets, each time with a different sample of negative modules. Each run used 100 EM-like iterations (taking approximately 10 minutes of running time), which was sufficient to achieve convergence, although different runs converge to slightly different likelihoods and regression trees (see Additional File 1). Since we do not know which of the putativeLiver and putativeErythroid CRMs are actually tissue-specific modules, we evaluate the performance of our algorithm based only on the positive and the negative modules. For each run, the network with the best likelihood over the 100 EM-like iterations is used to compute the posterior probability of regulatory activity, Pr[R | A, E, F], for all examples, and a module-tissue pair is predicted positive if this probability exceeds some threshold. The resulting precision-recall curve, averaged over all 100 runs, is shown in Figure 3 for both the 1X and 2X data sets.
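A small Python sketch of how such a precision-recall curve can be traced from the predicted posteriors follows (illustrative only; variable names are hypothetical, and only the positive and negative examples are scored, as in the text).

def precision_recall_curve(scores, labels, thresholds):
    """scores: posterior probability of regulatory activity per (module, tissue) pair;
    labels: 1 for known positive modules, 0 for negative modules."""
    curve = []
    for thr in thresholds:
        predicted = [s >= thr for s in scores]
        tp = sum(p and l for p, l in zip(predicted, labels))
        fp = sum(p and not l for p, l in zip(predicted, labels))
        fn = sum((not p) and l for p, l in zip(predicted, labels))
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        curve.append((recall, precision))
    return curve

print(precision_recall_curve([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0], [0.5, 0.3]))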
Since 13 out of the 33 known CRMs have target genes expressed neither in liver nor in erythroid (based on our discretization of expression data), the expressionOnly classifier yields a recall of 60.6% and a precision of 50% on the 1X data set, but only a precision of 33% on the 2X data set.
As seen from the curves, our method significantly outperforms both Naive Bayes-based approaches for mid- to high-precision predictions. Our method can improve the precision to 72% for the 1X data sets and 66.2% for the 2X data sets. Notice that the highest precision for the 2X data sets remains close to that for the 1X data sets, although almost twice as many negative examples are considered. This indicates that our approach provides a way to improve the precision of prediction by combining the sequence data and the expression data. Figure 4 shows the regression trees generated from one run for the 1X and 2X data sets. Each internal node tests the value of an attribute B_f, which indicates whether factor Φ_f is predicted to bind the module in the tissue under consideration. Each leaf shows the predicted conditional probability, which is the probability of R = 1 under the condition specified by the path from the root to the leaf.
Regression trees
The tree structure indicates what are the most important TFs or combinations of TFs for explaining liver-specific and erythroid-specific expression. Our algorithm successfully detects most known liver-specific TFs and combinations thereof, like HNF1 + HNF4, HNF1 + C/EBP, and HNF4 + C/EBP, which are reported in the literature [3]. The erythroid-specific TF GATA1 is also reported in the trees. The trees do not contain many erythroid-specific nodes, firstly because there are only two TFs (GATA1 and NF-E2) that are erythroid-specific based on our expression data, and secondly because NF-E2 has very few predicted binding sites on the genome. We observe from the trees that the leaves associated with TF combinations usually have higher regulatory probabilities than the leaves associated with individual TFs. This indicates that the ability to identify TF combinations is key to being able to identify cis-regulatory modules. We emphasize that the trees were obtained without any prior information about which of the 231 PWMs used are involved in liver- or erythroid-specific expression.
Notice that TF PPAR is reported in our trees. PPAR is indeed an important factor regulating expression in liver [25], but was absent from Krivan and Wasserman's paper [3], from which we obtained the known liver-specific CRMs. Most importantly, the expression of PPAR is low in both liver and erythroid, so the expression indicator of PPAR is 0 in both tissues. This shows that our approach is robust to noise in the expression data of TFs, provided the association between the binding sites in modules and the target gene's expression is sufficiently high. Finally, we note the unexpected selection of several different matrices for the same transcription factor along the same path in the tree (for example C/EBP matrices M770 and M190 on the tree obtained for the 1X data set in Figure 4). This is caused by the fact that these matrices are actually quite different from each other, and the presence of sites for both matrices increases the association to the target gene's expression.
Genome-wide CRM prediction in ten tissues
We next extended our analysis to ten different tissues from the GNF Atlas 2: {brain, erythroid, thyroid, pancreatic islets, heart, skeletal muscle, uterus, lung, kidney, and liver}. 923 genes are specifically expressed (i.e., have discretized expression 1) in at least one of these tissues, and a total of 6278 modules are associated to these genes. We thus trained our Bayesian network on a set of 10 × 6278 = 62,780 (module, tissue) pairs. Ten parallel runs of 100 EM-like iterations were performed from different random initializations, each taking approximately 24 hours.
The regression tree obtained from the best run is shown in Figure 5. We can clearly observe from the tree that the positive assignments along each path leading to a leaf typically consist of TFs expressed in the same tissue. Several known tissue-specific combinations of TFs are recovered in the tree, such as C/EBP + HNF1 and C/EBP + HNF4 in liver. Also, many new and potentially meaningful TF combinations are predicted, such as C/EBP + AR in liver and Tax/CREB + GATA1 in erythroid.
The tree only contains the TFs expressed in four tissues: liver, erythroid, heart, and skeletal muscle. The other six tissues are not represented in the tree, notably because the TFs that regulate the genes expressed in those tissues have low expression levels.
Figure 3. Precision-recall curves on the validation data sets. The blue curve (diamond markers) is generated from the results of our approach, the brown curve (× markers) is generated from the results of the SupervisedNaiveBayes approach (see Appendix 4), and the green curve (circle markers) is generated from the results of the NaiveBayesInNet classifier (see Appendix 5). The pink triangle shows the result obtained by the expressionOnly classifier. Error bars denote one standard deviation of the precision, over 100 random choices of negative examples. The increase in the standard deviation on precision at lower recall is due to the small number of predictions made for these thresholds.
Figure 4. The regression tree generated by the iteration with the best likelihood for a 1X (top) and 2X (bottom) data set. Internal nodes corresponding to liver-specific transcription factors are colored yellow, and those corresponding to erythroid-specific factors are red.
Figure 5. Regression tree obtained from the best of ten runs on the set of 6,278 modules and 10 tissues. Nodes are colored based on the tissue in which a particular factor is expressed. The complete set of tissue-specificity predictions is available at http://www.mcb.mcgill.ca/~xiaoyu/tissue-specificModule.
Statistical analysis of TF combinations
The regression trees obtained in the 10 runs vary substantially in their structure but share many of their factors and combinations of factors. The frequency with which factors or combinations of factors are found in these trees is an indication of their role in regulating tissue-specific expression. A pair of factors is said to co-occur in a regression tree if the tree contains a path along which both factors take value 1. As seen in Tables 1 and 2, several factors and pairs of factors are consistently identified as part of the tree. Most TFs found are either known to be directly involved in tissue-specific regulation (in bold in Table 1), or to be essential for the expression of certain genes in the given tissues while also having other non-tissue-specific roles (normal font in Table 1).
Table 1. Transcription factors present in the regression tree in at least five of the 10 runs. References in bold refer to papers arguing for tissue-specificity of the given factor in the given tissue, whereas those in normal font point to papers showing the involvement of the given TF for the proper expression of some gene(s) expressed in the given tissue, but where the TF is not tissue-specific.
Table 2. Transcription factor pairs present together on the same path of the regression tree in at least three of the 10 runs.
Predicting gene tissue-specificity
To further validate our module tissue-specificity predictions, we investigated whether a gene's tissue-specific fine-grained expression level could be predicted based on the modules regulating it. To this end, for each tissue t, we separated genes between highly expressed (discretized expression 1) and lowly expressed (discretized expression 0). For each gene g, let its module score be the maximum of the predicted regulatory activities of the modules associated to g. We asked whether this score is predictive of the raw, non-thresholded expression level of gene g. In the case of lowly expressed genes, such a correlation would show that we are able to detect tissue-specific genes even if their expression level is below the threshold. For highly expressed genes, this correlation would show that genes with very high tissue-specific expression levels are associated to stronger module predictions than those that barely meet our threshold. We note that in both cases, such a correlation could not be explained by any kind of training artifact, since raw expression data is not part of the input.
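A possible way to compute this gene-level score and its correlation with raw expression is sketched below (Python, using Pearson correlation as an example; the data structures are hypothetical placeholders for the predicted module activities and the GNF expression values, and the paper does not specify the exact correlation measure).

from scipy.stats import pearsonr

def gene_scores(module_activity, gene_to_modules):
    """module_activity: dict module_id -> predicted probability of regulatory activity;
    gene_to_modules: dict gene_id -> list of associated module ids.
    The score of a gene is the maximum activity over its modules."""
    return {g: max(module_activity[m] for m in mods)
            for g, mods in gene_to_modules.items() if mods}

def correlation_with_expression(scores, raw_expression):
    """raw_expression: dict gene_id -> raw (non-thresholded) expression level."""
    genes = sorted(set(scores) & set(raw_expression))
    r, p_value = pearsonr([scores[g] for g in genes],
                          [raw_expression[g] for g in genes])
    return r, p_value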
Considering genes showing tissue-specific expression (discretized expression 1), we find that eight of the ten tissues considered (all but "whole brain" and "erythroid") exhibit a positive correlation between the module score and the raw gene expression.
Somewhat surprisingly, the correlation is strongest for thyroid (p-value = 0.028) and skeletal muscle (p-value = 0.015), two tissues that were relatively poorly represented in our regression tree. Among lowly expressed genes, the correlation is weaker but is positive in seven of the ten tissues (all except heart, skeletal muscle, and liver). These results indicate that our predictions yield a weak predictor of gene tissue-specificity. Clearly, it is easier to predict the modules responsible for a gene's observed tissue-specificity than to predict the tissue-specificity of a gene based on its modules.
Discussion and conclusion
The approach we introduced here is the first to integrate binding site predictions and tissue-specificity of expression of both transcription factors and target genes to predict cis-regulatory modules involved in regulating tissue-specific gene expression. By introducing a regression tree at the heart of the network and deriving practical algorithms to train it, we are able to accurately identify important combinations of transcription factors regulating gene expression in a tissue-specific manner. The algorithms derived for learning this type of network will undoubtedly be applicable to a wide range of problems.
Many of the choices made in the design of the Bayesian network were made for computational practicality reasons. As we improve the learning algorithm, it will become possible to use real-numbered expression measurements.
Furthermore, our network could easily be extended by introducing additional sources of information as observed variables. For example, ChIP-chip and other binding assay data, when available, can be used to affect our belief in the binding variables B. Reporter assays and DNA accessibility assays could be used to modify our belief in the regulatory activity variables R. If modeled correctly, these types of experimental data may greatly increase the accuracy of our predictions, not only for the modules or the factors for which data is available, but also for other regions or factors associated to similar functions.
The approach we described is potentially applicable to a wide range of data sets. While the relative inefficiency of the current learning algorithm prevented us from analyzing the complete set of tissue-specific expression from GNF, it is clear that this analysis, involving 79 tissues, would yield a wealth of information. Another possible application is to identify and characterize cis-regulatory modules involved in time- and tissue-specific regulation during fish development. The large body of in situ hybridization data available in zebrafish [26] would provide an excellent basis for this analysis.
Here, γ = 0.01 is a parameter that indicates the prior probability that an expressed TF will bind a generic genomic region. | 8,612.8 | 2007-12-21T00:00:00.000 | [
"Biology",
"Computer Science"
] |
A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench
Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for the industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions to analyze vast amounts of data. Due to its application programming interface (API) availability and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both these frameworks have more than 150 parameters, and the combination of these parameters has a massive impact on cluster performance. The default system parameters help the system administrator deploy their applications without much effort, and they can measure their specific cluster performance with factory-set parameters. However, an open question remains: can a new parameter selection improve cluster performance for large datasets? In this regard, this study investigates the most impactful parameters, covering resource utilization, input splits, and shuffle, to compare the performance between Hadoop and Spark, using a cluster implemented in our laboratory. We used a trial-and-error approach for tuning these parameters based on a large number of experiments. In order to evaluate the frameworks in a comparative analysis, we selected two workloads: WordCount and TeraSort. The performance metrics are based on three criteria: execution time, throughput, and speedup. Our experimental results revealed that the performance of both systems heavily depends on input data size and correct parameter selection. The analysis of the results shows that Spark has better performance as compared to Hadoop when data sets are small, achieving up to a two-times speedup in WordCount workloads and up to 14 times in TeraSort workloads when the default parameter values are reconfigured.
machine data, system data, and browsing history [2]. This massive amount of digital data makes it a challenging task for management to store, process, and analyze.
Conventional database management tools are unable to handle this type of data [3]. Big data technologies, tools, and procedures allowed organizations to capture, speedily process, and analyze large quantities of data and extract appropriate information at a reasonable cost.
Several solutions are available to handle these problems [4]. Distributed computing is one possible solution, considered the most efficient and fault-tolerant method for companies to store and process massive amounts of data. Among this new group of tools, MapReduce and Spark are the most commonly used cluster computing tools. They provide users with various functions through simple application programming interfaces (APIs). MapReduce is a distributed computing framework for parallel processing, designed purposely to write, read, and process bulky amounts of data [1,5,6]. This data processing framework comprises three stages: the Map phase, the Shuffle phase, and the Reduce phase. In this technique, large files are divided into several small blocks of equal size and distributed across the cluster for storage. MapReduce and the Hadoop distributed file system (HDFS) are core parts of the Hadoop system, so computing and storage work together across all nodes that compose a cluster of computers [7].
Apache Spark is an open-source cluster-computing framework [8]. It is designed based on Hadoop, and its purpose is to build a programming model that "fits a wider class of applications than MapReduce while maintaining the automatic fault tolerance" [9]. It is not only an alternative to the Hadoop framework, but it also provides various functions to process real-time streaming data. Apart from the map and reduce functions, Spark also supports MLlib, GraphX, and Spark Streaming for big data analysis. Hadoop MapReduce processing speed is slow because it requires accessing disks for reads and writes. On the other hand, Spark uses memory to store data, reducing the read/write cycle [1]. In this paper, we have addressed the above-mentioned critical challenges. To our knowledge, none of the previous works have addressed those challenges. Our proposed work will help system administrators and researchers understand system behavior when processing large-scale data sets. The main contributions of this paper are as follows: • We introduced a comprehensive empirical performance analysis between the MapReduce and Spark frameworks by correlating resource utilization, split size, and shuffle behavior parameters. To our knowledge, few previous studies have presented such information. Considering this point, the authors have focused on a comprehensive study of the impact of various parameters with a large data set instead of a large number of workloads. • We accomplished a comprehensive comparison between Hadoop and Spark where large-scale datasets (600 GB) are used for the first time. The experiments present various aspects of cluster performance overhead. We applied two HiBench workloads to test the efficiency of the system under MapReduce and Spark, where the data sets are repeatedly changing.
• We selected several parameters covering different aspects of system behavior. Multiple parameters are used to tune job performance. The results of the analysis will facilitate job performance tuning and give more freedom to choose suitable parameter values to enhance job efficiency. • We ensured the consistency of the measurements by repeating each experiment three times and taking the average execution time for each job. Besides, we investigate the system execution time, maximum sustainable throughput, and speedup. • We used a real cluster capable of handling a large-scale data set (600 GB) with benchmarking tools for a comprehensive evaluation of MapReduce and Spark.
The remainder of the paper is organized as follows: "Related work" section presents a critical review of related research works, and then describes Hadoop and Spark systems. The difference between Hadoop and Spark is explained in "Difference between Hadoop and Spark" section. The experimental setup is presented in "Experimental setup" section. In "The parameters of interest and tuning approach" section, we explain the chosen parameters and tuning approach. "Results and discussion" section presents the performance analysis of the results and finally, we conclude in "Conclusion" section.
Related work
Shi et al. [10] proposed two profiling tools to quantify the performance of the MapReduce and Spark frameworks based on a micro-benchmark experiment. The comparative study between these frameworks was conducted with batch and iterative jobs. In their work, the authors consider three components: shuffle, execution model, and caching. The workloads WordCount, k-means, Sort, Linear Regression, and PageRank are chosen to evaluate the system behavior as CPU-bound, disk-bound, and network-bound [11]. They disabled the map and reduce functions for all workloads apart from Sort. For Sort, the job is configured with up to 60 map tasks and 120 reduce tasks. The map output buffer is allocated 550 MB to avoid additional spills when sorting the map output. Spark intermediate data are stored on 8 disks, where each worker is configured with four threads. The authors claim that Spark is faster than MapReduce when WordCount runs with different data sets (1 GB, 40 GB, and 200 GB). TeraSort is implemented using the sort-by-key() function. They found that Spark is faster than MapReduce when the data set is smaller (1 GB), but MapReduce is nearly two times faster than Spark for bigger data sets (40 GB or 100 GB). Besides, Spark is one and a half times faster than MapReduce with machine learning workloads such as k-means and Linear Regression. It is claimed that in subsequent iterations, Spark is five times faster than MapReduce due to RDD caching, and Spark-GraphX is four times faster than MapReduce.
Li et al. [12] proposed a spark benchmarking suite [13], which significantly enhances the optimization of workload configuration. This work has identified the distinct features of each benchmark application regarding resource consumption, the data flow, and the communication pattern that can impact the job execution time. The applications are characterized based on extensive experiments using synthetic data sets. There are ten different workloads such as Logistic Regression, Support Vector Machine, Matrix Factorization, Page Rank, Tringle Count, SVD++, Hive, RDD Relation, Twitter, and PageView used with different input data sizes. An eleven nodes virtual cluster is used to analyze the performance of the workloads. The workload analysis is carried out concerning CPU utilization, memory, disk, and network input/output consumption at the time of job execution. They have found that most of the workloads spend more than 50% execution time for MapShuffle-Tasks except logistic regression. They concluded that the job execution time could be reduced while increasing task parallelism to leverage the CPU utilization fully.
Thiruvathukal et al. [14] have considered the importance and implications of languages such as Python and Scala built on the Java Virtual Machine (JVM) to investigate how the individual language affects the systems' overall performance. This work proposed a comprehensive benchmarking test for Message Passing Interface (MPI) and cloud-based applications considering typical parallel analysis. The proposed benchmark techniques are designed to emulate a typical image analysis. They presented one mid-size (Argonne Leadership Computing Facility) cluster with 126 nodes, which runs on COOLEY [14], and a large-scale supercomputer (Cray XC40) cluster with a single node, which runs on THETA [14]. Significantly, they increased the values of some important Spark parameters (driver memory and executor memory) according to the machine resources. They recommended that the COOLEY and THETA frameworks are beneficial for immediate research work and high-performance computing (HPC) environments.
Marcue et al. [15] present a comparative analysis between the Spark and Flink frameworks for large-scale data analysis. This work proposed a new benchmarking methodology for iterative workloads (K-Means and PageRank) and batch processing workloads (WordCount, Grep, and TeraSort). They considered the four most important parameters that impact scalability, resource consumption, and execution time. Grid 5000 [16] was used with clusters of up to 100 nodes deploying Spark and Flink. They observed that the configuration of Spark parameters (i.e., parallelism and partitions) is sensitive and depends on the data sets, while Flink is strongly memory-oriented.
Samadi et al. [7] have investigated the criteria for the performance comparison between the Hadoop and Spark frameworks. In their work, for an impartial comparison, the input data size and configuration remained the same. Their experiment used eight benchmarks of the HiBench suite [13]. The input data was generated automatically for every case and size, and the computation was performed several times to find the execution time and throughput. When they deployed the micro-benchmarks (Sort and TeraSort) on both systems, Spark showed higher processor involvement in I/O while Hadoop mostly processed user tasks. On the other hand, Spark's performance was excellent when dealing with small input sizes, such as the micro and web search (PageRank) workloads. Finally, they concluded that Spark is faster and very strong for processing data in memory, while Hadoop MapReduce performs its map and reduce functions on disk.
In another paper, Samadi et al. [9] proposed a virtual machine based on Hadoop and Spark to get the benefit of virtualization. This virtual machine's main advantage is that it can perform all operations even if the hardware fails. In this deployment, they used the CentOS operating system and built a Hadoop cluster in pseudo-distributed mode with various workloads. In their experiments, they deployed the Hadoop machine on a single workstation and all other daemons on its JVM. To justify the big data framework, they presented the results of a Hadoop deployment on Amazon Elastic Compute Cloud (EC2). They concluded that Hadoop is a better choice because Spark requires more memory resources than Hadoop. Finally, they suggested that the cluster configuration is essential to reduce job execution time, and the cluster parameter configuration must align with the Mappers and Reducers.
The computational frameworks, namely Apache Hadoop and Apache Spark, were investigated by [17]. In this investigation, an Apache web server log file was taken into consideration to evaluate the comparative performance of the two frameworks. In these experiments, they used Okeanos's virtualized computing resources based on Infrastructure as a Service (IaaS), developed by the Greek Research and Technology Network [17]. They proposed a number of applications and conducted several experiments to determine each application's execution time, varying the input files and the number of slave nodes. They found that the execution time is proportional to the input data size and concluded that the performance of Spark is much better in most cases as compared to Hadoop.
Satish and Rohan [18] have shown a comparative performance study between Hadoop MapReduce and Spark based on the K-means algorithm. In this study, they used a specific data set that suits this algorithm and considered both single-node and two-node setups when gathering each experiment's execution time. They concluded that Spark's speed reaches up to three times that of MapReduce, though Spark's performance heavily depends on sufficient memory size [19].
Lin et al. [20] have proposed a unified cloud platform, including batch processing ability, over standalone log analysis tools. This investigation considered four different frameworks: Hadoop, Spark, and the warehouse data analysis tools Hive and Shark. They implemented two machine learning algorithms (K-means and PageRank) on this framework with six nodes to validate the cloud platform, using different data sizes as inputs. In the case of K-means, as the data size increased and exceeded the memory size, the scheduling latency grew and the overall Spark performance degraded. However, the overall performance was still six times higher than Hadoop on average. On the other hand, Shark shows significant performance improvement when using queries directly from disk.
Petridis et al. [21] have investigated the most important Spark parameters, shown in Table 4, and given a guideline to developers and system administrators for selecting the correct parameter values by replacing the default values based on a trial-and-error methodology. Three types of case studies, covering Shuffle Behavior, Compression and Serialization, and Memory Management parameters, were performed in this study. They highlighted the impact of memory allocation and serialization when the number of cores and the default parallelism values change. In total, 12 parameters were chosen with three benchmarking applications: sort-by-key, shuffling, and k-means. The sort-by-key experiments used both 1 million and 1 billion key-values of lengths 10 and 90 bytes, and the optimal degree of partitioning is set to 640. The Hash performance is improved to 127 s, which is 30 s faster than the default parameter, and tuning shuffle.file.buffer improves performance by 140 s. The rest of the parameters do not play any important role in improving the performance. For another shuffling experiment, they used a 400 GB dataset. The Hash shuffle performance is degraded by 200 s, and the Tungsten-Sort speed is increased by 90 s. By decreasing the buffer size from 32 to 15 KB, the system performance was degraded by about 135 s, which is more than 10% from the primary selection. For K-means, they used two input data sizes (100 MB and 200 MB) and did not find significant k-means performance improvement by changing the parameters. They concluded that, based on their methodology, the achievable speedup is tenfold. However, the main challenge of tuning Hadoop and Spark configuration parameters lies in the complicated behavior of distributed large-scale systems, and the parameter selection is not always trivial for system administrators. An inappropriate combination of parameter values can affect the overall system performance.
The published literature in Table 1 presents some empirical studies. None of these studies have considered larger data sizes (600 GB), more parameters, and real clusters. In our study, we chose a conventional trial-and-error approach [21], larger data set, and 18 important parameters (listed in Tables 3 and 4) from resource utilization, input splits, and shuffle category.
Difference between Hadoop and Spark
Hadoop [22] is a very popular and useful open-source software framework that enables distributed storage, including the capability of storing large big datasets across clusters. It is designed in such a way that it can scale up from a single server to thousands of nodes. Hadoop processes large data concurrently and produces results quickly. HDFS [23] splits files into small blocks and saves them on different nodes. There are two kinds of nodes in HDFS: data-nodes (workers) and name-nodes (master nodes) [24,25]. All operations, including delete, read, and write, are based on these two types of nodes. The HDFS workflow is as follows: first, the client asks the name-node for access permission. If accepted, the name-node turns the file name into a list of HDFS block IDs, including the data-nodes that store the blocks related to that file. The ID list is then sent back to the client, and the user can perform further operations based on it.
MapReduce [26] is a computing framework that includes two operations: Mappers and Reducers. The mappers process files based on the map function and transform them into new key-value pairs [27]. Next, the new key-value pairs are assigned to different partitions and sorted based on their keys. The combiner is optional and can be seen as a local reduce operation that counts the values with the same key in advance to reduce the I/O pressure. Finally, the partitioner divides the intermediate key-value pairs into different pieces and transfers them to a reducer. MapReduce also needs to implement one further operation: shuffle. Shuffle means transferring the mapper output data to the proper reducer. After the shuffle process is finished, the reducer starts some copy threads (Fetcher) and obtains the output files of the map task through HTTP [28]. The next step is merging the output into different final files, which are then recognized as reducer input data. After that, the reducer processes the data based on the reduce function and writes the output back to HDFS. Figure 1 depicts the Hadoop MapReduce architecture.
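As an illustration of the map and reduce stages described above, a minimal word-count job can be expressed as a pair of Python functions usable with Hadoop Streaming (a sketch only, normally kept in two separate scripts; it is not the HiBench workload implementation used in this study).

# mapper: emits (word, 1) pairs, one per line, which the shuffle stage sorts by key.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

# reducer: receives lines sorted by key and sums the counts for each word.
def reducer():
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")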
Spark was developed by Zaharia at UC Berkeley's AMPLab in 2009 and became an open-source project in 2010 [4,29]. Spark offers numerous advantages for developers building big data applications. Spark proposed two important concepts: Resilient Distributed Datasets (RDD) and the Directed Acyclic Graph (DAG). These two techniques work together and make Spark up to tens of times faster than Hadoop under certain circumstances, even though it usually only achieves a performance two to three times faster than MapReduce. RDDs can be built from multiple sources, have a fault-tolerance mechanism, can be cached, and support parallel operations. Besides, an RDD can represent a single dataset with multiple partitions. When Spark runs on a Hadoop cluster, RDDs are created from HDFS in any of the formats supported by Hadoop, such as text and sequence files. The DAG scheduler [30] expresses the dependencies of RDDs. Each Spark job creates a DAG, the scheduler divides the graph into different stages of tasks, and the tasks are then launched on the cluster. The DAG is created for both the map and reduce stages to express the dependencies fully. Figure 2 illustrates the iterative operation on RDDs. Theoretically, limited Spark memory causes the performance to slow down.
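For comparison with the MapReduce sketch above, the same word-count logic expressed with Spark's RDD API looks roughly as follows (Python/PySpark; the input and output paths are placeholders, and this is not the HiBench implementation used in the experiments).

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()
sc = spark.sparkContext

# Each transformation adds a node to the DAG; nothing executes until an action
# (saveAsTextFile) triggers the scheduler to split the DAG into stages of tasks.
counts = (sc.textFile("hdfs:///data/wordcount/input")    # placeholder path
            .flatMap(lambda line: line.split())           # map stage
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))              # shuffle + reduce stage

counts.saveAsTextFile("hdfs:///data/wordcount/output")    # placeholder path
spark.stop()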
Cluster architecture
In the last couple of years, many proposals came from different research groups about the suitability of the Hadoop and Spark frameworks when various types and sizes of data are used as input in different clusters. Therefore, it becomes necessary to study the performance of the frameworks and understand the influence of the various parameters. For the experiments, we present our cluster performance based on MapReduce and Spark using the HiBench suite [13]. In particular, we have selected two HiBench workloads out of the thirteen standard workloads to represent two types of jobs, namely WordCount (an aggregation job) [32] and TeraSort (a shuffle job) [33], with large datasets. We selected these two workloads because of their contrasting characteristics, in order to study how well each analyzes the cluster performance by correlating the MapReduce and Spark functions with combinations of groups of parameters.
Hardware and software specification
The experiments were deployed on our own cluster. The cluster is configured with 1 master and 9 slave nodes, as presented in Fig. 3. The cluster has 80 CPU cores and 60 TB of local storage. The implemented hardware is suitable for handling various difficult situations in Spark and MapReduce.
The detailed Hadoop cluster and software specifications are presented in Table 2. All our jobs run on Spark and MapReduce. We have selected YARN as the resource manager, which can help us monitor each working node's situation and track the details of each job with its history. We have used Apache Ambari to monitor and profile the selected workloads running on Spark and MapReduce. It supports most of the Hadoop components, including HDFS, MapReduce, Hive, Pig, HBase, ZooKeeper, Sqoop, and HCatalog [34]. Besides, Ambari allows the user to control the Hadoop cluster in three aspects, namely provisioning, management, and monitoring.
Workloads
As stated above, in this study we chose two workloads for the experiments [32,33]: WordCount: The WordCount workload is map-dependent; it counts the number of occurrences of each word in a text or sequence file. The input data is produced by RandomTextWriter. The map function splits the text into individual words and generates intermediate key-value pairs for the reduce function [35]. The reduce function then adds up the intermediate results to generate the final word counts.
TeraSort: The TeraSort package was released by Hadoop in 2008 [36] to measure the capabilities of cluster performance. The input data is generated by the TeraGen function, which is implemented in Java. The TeraSort function does the sorting using MapReduce, and the TeraValidate function is used to validate the output of the sorted data. For both workloads, we used up to 600 GB of synthetic input data generated using a string concatenation technique.
The parameters of interest and tuning approach
Tuning parameters in Apache Hadoop and Apache Spark is a challenging task. We want to find out which parameters have important impacts on system performance. The configuration of the parameters needs to be investigated according to the workload; the parameters considered are listed in Tables 3 and 4.
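To illustrate how such parameters can be set programmatically, the sketch below builds a Spark session with resource, shuffle, and input-split settings of the kind examined in this study (Python; the specific values are examples for illustration, not the recommended settings, and the Hadoop property shown is passed through to the HDFS layer).

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("ParameterTuningSketch")
         # resource utilization parameters
         .config("spark.executor.instances", "50")
         .config("spark.executor.memory", "8g")
         .config("spark.executor.cores", "4")
         # shuffle behavior parameters
         .config("spark.shuffle.file.buffer", "128k")
         .config("spark.reducer.maxSizeInFlight", "192m")
         .config("spark.default.parallelism", "300")
         # input splits: HDFS block size applied when writing data (example value)
         .config("spark.hadoop.dfs.blocksize", str(256 * 1024 * 1024))
         .getOrCreate())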
Results and discussion
In this section, the results obtained after running the jobs are evaluated. We have used synthetic input data and used the same parameter configuration for a realistic comparison. Each test was repeated three times, and the average runtime was plotted in each graph. For both frameworks, we show the execution time, throughput, and speedup to compare the two frameworks and visualize the effects of changing the default parameters.
Execution time
The execution time is affected by the input data sizes, the number of active nodes, and the application types. For a fair comparative analysis, we have fixed the same parameters, such as the number of executors to 50, executor memory to 8 GB, and executor cores to 4.
Fig. 4 The performance of the WordCount application with a varied number of input splits and shuffle tasks.
Figure 4a, b show how the MapReduce and Spark execution times depend on the dataset size and the different input split and shuffle parameters. The MapReduce WordCount workload with the default input split size (128 MB) and shuffle parameters (sort.mb 100, sort.factor 2047) obtained a better execution time for all data sizes compared to the other settings. The Hadoop map and reduce functions behave better because of their faster execution and the container initialization overhead that can be overlooked for this type of workload. This result suggests that the default parameters are more suitable for our cluster when using data sizes from 50 to 600 GB.
In Fig. 4c, the default input split size of Spark is 128 MB. As mentioned previously, the number of executors, executor memory, and executor cores are fixed. From Fig. 4c, we see that an input split size of 256 MB outperforms the default setup for data sizes up to 450 GB. In fact, the default split size (128 MB) is more efficient when the data size is larger than 450 GB; notably, the default parameter shows better execution performance when the data set reaches 500 GB or above. The new parameter value can improve the processing efficiency by 2.2% over the default value (128 MB). Table 5 presents the experimental data of the WordCount workload for MapReduce and Spark while the default parameters are changed.
For the Spark shuffle parameter, we have chosen the default serializer (JavaSerializer) because of its simplicity and the easy control of the serialization performance [37]. In this category, the default PL value is 100 [37]. We can see from Fig. 4d that the improvement rate increases significantly when we set the PL value to 300. It is evident that the best performance is achieved for sizes larger than 400 GB. Also, when tuning the PL value to 300, the system can achieve a 3% improvement for the rest of the data sizes. Consequently, we can conclude that input splits can be considered an important factor in enhancing Spark WordCount jobs' efficiency when executing small datasets. Figure 5a compares MapReduce TeraSort workloads based on input splits, including the default parameters. In this analysis, we have kept the (Red_Task and InSp) values fixed with the default split size of 128 MB. We then changed the parameter values and tested whether the split size retains its impact on the runtime, selecting three different sizes: 256 MB, 512 MB, and 1024 MB. We have observed that with a split size of 256 MB, the execution performance increases by around 2% for datasets up to 300 GB. On the contrary, when the data sizes are larger than 300 GB, the default size outperforms a split size of 512 MB. Moreover, we have noticed that the improvement rates are similar when the data sizes are smaller than 200 GB. Figure 5b illustrates the execution performance with the MapReduce shuffle parameters for the TeraSort workload. We have seen that the average execution time behaves linearly for sizes up to 450 GB when the parameters change to (Reduce_150 and task.io_45) as compared to the default configuration (Reduce_100 and task.io_30). We have also noticed that the default configuration outperforms all other settings when the data sizes are larger than 450 GB. So, we can conclude that by changing the shuffle values, the system execution performance improves by 1%. In general, it is very unlikely that the default size has optimum performance for larger data sizes. Figure 5c illustrates the Spark input split parameter execution performance analysis for the TeraSort workload. The Spark executor memory, number of executors, and executor cores are fixed while changing the block size to measure the execution performance. Apart from the default block size (128 MB), three block sizes (256 MB, 512 MB, and 1024 MB) are taken into consideration. Our results reveal that block sizes of 512 MB and 1024 MB present better runtimes up to 500 GB data size. We have also observed a significant performance improvement achieved by the 1024 MB block size, which is 4% when the data size is larger than 500 GB. Thus, we can conclude that by increasing the input split block size for large-scale data sizes, Spark performance can be increased. Figure 5d shows the Spark shuffle behaviour performance for TeraSort workloads. We have taken two important default parameters (buffer = 32 KB, spark.reducer.maxSizeInFlight = 48 MB) into our analysis. We have found that when the buffer and maxSizeInFlight are increased to 128 KB and 192 MB, the execution performance increases proportionally up to 600 GB data sizes. Our results show that the default execution is equal to that of the tested values for data sizes up to 200 GB.
Fig. 5 The performance of the TeraSort application with a varied number of input splits and shuffle tasks.
The possible reason for this performance improvement is the larger split size handled by the different executors. Table 6 presents the experimental data of the TeraSort workload between MapReduce and Spark while the default parameters are changed. Figure 6a illustrates the comparison between Spark and MapReduce for the WordCount and TeraSort workloads after applying the different input splits. We have observed that Spark with WordCount workloads shows higher execution performance by more than two times when data sizes are larger than 300 GB. For the smaller data sizes, the performance improvement gap is around ten times. Figure 6b shows the TeraSort workload for MapReduce and Spark. We can see that the Spark execution performance is linear and grows proportionally as the data size increases. Also, we noticed that the runtime of MapReduce jobs is not as linear in relation to the data size as that of Spark jobs. The possible reasons could be unavoidable background activity on the cluster and the fact that the dataset is larger than the available RAM. So, we conclude that MapReduce has slower data sharing capabilities and takes longer for read-write operations than Spark [4].
Throughput
The throughput metrics are all in MB per second. For this analysis, we only considered the best results from each category. We have observed that the MapReduce throughput for the TeraSort workload decreases slightly as the data size grows beyond 200 GB, while for the WordCount workload the MapReduce throughput is almost linear. For the Spark TeraSort workload, the throughput is not constant, but for the WordCount workload it is almost constant. In this analysis, the main focus was to present the throughput difference between the WordCount and TeraSort workloads for MapReduce and Spark. We found that the WordCount workload remains almost stable for most of the data sizes, and for the TeraSort workload, MapReduce remains more stable than Spark (see Fig. 7).
Fig. 6 The comparison of Hadoop and Spark with WordCount and TeraSort workloads with varied input splits and shuffle tasks.
Figure 8a, b depict the individual workload speedups. The best results from each category are taken into consideration to compute the speedup. From these figures, we can see that as the data size increases, the WordCount workload speedup decreases with some non-linearity, and the TeraSort speedup decreases when data sizes exceed 300 GB. Notably, as the data size increases to more than 500 GB for both workloads, the speedup starts to increase again. Figure 8c illustrates the speedup comparison between the workloads. It can be seen that the TeraSort workload outperforms the WordCount workload and achieves an all-time maximum speedup of around 14 times. The literature reports that Spark is up to ten times faster than Hadoop under certain circumstances and, in normal conditions, only achieves a performance two to three times faster than MapReduce [38]. However, this study found that Spark performance degrades when the input data size is big.
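For clarity, the two metrics can be computed from the measured execution times as in the short sketch below (Python; the numbers are placeholders, not measurements from this study).

def throughput_mb_per_s(input_size_gb, execution_time_s):
    """Throughput = processed data volume divided by execution time."""
    return (input_size_gb * 1024) / execution_time_s

def speedup(hadoop_time_s, spark_time_s):
    """Speedup of Spark over Hadoop for the same workload and data size."""
    return hadoop_time_s / spark_time_s

# Placeholder example: a 600 GB job taking 5400 s on Hadoop and 2700 s on Spark.
print(throughput_mb_per_s(600, 5400))   # ~113.8 MB/s
print(speedup(5400, 2700))              # 2.0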
Conclusion
This article presented an empirical performance analysis between Hadoop and Spark based on a large-scale dataset. We executed the WordCount and TeraSort workloads with 18 different parameter values replacing the default set-up. To investigate the execution performance, we used a trial-and-error approach for tuning these parameters, performing a number of experiments on a nine-node cluster with datasets of up to 600 GB. Our experimental results confirm that the performance of both the Hadoop and Spark systems heavily depends on the input data size and on the right parameter selection and tuning. We found that Spark performs better than Hadoop, by two times with the WordCount workload and 14 times with the TeraSort workload, when the default parameters are tuned with new values. Furthermore, the throughput and speedup results show that Spark is more stable and faster than Hadoop because Spark processes data in memory instead of storing it on disk for the map and reduce functions. We also found that Spark performance degraded when the input data was larger.
As future work, we plan to add and investigate 15 HiBench workloads, consider more parameters under resource utilization, parallelization, and other aspects, including practical data sets. The main focus would be to analyze the job performance based on autotuning techniques for MapReduce and Spark when several parameter configurations replace the default values. | 7,481.6 | 2020-08-17T00:00:00.000 | [
"Computer Science"
] |
Determinants of the Adoption of Physical Soil Bund Conservation Structures in Adama District, Oromia Region, Ethiopia
Abstract: This study emphasizes the adoption of physical soil bund structures, including the major factors influencing the adoption process. The study is based on data collected from 120 households. Two analytical techniques, descriptive statistics and a logistic regression function, were employed in analyzing the data. The findings indicate that a host of factors, most of which are policy related, were responsible for poor technology adoption. In this regard, the adoption of technologies is predominantly influenced by economic variables such as land size, livestock holdings, and income of the households. Furthermore, institutional factors, such as access to credit, mass media, and extension services, as well as the educational level of the farmers, primarily influence the adoption decision. The results of the study confirm that past extension approaches have been biased against natural resource management. With the exception of physical soil bund structures, other components of soil conservation packages were found to be marginalized. Overall, survey results reveal that integrated natural resource oriented approaches were not adopted. Based on the findings, it is strongly recommended that policy makers and technical institutions readdress the policy-related issues to facilitate extension systems that will ensure environmentally sustainable development. Keywords: Adoption; Conservation; Physical Structures; Small Scale Farmers
Introduction
The agriculture sector in Ethiopia must nearly double its yields on existing farm land to meet food needs, which are increasing due to the high growth rate of the population. In Ethiopia, agriculture contributes a significant share of family food self-sufficiency and national food security, playing an important role in the development of the national economy. In this regard, the Ethiopian Economic Association contends that agriculture is the mainstay of the national economy, having accounted for about half (47%) of the Gross Domestic Product (GDP) in recent years, with more than 80% of the economically active rural population earning their livelihood from crop and livestock production (EEA, 2005). However, despite its importance for national development and food security, agricultural land productivity is declining as time progresses while the population is increasing at a fast growth rate. The main reason behind the low productivity of farm land is land degradation, which mainly takes the form of soil degradation of arable land. In Ethiopia in general, and in East Shewa Zone in particular, agricultural land has been under continuous cultivation for the past several decades and is physically and chemically degraded. The Central Rift Valley (the study area) is among the severely degraded areas, where the severity of the problem is aggravated by erosive agricultural practices.
In this regard, fundamental efforts toward agricultural and rural development necessitate extension interventions that promote improved agricultural technologies and appropriate natural resource management. To this end, the Ethiopian government has initiated a massive program of soil and water conservation with the support of international organizations. In addition to the efforts made through conservation-related projects, considerable attention has been given to the promotion of soil and water conservation practices through national extension package programs as a part of the agricultural development strategy. However, experience over the past years suggests that these efforts have not brought major impacts on the adoption of modern technologies (Wagayehu, 2003). On the other hand, despite widespread soil degradation and a low level of technology adoption, only limited efforts have been made to identify the nature of the adoption of physical soil bund conservation structures, and these have not been sufficient to draw firm conclusions. Therefore, this study examines the adoption of physical soil bund conservation structures and determines the influencing factors in the study community.
The Study Methodology
This study was conducted in Adama District in East Shewa Zone, Oromia Regional State, Ethiopia. The target population was farmers living in the peasant associations (PAs) of the district who have participated in the extension package program and soil conservation projects. The sampling procedure adopted was a stratified cross-sectional sampling method. The district was divided into three sub-groups based on agro-ecology, and then two PAs were selected at random from the peasant associations of each agro-ecology. A sampling frame was prepared from the list of farmers in a membership registration book. For data collection purposes, six PAs were included in the study group and 120 farmers (20 from each PA) were selected by a random sampling procedure. Thus, following the suggestion of Poate and Daplyn (1993), six PAs were selected by random sampling in the first stage and, in the second stage, twenty farmers were selected from each selected PA using a simple random sampling technique. In order to maximize the reliability of the results, relevant information was collected from primary and secondary data sources for analytical purposes, as well as for cross-checking of the information. The primary information was collected from sampled farmers by enumerators who administered a structured interview schedule. Finally, data collected from the different primary and secondary sources were summarized and entered into the Statistical Package for the Social Sciences (SPSS) computer program. Using SPSS sub-programs, descriptive statistical techniques were employed to characterize the data and inform the final conclusions and recommendations.
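The two-stage selection described above can be sketched programmatically. The snippet below is a minimal illustration only; the PA names, roster sizes and random seed are hypothetical, not the study's actual sampling frame.

```python
import random

random.seed(42)

# Hypothetical sampling frame: six PAs (two per agro-ecology), each with a
# membership roster; names and roster sizes are illustrative only.
frame = {
    "PA_highland_1": [f"farmer_{i}" for i in range(1, 181)],
    "PA_highland_2": [f"farmer_{i}" for i in range(1, 221)],
    "PA_midland_1": [f"farmer_{i}" for i in range(1, 161)],
    "PA_midland_2": [f"farmer_{i}" for i in range(1, 201)],
    "PA_lowland_1": [f"farmer_{i}" for i in range(1, 191)],
    "PA_lowland_2": [f"farmer_{i}" for i in range(1, 171)],
}

# Second stage: simple random sample of 20 farmers from each selected PA,
# giving the 120-household sample used in the study design.
sample = {pa: random.sample(roster, 20) for pa, roster in frame.items()}
print(sum(len(v) for v in sample.values()))  # 120
```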
In the meantime, the Logit statistical model was selected for further data analysis and interpretation. According to Karki and Bauer (2004), this is the most commonly used econometric model with a limited dependent variable and is used to examine the relationship between adoption and its determinants. Following Gujarati (1988) and Bohrnstedt and Knoke (1994), the following logistic distribution model was employed to determine the odds, and hence the probability, of the farmers' decision to adopt physical soil bund structures.
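Assuming the standard binary logit specification implied by the description above, equation 3 would take the following form, with P_i the probability that household i adopts the soil bund structure and X_ij the twelve predictors:

```latex
L_i = \ln\!\left(\frac{P_i}{1 - P_i}\right) = \beta_0 + \sum_{j=1}^{12} \beta_j X_{ij} + u_i
```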
Thus, the logit (L_i) multiple regression model (logistic distribution) containing 12 predictors (binary and continuous variables) was specified and regressed against the binary dependent variable of soil bund technology adoption. In order to estimate the probability of adoption of the physical soil bund conservation structure, the above model (equation 3) was employed, treating technology adoption as a dichotomous dependent variable and the independent variables as socio-economic factors that can influence the adoption process.
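A minimal sketch of how such a model can be estimated is given below. The data are simulated and the variable names are illustrative only; the study's actual twelve predictors and their coding are those listed in Table 14, not the ones used here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical layout: one row per household, a binary adoption outcome and a
# mix of continuous and dummy-coded predictors (illustrative names only).
rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.normal(45, 12, n),   # age of household head (years)
    rng.integers(0, 2, n),   # sex dummy
    rng.normal(2.5, 1.2, n), # land holding (ha)
    rng.integers(0, 2, n),   # access to mass media dummy
])

# Toy outcome, generated only so the example runs end to end.
true_logit = -1.0 + 0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] + 1.2 * X[:, 3]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Fit the binary logit and print coefficients on the log-odds scale.
result = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(result.summary())
```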
Demographic Variables and Physical Soil Bund Structures Adoption
The demographic variables considered in this study are age of the family head, educational level, family size, sex and marital status of the respondent. To determine the influence of the age characteristics of the sample households on the adoption of physical soil bund structures, a comparison was made between different age categories of the respondents and was tested using the frequency of each category. Results showed that about 78.3% of the respondents were within the age range of 30 to 60 years, which is considered to be the effective age group for food production, whereas 10 and 11.7% were below 30 and above 60 years, respectively (Table 1). These findings are consistent with other findings in Arsi Zone (Haji, 2002), which indicated that the proportions of young and older farmers are lower compared to other age categories; the same source contends that the low proportion of these age groups is due to lack of access to land resources. More specifically, about 68% of physical soil bund structure adopters were within the age range of 50 years and below, while the remaining 32% of the adopters were above the age of 50 years. These findings are consistent with the literature, which confirms that younger farmers are more likely to be adopters of technology. When a comparison is made between the different categories, the farmers within the age category of 30 to 40 years were found to be the highest adopters (29.2% of total respondents) of the physical soil bund structures, while only 5.8% of this group had not adopted the technology. By contrast, out of the total respondents, only 8.3% of the farmers within the age category below 30 years were adopters. The proportion of older farmers (above 60 years) in the whole sample was about 12% and, within this age category, 9.2% were adopters of soil bund structure technology (Table 1). These findings are also consistent with the findings from North Shewa Zone reported by Mulugeta (2000), which stated that, as age increases, the decision to invest in land conservation decreases.
Family members are considered to be all persons related to the particular farmer and dependent on the family farm land (Mulugeta, 2000). The survey results show that the average family size of the respondents was 6.54 persons which, according to the CSA (1995) cited by Mulugeta (2000), is above the national average of 5.17 persons per family and also greater than the regional average of 5.4 persons reported by the CACC (2003). When the adoption status of the respondents is considered, about 67% of soil bund technology adopters out of the total sample were respondents with a family size of 3 to 8, and this group constitutes about 82% when only the adopters category is considered (Table 2). The proportion of non-adopters in this category of family size (3 to 8 family members) was 68.2%, whereas the remaining 31.8% fell within the family size categories of below three and above eight when considering the non-adopters category only. Based on the survey information, the results related to the level of education of the respondents are summarized and presented in Table 3. According to these findings, out of the total sampled households, 80 respondents (about 67%) have formal education. Out of the total adopters, about 65% have formal education, whereas the remaining 35% are adopters with no formal education. The literature on soil conservation, for example Tesfaye (2003), confirms that better educated farmers show a more positive response to soil conservation technology adoption and make better decisions on soil bund retention on their farm land, which is consistent with these findings. More specifically, the numbers of those who have primary education were relatively high among both adopters and non-adopters, at about 36 and 45% of each category, respectively. On the other hand, survey results show that, out of 71 respondents with education beyond the adult-education level, nearly 82% adopted physical soil bund structures (Table 3), a result close to the findings of Mulugeta (2000), who reported that 89.7% of farmers who attended formal education were users of physical soil conservation structures. Moreover, Weir and Knight (2000) suggested that the more educated the farmers, the more rapidly adoption and diffusion would take place in that particular community. Table 4 provides the sex composition of the respondents in relation to farmers' adoption of physical soil bund structures in the sampled farmers.
Based on the survey results, it was evident that, out of the total of 120 respondents, 78% were male-headed households, while about 22% were female-headed households. The proportion of households headed by males is substantially higher than that of females, reflecting the fact that males in most Ethiopian societies assume the major roles in agricultural activities and the head is considered the main breadwinner in the household as well as the one who bears responsibility for the household. In general, the findings of the survey indicate that there is a strong relationship between technology adoption and the sex of the respective farmer, and this result is consistent with the results reported by Yisehak (2002), who indicated a significant relationship between the sex of the respondents and use of improved seeds in the study community. Regarding the adoption rate, out of a total of 94 male respondents, about 83% were adopters of physical soil bund structures, whereas the proportion of adopters within the female category was nearly 77% (Table 4). In addition, about 80% of the adopters were male and 20% were female in the adopters' category of respondents and, in the same manner, the proportion of male respondents was higher than that of female respondents in the category of non-adopters. Furthermore, the analysis of survey data shows that, among the total respondents, the majority (93.3%) were married, whereas the remaining 6.7% were single household heads, due to either not being married, or being widowed or divorced; among total non-adopters, the largest proportion (about 82%) were married and only the remaining 18% of the non-adopters were single farmers. Concerning adoption of physical soil bund practices, nearly 96% in the adopter category were married males and the remaining 4% were married female respondents.
Adoption Status and Comparison of Major Soil Conservation Practices
The overall analysis and comparison of the many introduced soil conservation practices were conducted and are presented in Table 5 in order to determine the status of adoption and make relative comparisons between different practices with respect to farmers' adoption of each practice; this helps in drawing conclusions concerning the attention and support given to those particular practices and in formulating recommendations, which is the ultimate goal of the study. The adoption rate of nearly 82% for physical soil bund structures is encouraging, even setting aside the resources consumed in promoting these practices during the past Food for Work Program (WFP) implementation years. In contrast to the adoption of soil bunds, the adoption rates of conservation tillage (0.8%), fallowing (2.5%) and use of crop residue (7.5%) were discouraging; they rank first, second and third from the last, respectively. Another discouraging aspect of these practices is that 80.8%, 76.7 and 74.2% of the respondents are not aware of conservation tillage, fallowing and use of crop residue, respectively (Table 5). In these respects, the findings show that past extension approaches lacked appropriate packaging and integration of agricultural and natural resource oriented technologies to sustain the land resource. From an agricultural point of view, land is an indispensable factor for crop and livestock production, and the proper utilization of land under its different components would contribute to the development of national agricultural production (CACC, 2003). However, the results of this study indicate that the attempt to promote proper land utilization to sustain agricultural land productivity appears minimal in the study community.
Socio-Economic Variables Associated with Soil Bund Adoption
Before moving on to the detailed analysis of the effects of farmer and farm characteristics on technology adoption, the usual procedures to test for differences in means and for association between variables were conducted using independent t-test and Chi-square test techniques, respectively. The results of these two test statistics are presented in Table 6 for continuous variables and in Table 7 for categorical variables. As shown in Table 6, except for land holding, all selected variables were found to be statistically significant, indicating that physical soil conservation technology adoption decisions had significant associations with the respective variables. In this respect, characteristics of the household, such as age, education level attained by the farmer and family size of the respondent, appeared highly significant (P < 0.01). Moreover, the remaining variables, livestock holding and yearly income of the household, were also significant (P < 0.05), confirming the dependence of physical soil conservation technology adoption on these two variables. Age of the household head is negatively associated with adoption of physical soil bund structures (Table 6), which is similar to other study findings, while the negative association with landholding was unexpected and uncommon in most previous empirical studies. Feder et al. (1982) suggested relatively similar results to these findings, stating that farm size is one of the factors on which empirical adoption studies focus, but that farm size can have different effects on the rate of adoption depending on the characteristics of the technologies and the institutional setting of the service delivery system.
On the other hand, the relationships between adoption of physical soil bund structures and other variables, such as education level of the household head, family size, age, livestock holding and yearly income of the household, were found, as expected, to be positive. Meanwhile, the influence of landholding on adoption of physical soil conservation practices appeared to be insignificant in this particular study. According to Wegayehu (2006), the age of the household head can influence the availability of labor, which is one of the most important factors of production for farmers in rural areas. This, in turn, determines the decision of households as to which soil conservation type to adopt on their farm land, and our results are consistent with his findings. In the meantime, it is clear from the literature that many categorical variables affect the adoption of soil conservation technologies in the small-scale farming systems of Ethiopia in general and in the study community in particular. Table 7 shows the detailed investigation of these categorical variables in this study. Regarding the effect of sex on technology adoption, Wegayehu (2006) suggested that the sex of the household head determines access to soil conservation information provided by extension agents and by soil conservation related projects operating in the area. The marital status and social participation (responsibility in PAs) would also be expected to influence the adoption of any particular technology. The results of this survey indicate a strong association between the social characteristics of the farmers and soil conservation technology adoption. The sex of the respondent, with a Chi-square value of 4.99, and the marital status of the household head, with a Chi-square value of 57.41, were statistically significant at P < 0.05 and P < 0.01, respectively. In addition, the main farming system of the respondent also formed part of this study and was statistically significant, with a Chi-square value of 13.67, indicating a strong association between soil conservation technology adoption and farming system. Among the many institutional variables, credit facility, with a Chi-square value of 25.76, and source of extension information, with a Chi-square value of 17.65, were statistically significant (P < 0.01) (Table 7).
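As an illustration of the two test procedures used here, the sketch below runs an independent t-test on a simulated continuous variable and a Chi-square test on a 2 x 2 cross-tabulation whose counts approximate those implied by Table 4; all numbers are illustrative, not the study's raw data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical continuous variable (e.g. yearly income, Birr) split by adoption status.
income_adopters = rng.normal(3300, 2800, 98)
income_nonadopters = rng.normal(2100, 2400, 22)
t_stat, p_t = stats.ttest_ind(income_adopters, income_nonadopters, equal_var=False)

# Approximate 2x2 cross-tabulation of sex against adoption (adopters, non-adopters).
table = np.array([[78, 16],   # male-headed households
                  [20, 6]])   # female-headed households
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f}, p = {p_t:.3f}")
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")
```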
Economic Variables and Physical Soil Bund Structure Adoption
The economic variables include the estimated yearly income, land holding and main occupation of the farmers. Concerning family yearly income, the results show that the minimum income reported by the respondents was 300 Birr and the highest was 23,260 Birr, with an average household income of 3,038.5 Birr, a standard deviation of 2,863.0 Birr and a coefficient of variation (CV) of about 94%. Furthermore, the results indicate that nearly half (45.8%) of the respondents earned a yearly income in the range of 1,000 to 3,000 Birr. Those in the yearly income categories of less than 1,000 Birr and greater than 7,000 Birr constitute nearly 15.8 and 7.0%, respectively (Table 8). In general, based on these results, it is possible to predict that the better the yearly income, the more likely such farmers are to adopt the introduced conservation technology to alleviate the land degradation process.
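The coefficient of variation quoted above follows directly from the reported mean and standard deviation:

```python
# CV = standard deviation / mean, using the summary statistics reported above.
mean_income_birr = 3038.5
sd_income_birr = 2863.0
cv = sd_income_birr / mean_income_birr
print(f"CV = {cv:.0%}")  # roughly 94%
```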
The discussion in this section is based on the results of household farm size summarized in Table 9, in which the overall average landholding of the households was found to be 2.54 ha with a corresponding standard deviation of 1.2 ha, giving a coefficient of variation of about 47%. The findings indicate that the average farm land holding of the study PAs is greater than the national average of one hectare, as reported by EEA (2000) cited in Haji (2002), as well as the regional average of 1.36 ha per household. In the study group of the district, a total of 64 respondents (53.4%) reported farm land holdings in the range of 0.5 to 2.5 ha, which is close to the 52.1% reported by the CACC (2003), and 39.2% of the respondents were land holders within the range of 2.5 to 4.0 ha (Table 9). The remaining 7.5% comprises holders of less than half a hectare and greater than four hectares.
With respect to adoption of soil conservation structures, a total of 41 farmers (34.2%) in the land holding category of 2.5 to 4.0 ha were adopters of the introduced physical soil bund conservation practices in the study areas, with a corresponding 5.0% of non-adopters. Furthermore, within the group with farm land size in the range of 0.5 to 2.5 ha, adopters and non-adopters constituted 43.2 and 10.0% of the total respondents, respectively (Table 9). The results suggest that optimum land size ownership may be a major factor in promoting technology adoption in the small-scale farming systems of Ethiopia in general and Adama District in particular. Moreover, the investigation of the different occupational opportunities of farmers considered in the study revealed that crop farming and mixed crop-livestock farming are the two major occupations (Table 7), while livestock farming (pastoralism) is not commonly practiced in this particular farming community. In this respect, the results indicate that about 72% of the total respondents are engaged in crop farming, of whom 56.7% were adopters of physical soil bund structures, whereas the rest, 15% of the sample size, were not adopters. On the other hand, out of a total of 120 respondents, 34 (28.3%) were engaged in crop-livestock mixed farming and 30 farmers, 88.2% of this group or 25.8% of the total sample size (Table 7), were adopters of physical soil bund structures.
Farm Land Related Variables and Adoption of Soil Bund Structures
In this study, farm land related variables include the physical conditions of particular farms, farm land distance from household residence and public facilities. Data of the respondents' farm land condition (erosion status) and farm land distance from the residence of the respondents are presented in Tables 10 and 11, respectively, and farm land distance from other public support providing facilities are also discussed in this section. Basically, natural farm land characteristics and the erosive features of the soil represent major factors in dictating human intervention in small scale farming systems. With respect to biophysical condition of the farm land, the overwhelming majority of respondents (95%) reported very severe and severe soil erosion problems, including fertility decline and water logging, whereas only 5% of the total sample had only minor or no soil degradation problems on their farm lands (Table 10). Of the group with very severe and severe soil erosion problems, about 85% adopted physical soil bund conservation practices. On the contrary, only 16.7% of the group with minor or no soil degradation problem adopted the physical soil bund conservation structures, while the remaining 83.3% reported that they had no relevant reason to adopt physical soil bund structures.
Moreover, with respect to farm land distance, out of the total respondents, 63 farmers (53.4%) whose farm land is located less than 2 km from their home were found to be adopters of physical soil bund structures (Table 11). In addition, farmers constituting 24.6% of the total respondents in the 2 to 4 km distance category adopted the introduced technology. Five respondents (4.2%) in the category with farm land located at more than 4 km distance were adopters of soil bunds. Similarly, about 69% of the total respondents whose farm land is within the near and medium distance (below 4 km) of development centers were adopters of physical soil conservation structures whereas, on the contrary, nearly 18% of the respondents in the same distance category were non-adopters of conservation structures. With regard to road infrastructure, the results of this study indicate that the 42% of the total sample who adopted the physical soil bund structures despite their farm land being more than 6 km from a road facility is relatively higher than the 39.5% of the total respondents whose farm land is within 4 km of primary roads and who were adopters of the soil bunds. As argued by the EEA (2006), these findings also reveal that underdevelopment and poor infrastructure in the country in general, and in the study area in particular, raise doubts about the economic feasibility of technology adoption.
Institutional Support Related Variables and Adoption
This section deals with the influence of institutional support related variables, mainly extension services, access to mass media and farmers' experience with physical soil conservation related projects, including the level of farmers' participation in the decision-making process, on adoption of the conservation structures. With regard to extension service delivery, 36.4% of the respondents confirmed that they were visited 1 to 2 times (days) per month by Development Agents (DAs), followed by 34.8% who were visited 3 to 4 days per month. On the other hand, 6% of the total respondents reported no visit by DAs to their home or farm land. The investigation of DAs' visits to farmers shows that the farmers who were visited 3 to 4 days per month numbered 41, of whom 97.6% were adopters, implying that the more visits received from development agents, the more likely farmers were to adopt physical soil bund structures to reduce the soil degradation process on their farm lands. Out of the total respondents, only a few (2.5%) of the non-visited farmers were adopters of the promoted soil bund technology. In the extension information delivery system, mass media are the most common extension channels for reaching even the remotest areas and the majority of the rural population in the country. The survey results reveal that 73.3% of the respondents had access to mass media (radio, newspapers and television) and were helped by it to adopt physical soil conservation practices, while the remaining 26.7% had no access to any kind of mass media in the past three to five years. Furthermore, about 82% of the farmers who had access to radio were adopters of physical soil bund structures to sustain agricultural land productivity.
In this study, farmers' experience of soil and water conservation related projects refers to any form of assistance rendered to the farmers in the area of soil conservation with the ultimate goal of promoting adoption of soil conservation technology by relieving resource limitations. Tables 12 and 13 present summaries of survey data on farmers' experience in conservation related projects and the level of farmers' participation in planning and evaluation processes, respectively. As indicated in Table 12, the majority (98.3%) of the total respondents were involved in different soil conservation related projects for 5 to 20 years and, of this group, about 83% were adopters of physical soil and water conservation (soil bund) structures. Concerning participation in planning and evaluation processes of conservation projects, about 66% of the responding farmers rated their participation as poor or reported no participation at all in the process of the development projects (Table 13). However, 76.6% of this particular group were adopters of physical soil bund structures, which might be due to the heavy promotion or publicity by the projects regardless of participation. The remaining 34.2% reported that their participation was very good to satisfactory, and the adoption rate within this group, about 93%, is a good indication of the influence of participation on technology adoption. Furthermore, the survey on the level of farmers' participation went further and included assessment of their participation in problem identification, priority setting and the decision-making process. In this regard, about 24, 39 and 37% of the relevant respondents confirmed that they had poor participation in problem identification, priority setting and decision making, respectively. The remaining proportion rated their participation in these project processes as very good, good or satisfactory. In general, the results indicate a positive correlation between farmers' participation in the project process and technology adoption. In summary, the findings of the survey indicate that, in past extension interventions, farmers' participation at different stages of development projects, including soil conservation related projects, was a neglected area.
Logistic Regression Summary and Discussion
In this particular study, to select a suitable set of variables from the total independent variables, different techniques and tools were employed to establish a relevant regression model describing the relationship between the dependent and independent variables. The dependent variable, adoption of physical soil bund structures, was taken as a categorical (dichotomous) variable with binary representation, while the independent variables were a mixture of continuous and categorical variables, in which the categorical variables were coded in a binary manner as indicated in Table 14. (In Table 14, ***, ** and * denote significance at P < 0.01, P < 0.05 and P < 0.10, respectively; ns = non-significant at P < 0.10.)
Regarding the fit of the selected regression model, the model Chi-square (χ²) of 35.39 was statistically significant, indicating that including the selected explanatory variables significantly reduced the log likelihood of the model compared with the intercept-only model. The classification table correctly predicted 95.7% of the adopters and 50% of the non-adopters, and the model's overall correct prediction rate was 87.5%. From the regression analysis, access to mass media was the leading variable influencing the odds of technology adoption. The observed odds ratio of 15.25 for farmers' access to mass media (Table 14) indicates that the odds of adoption are multiplied by this factor for each one point increase in the respondent's access to any kind of mass media. On the other hand, the odds ratio for land renting was the smallest of all and in the opposite direction, indicating that a one point increase on the land renting scale was associated with the odds of adoption being multiplied by a factor of about 0.25, i.e. a shift toward non-adoption. For sex (a dummy variable), the odds ratio of 6.32 means that the odds of technology adoption increase by this factor as the binary dummy variable changes from zero to one.
Furthermore, seven explanatory variables (education level, source of information, farm land distance from the development center, land size, farmers' experience in conservation related projects, livestock holding and farmer training) contribute odds ratios in the model between one and two, indicating a positive association between these predictors and technology adoption. On the other hand, three of the explanatory variables, age, labor shortage and experience of land renting, have odds ratios of less than one, indicating a negative association between these explanatory variables and the binary adoption outcome. In general, eleven of the explanatory variables showed the associations predicted; only farm land distance from the development center moved in the opposite direction to the hypothesized negative association with technology adoption. Overall, out of the twelve selected explanatory variables, 50% were significant at different probability levels. In this regard, the age of respondents was statistically highly significant (P < 0.01), the sex of the household head was statistically significant (P < 0.05), and the remaining four explanatory predictors (farm land distance, education, access to mass media and land renting) were statistically significant at P < 0.1 (Table 14). The model results confirm that educated farmers are more likely to adopt physical soil bund structures compared to those who did not attain formal education, owing to the fact that educated farmers have more access to information. This indicates that farmers with formal education are likely to be aware of the severity of soil degradation, which motivates them to seek appropriate innovations to mitigate the degradation process.
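The link between the logit coefficients and the odds ratios discussed above is simply OR = exp(beta). The short sketch below reproduces that relationship for three of the ratios quoted from Table 14; the back-calculated coefficients are implied values, not figures taken from the original output.

```python
import math

# Odds ratios quoted in the text for three predictors (Table 14).
odds_ratios = {
    "access to mass media": 15.25,
    "sex of household head": 6.32,
    "experience of land renting": 0.25,
}

for name, oratio in odds_ratios.items():
    beta = math.log(oratio)  # implied coefficient on the log-odds scale
    direction = "raises" if oratio > 1 else "lowers"
    print(f"{name}: beta = {beta:+.3f}; a one-unit increase {direction} "
          f"the odds of adoption by a factor of {oratio}")
```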
Conclusion
The survey results indicated that a majority of respondents perceived soil erosion and soil fertility decline as the major threats to their farm land sustainability, since the problem of soil degradation is very serious on crop land. However, despite the widely prevailing problems of farm land degradation, adoption of most of the biological and physical soil conservation technologies appeared minimal. Basically, practices such as crop rotation, intercropping, fallowing, conservation tillage and crop residue management are essential components of soil conservation packages to enhance soil fertility of farm lands, but the adoption rate of those practices was found to be poor compared with soil bund structure indicating lack of appropriate packaging of the technologies in the farming system.
Due to lack of emphasis on extension service delivery systems in the past extension package program implementation, almost all soil conservation practices have been marginalized throughout the past many years, leading to non-sustainable farm land productivity. According to the findings of this study, participation of the farmers in extension package programs has improved the use of agricultural technologies among the farming community in previous years, but integration of agricultural technologies with environmentally-sound technologies and management is lower than the theoretical recommendations, leading to natural resources degradation and threats to environmental sustainability.
The study further revealed that almost all predicted socio-economic factors appeared to influence the adoption of soil bund structures in the small farming communities. In this regard, participation of farmers in soil conservation programs and adoption of introduced technologies are predominantly influenced by economic variables such as land size, livestock holdings and yearly income of the households. As confirmed by the findings of the study, farmers with greater resources are more likely to participate in the program and then adopt the introduced technologies compared to resource-poor farmers. Furthermore, institutional factors, which are mostly concerned with access to credit, mass media and extension services, primarily affect the physical soil bund structures adoption. Moreover, educational level of the farmers was also observed to facilitate the technology promotion process and the adoption decisions of the farmers. In this regard, farmers with a higher educational level were found to be greater technology adopters compared to non-educated farmers.
The survey findings further revealed that the participation of farmers in the decision-making process of development projects was poor, which is contrary to the principles stated in the national strategy. In reality, most of the approaches lacked elements of participation and did not encourage the farmers' active participation in the decision-making process. Overall, based on the evidence of this and other empirical studies, many policy-related issues need to be considered to promote economically and environmentally sustainable development in the small-scale farming system. Generally, according to the study results, most of the major soil conservation practices that are important for packaging with physical soil bund conservation structures were found to be neglected. Hence, based on the findings of the present study, it is recommended that policy makers and technical departments pay particular attention to those practices that have been marginalized in past implementation years and follow an integrated intervention approach in order to mitigate soil resource degradation. Furthermore, appropriate policies and emphasis should be in place to facilitate farmers' access to education, mass media and institutional support, which ultimately influence technology adoption in the small-scale farming community.
Acknowledgement
The authors of this article are grateful to the Sasakawa Africa Fund for Extension Education (SAFE) project for financially supporting this study. In addition, the contributions of all stakeholders (especially Adama district farmers and extension field staff) are greatly appreciated and acknowledged by the authors.
"Environmental Science",
"Economics"
] |
The risk linked to ionizing radiation: an alternative epidemiologic approach.
Radioprotection norms have been based on risk models that have evolved over time. These models show relationships between exposure and observed effects. There is a high level of uncertainty regarding lower doses. Recommendations have been based on the conservative hypothesis of a linear relationship without threshold value. This relationship is still debated, and the diverse observations do not allow any definitive conclusion. Available data are contradictory, and various interpretations can be made. Here we review an alternative approach for defining causation and reconciling apparently contradictory conclusions. This alternative epidemiologic approach is based on causal groups: Each component of a causal group is necessary but not sufficient for causality. Many groups may be involved in causality. Thus, ionizing radiation may be a component of one or several causal groups. This formalization reconciles heterogeneous observations but implies searching for the interactions between components, mostly between critical components of a causal profile, and, for instance, the reasons why specific human groups would not show any effect despite exposure, when an effect would be expected.
Setting limits for exposure doses to prevent adverse outcomes in humans is an old debate in radioprotection (1). Various threshold values have been proposed. Historically, the first permissible dose proposed was linked to deterministic effects of ionizing radiation. These values were around 500 mSv (50 rems). The threshold doses at which nonstochastic effects appear were based on these figures. They have not been modified to date and represent the dose limitation levels for isolated organs.
Given recommendations from the National Council on Radiation Protection (NCRP), in 1956 the International Commission on Radiological Protection (ICRP) adopted the proposed dose limits, expressed as the following simple equation for occupational exposure and assuming that people under 18 are not occupationally exposed: the limit on occupational dose accumulated by age N is D = 5 (N - 18), where N is the age in years and D is expressed in rems (2).
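As a small worked example of the formula just quoted:

```python
def cumulative_dose_limit_rem(age_years: float) -> float:
    """Maximum permissible accumulated occupational dose D = 5 (N - 18), in rem,
    assuming no occupational exposure before age 18."""
    return max(0.0, 5.0 * (age_years - 18))

for age in (18, 30, 45, 60):
    print(age, cumulative_dose_limit_rem(age))  # 0, 60, 135, 210 rem
```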
The risk level for the population was proposed to be one-tenth of the occupational risk, computed for 1 year (i.e., 0.5 rems/year) (2). It is therefore not only stochastic effects that will dominate radioprotection, but, more decisively, it is the cumulative aspect of radiation risk, especially for cancers. The cumulative risk model allows one to take into account all radiations, even the very small ones for individuals and for a group. The consequences of this new concept had an enormous impact for the nuclear energy production industry, for example, which now faces great difficulties as a result of this risk concept (3).
With the ICRP 26 (4), the stochastic (i.e., cancer) and gonadic risks became the basis of radioprotection. Indeed, as the first data on cancers following the bombings of Nagasaki and Hiroshima became available, the ICRP (2) proposed the computation of the total body equivalent dose on the basis of a total cancer risk per organ, and a gonadic risk, the sum of weighting factors being equal to 1.
This computation supposed that a) there is no threshold level of risk at the level of organs, b) there is a hierarchy of risk between organs, c) there are two reference effects, stochastic effects of fatal cancers and a complementary genetic risk, and d) there is a simple relation between risk and ionizing radiation based on an "absolute risk model." A known quantity of exposure induces a known quantity of effect, so a change in the baseline incidence rate in nonexposed persons does not affect the magnitude of increase in risk in exposed subjects. But this reasoning has been challenged over time.
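A minimal numerical sketch of the weighted-sum idea behind the total body equivalent dose is given below; the organ doses and weighting factors are placeholders chosen only so that the weights sum to one, not the ICRP's actual tissue weighting factors.

```python
# Illustrative weighted-sum computation of a total body equivalent dose:
# hypothetical organ equivalent doses (mSv) and weighting factors summing to 1.
organ_doses_msv = {"gonads": 2.0, "lung": 1.0, "red marrow": 0.5, "remainder": 0.8}
weights = {"gonads": 0.25, "lung": 0.12, "red marrow": 0.12, "remainder": 0.51}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to one
effective_dose = sum(weights[t] * organ_doses_msv[t] for t in organ_doses_msv)
print(f"{effective_dose:.2f} mSv")
```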
Analysis of the Normative Approach
Normative recommendations. Except for the induction of leukemia and bone cancers, the epidemiologic model to consider is a relative risk model (5). A known quantity of exposure modifies the outcome by a certain multiplicative effect. A change in the baseline incidence thus affects the level of outcome difference.
This model has several consequences: On one hand, assuming that the cancer rates increase over time, this new model implies an increased risk difference. On the other hand, the relative risk model means that ionizing radiation is a cofactor of cancers, which was not necessarily the case for an absolute risk model. Ionizing radiation may be considered a component, which modifies proportionally a general risk, whereas it is the specific appearance rate of the general risk that determines the noxious action of radiations. In other words, it is the conjunction of several elements that explains the effect of ionizing radiation, among which is the background rate of cancer for the corresponding organ.
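The practical difference between the two models can be made concrete with illustrative numbers: under an absolute (additive) model the excess attributable to exposure is independent of the baseline rate, whereas under a relative (multiplicative) model the same exposure produces a larger excess wherever the baseline is higher. All values below are hypothetical.

```python
def absolute_model(baseline, excess_per_unit, exposure):
    # Additive: exposure adds a fixed number of cases, whatever the baseline
    return baseline + excess_per_unit * exposure

def relative_model(baseline, rr_increment_per_unit, exposure):
    # Multiplicative: exposure scales the baseline rate
    return baseline * (1 + rr_increment_per_unit * exposure)

exposure = 10  # arbitrary exposure units
for baseline in (100, 200):  # cases per 100,000 person-years (hypothetical)
    abs_rate = absolute_model(baseline, excess_per_unit=0.5, exposure=exposure)
    rel_rate = relative_model(baseline, rr_increment_per_unit=0.005, exposure=exposure)
    print(f"baseline {baseline}: additive excess = {abs_rate - baseline}, "
          f"multiplicative excess = {rel_rate - baseline}")
```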
Fundamental research. Carcinogenesis is a complex phenomenon that includes multiple steps, some of which are sequential (6). This is consistent with the cofactors necessary for carcinogenesis.
Mutations necessary for carcinogenesis may be linked not only to the action of an external mutagenic factor, but also to increasing instability of the nucleus (e.g., mutating phenotype due to the loss of p53) (7). The indirect action of ionizing radiation is currently under intense scientific scrutiny. If the action of ionizing radiation is indirect, this will modify the relationship between exposure and effect, reinforcing the cofactorial aspect.
The initial lesion consists of DNA double-strand breaks, stabilized quite rapidly within 2 hr. Why does such a lesion induce chromosome aberrations a few generations later (8)? The process is not well understood. The concept of an indirect action of ionizing radiation on carcinogenesis is therefore reinforced. Kennedy and Little (9) showed more than a decade ago the dissociation between causes and effects in ionizing radiation.
Cancer transformation cannot be dissociated from apoptosis of the progeny of irradiated cells (10): This occurs belatedly, around the 12th cellular generation, following a mechanism that is not yet understood that includes antiapoptosis processes that stop progressively according to a biologic apoptosis clock. Understanding the mechanisms of apoptosis is necessary to understanding cancer transformation (11).
Suprasensitivity to low doses underlies the hypothesis of supralinearity between exposure and effects at the same dose, and reflects the effect of cellular adaptability to recurrent radiations (12)(13)(14)(15). In addition, this adaptability combines with weaknesses in the defense mechanisms of the nucleus against other genotoxic elements (16). This is the price organisms pay for adaptation.
Research also focuses on nuclear instability before cancer transformation. We know about instability after the loss of control of the p53 system. The idea of nuclear instability goes beyond this and proposes an early generalization of this instability: Instability would be the cause, not the effect, of genetic modifications necessary to cellular transformation and would be expressed progressively through increasing cellular division (17). But it may also be expressed itself along with cellular aging (18).
Another emerging notion is that of indirect genetic effects on cellular cytoplasm, or even on the cytoplasm of neighbor cells. The genotoxic target of ionizing radiation is therefore larger than the genome itself (19).
Epidemiology. Stochastic effects are observed during exposure at higher doses of ionizing radiation than those encountered in a normal environment. Indeed, from single or cumulated doses above 250 mSv, observations are rather consistent. In contrast, observations of effects from lower radiation levels are surprisingly heterogeneous and contradictory.
An interesting example is the disparity of findings within a Canadian study of the risk of breast cancer among women irradiated by fluoroscopies during the treatment of pulmonary tuberculosis (20). The results of all provinces are consistent among each other, except Nova Scotia, which shows the most important risk despite similar quantities of irradiation (relative risks per Gray are, respectively, 1.20 and 3.03). The difference between Nova Scotia and other provinces has not been explained. One hypothesis might be the difference due to a higher dose delivery rate during examinations in Nova Scotia.
Debates exist about threshold values and the relationship between exposure and stochastic effects (21). One point of view is that the stochastic risk is likely to have a threshold level, which would be between 100 and 250 mSv (22).
Another is that the stochastic risk has no threshold level. The relationship may be supralinear (increased effects at low doses); it may be linear (23); or it may be quadratic (minimized effects at low doses) (24).
Current recommendations propose standards that account for a linear conservative risk without threshold value, together with a stochastic model computed on a quadratic basis using a dose/dose-rate effectiveness factor (DDREF) of 2 below 0.2 Gy (i.e., a proportional decrease from the linear extrapolation without threshold value computed at high doses) (25). But the relationship between very low exposure and effect for residential radon is controversial (26). In this respect, the controversy between the ecologic study of Cohen (27) and case-control studies is interesting. Cohen (27) explains the discrepancy between these studies as proof of the absence of linearity between exposure and effect in very low radon exposures. On the other hand, Lubin and Boice (28) think that the discrepancy is caused by the "epidemiologic fallacy" (29). Controversies over the weaknesses of ecologic studies have been discussed by many authors (30,31). Other cofactors may explain the discrepancy. For example, smoking is an important cofactor (32). The effect is neither additive nor multiplicative, but somewhere in between.
Smoking reduces the relative risk of radon per unit exposure compared with nonsmokers [relative risk (RR) nonsmokers = 1 + 0.0103 per working-level month (WLM); RR smokers = 1 + 0.0034 per WLM]. Other explanations have been proposed, such as latitude or rural/urban status (33,34).
Rothman (35) defines the cause of a disease "as an event, condition, or characteristic that plays an essential role in producing an occurrence of the disease" (p. 11). He also argues that the appearance of a disease is linked to a process involving several components, each of them being necessary but not sufficient to cause disease. When we examine the complex, sequential process of cancer, this hypothesis is obviously more plausible than a unique causality determining a unique effect. Causality therefore seems to be multiple. In each sufficient causal group, there would be a number of necessary but not sufficient components, except in the case of a causal group that comprises a single component (Figure 1).
Discussion
If one component is missing, the causality disappears. In this approach, the less frequent component will be the limiting factor of causality. Given relative scarcity of this component compared to others, it will be a determining factor. From this point of view, the cumulative importance attributed to all the necessary factors of causality can obviously exceed 100%. The paradigm of this approach lies in the fact that the component that appears to be the most noxious and the most determining is actually the least important because of its scarcity in the causal group.
Rothman (35) proposes that "each constellation of component causes is minimally sufficient (i.e., there are no redundant or extraneous component causes) to produce the disease. Component causes may play a role in one, two or three causal mechanisms. … Thus, the apparent strength of a cause is determined by the relative prevalence of component causes. A rare factor becomes a strong cause if its complementary causes are common" (p. 12). The rare factor is a critical component. In contrast, a frequent factor becomes of minimal effect if its complementary causes are infrequent.
Difficulties lie in the exposure and risk assessment of all contributing toxic substances of environmental concern (36). When the Rothman model is applied to the risk induced by ionizing radiation, some inconsistencies or controversies in the observations may be better understood. The same may be said of the application of fundamental research. Difficulties may be resolved by seeking other components of different causal groups in which ionizing radiation is one of the factors. Risk assessment should take into account all the factors, provided that the importance of each individual factor cannot be assumed. In other words, the problems will not be resolved completely by considering only the ionizing radiation.
Examples
In a case-control study, in which Stewart et al. (37) observed an increase in leukemia in children exposed in utero to simple X rays, a controversy arose immediately because the dose required for risk of leukemia was decreased by at least an order of magnitude (10 times less, dose around 10 mSv) (38). Surprising in this debate was the position of MacMahon (39), who found an effect of the same size in a cohort study of 700,000 children, but refused to accept causality, arguing first for the presence of an unknown confounding factor (40). MacMahon (41) stated that three observations seemed incompatible with the results of Stewart: first, the observed cancer risks of the children irradiated in utero at the time of the atomic bombs; second, a significant difference between the absolute risk coefficients for infants exposed prenatally and those exposed after birth; and third, the finding of an equivalent increase in solid cancers, which seemed to him uncharacteristic. In fact, differences seem to exist between childhood and adult cancer susceptibility, cancer latency, types of tumors, and the period of cancer susceptibility during pregnancy. There is also limited information about cancer risk for children irradiated early in postnatal life (42). Doll and Wakeford (43) and Boice and Miller (44) studied the elements in favor of causality and related uncertainties in the case of leukemia in children exposed in utero, with contrary conclusions about a causal epidemiologic association between prenatal irradiation and childhood leukemia and cancer. The finding in the nuclear industry, at Oak Ridge National Laboratory (45), of increased mortality due to leukemia among the workers was strongly debated (46). The results at the Oak Ridge National Laboratory led to investigations of an unknown confounding factor (47). A better appraisal of the healthy worker effect and the observation of a higher susceptibility to radiation with age over 45 have been proposed to explain certain results of the Oak Ridge study (38). Studies about occupational radiation exposure in Canada show similar results (48) (with an estimated excess RR of 3% per 10 mSv), whereas other observations failed to show any harmful effect.
Gardner et al.'s (49) hypothesis that childhood leukemia is attributable to the father's low occupational exposure to radiation was the center of an intense controversy (42). Geographic disparities (50) together with confounding factors (51) were proposed as explanations for the cluster observed.
The cluster found in Berlin showing an increase in the rate of Down syndrome in children of mothers exposed to the Chernobyl nuclear accident was explained by an iodine deficiency in these populations (52).
The study of susceptible and fragile populations is essential in this respect because the genetic aspect is obviously one of the components of the causal groups. Persons heterozygous for the ataxia telangiectasia gene (53), or carrying a gene for Li-Fraumeni syndrome (54), are examples of susceptible populations. Lavin et al. (55) found more sensitive subgroups of breast cancer patients.
Analyses of studies of the survivors of nuclear bombings are not always consistent (56)(57)(58). It is important to note that the controversy occurs because of heterogeneity of effects in a possibly biased cohort compared to exceptionally healthy persons. Stewart (59) found a selection bias in the Life Span Study Cohort. A susceptibility difference to radiation cancer effects in the exposed population may be explained by age (cutoff at ± 50 years of age) and by amounts of irradiation (threshold for excessive marrow damage). Thus, atomic bomb survivors may not be representative of populations exposed to radiation by other means (38): "As a result of these biases, atomic bomb data are not a reliable source of cancer risk coefficients, but they can still be used to study factors with immune system associations" (p. 96).
New concepts regarding the oxidative cellular stress induced by irradiation, which produces free radicals, involve oxidative stress in complex mechanisms, which is not only linked to the genetic profile, but is also common to other toxicologic mechanisms. In the same way, the production of transmissible cell-to-cell effects, between hit and nonhit cells (bystander effects), and a transmissible effect of an instability phenotype reinforce the theory of necessary but not sufficient nor unique components of a causal group and the necessary synergy with other toxic, physiologic, or genetic components (19).
Conclusion
The hypothesis of a causal group with necessary but not sufficient components implies that the hypothesis of a risk without threshold value should be maintained until all the components of the causal groups are defined. Because, in principle, it will never be possible to know all the causal groups and their composition, the hypothesis of no threshold value should be maintained. It is, moreover, possible that among all the causal groups including the component of ionizing radiation, there is at least one group that includes only ionizing radiation.
The supralinear, linear, quadratic, or other relationship is linked not necessarily to the irradiation itself, but to the combination of various components. If the existence of these components is admitted, one must allow reasonable pessimistic hypotheses because uncertainties necessarily remain in the composition of the causal group.
Eliminating one of the components will lead to the elimination of causal group(s) where this component is present, and therefore a decrease of the global risk associated with the other components of these causal groups may occur.
Research must focus on components other than ionizing radiation, because these factors might be equally important (not restrictive of the effect). It may be possible to operate on these factors to make them become restrictive of the effect. This is reflected by environmental cancer prevention.
The hypothesis of causal groups implies that we must investigate not only the reason for an effect linked to exposure to ionizing radiation, but also the reason for a lack of effect after exposure. In this particular situation, a (partial) eliminated (critical) component must be investigated. This is implicit in the hypothesis that a different restrictive component is included in the same causal group as ionizing radiation.
"Medicine",
"Environmental Science",
"Physics"
] |
A web-based GIS for managing and assessing landslide data for the town of Peace River, Canada
Assessment of geological hazards in urban areas must integrate geospatial and temporal data, such as complex geology, highly irregular ground surface, fluctuations in pore-water pressure, surface displacements and environmental factors. Site investigation for geological hazard studies frequently produces surface maps, geological information from borehole data, laboratory test results and monitoring data. Specialized web-based GIS tools were created to facilitate geospatial analyses of displacement data from inclinometers and pore pressure data from piezometers as well as geological information from boreholes and surface mapping. A variety of visual aids in terms of graphs or charts can be created in the web page on the fly, e.g. displacement vector, time displacement and summaries of geotechnical testing results. High-resolution satellite or aerial images and LiDAR data can also be effectively managed, facilitating fast and preliminary hazard assessment. A preliminary geohazard assessment using the web based tools was carried out for the Town of Peace River.
Introduction
Rapid urban development over the past 50 years has increased the exposure of communities to geohazards such as landslides, subsidence, rock falls, avalanches, frost, river ice jams, earthquakes, and flooding. Such events can have devastating consequences for any municipality as infrastructure can be severely damaged. Recent studies have shown that during the 20th century many of the casualties reported after rain storms, large floods and earthquakes were actually caused by landslides generated by these events (Fig. 1; Solheim et al., 2005). In today's risk-averse society, communities are expected to identify geohazards that affect their existing and planned developments and infrastructure and to prepare zoning maps based on these geohazards, as part of their infrastructure risk management and emergency response planning.
Geological hazards are complex processes involving both surface and subsurface conditions and their interactions with triggering factors (Renaud, 2000; Krahn, 2003). For many communities, assessing the risk associated with geological hazards is challenging, as there are often no formal guidelines for such procedures. Their assessment demands a thorough understanding of site characterization technology and complex geological processes in spatial and temporal environments (Tsai and Frost, 1999; Parsons and Frost, 2002). It is therefore important to accurately capture the essential features determined from the site characterization. Often a municipality will have a long history of site investigations with respect to geotechnical and environmental issues that is largely paper based and tied to specific projects or issues. The challenge is to synthesize these data into a common format that can be readily accessed by geo-engineering professionals. As noted by Morgenstern and Martin (2008), such synthesis can often lead to an enhanced understanding of the ground. Culshaw (2005) reviewed the advances in technology that have taken place over the past 20 years to facilitate data integration and outlined the developments needed to bring the technology into routine site investigations. Culshaw (2005) also noted that an enhanced understanding of ground conditions leads to a better definition of hazard zoning in urban environments, linking hazard, infrastructure and risk, and an improved recognition of where mitigation is required. New technologies are needed to facilitate the synthesis of data from site investigation and characterization for geological hazard assessment in urban areas (Culshaw, 2005). While there are many technologies in use today, the geographic information system (GIS) is increasingly viewed as a key tool for managing spatial and temporal data for natural hazards (Nathanail, 1998; Kimmance et al., 1999; Parsons and Frost, 2000; Lan et al., 2004, 2005; Kunapo et al., 2005; Forte et al., 2005; Köhler et al., 2006). A similar approach has been widely used for managing data for tunnel construction in urban areas, for example the GDMS (Geodata Data Management System) developed for the Porto and Torino Metro projects (Guglielmetti et al., 2008). Engineers responsible for hazard assessments could benefit through cross-fertilization and mutual support of different technologies (Brimicombe, 2003) and obtain more useful and robust solutions by effectively using various types of data. In this paper we describe a web-based GIS framework that was used to manage various kinds of data related to landslides in the Town of Peace River, Canada. Specialized tools were developed that facilitate geospatial and temporal analyses of displacement data from inclinometers and pore pressure data from piezometers, as well as geological information from borehole logs and surface mapping. A variety of x-y plots can be created in the web page on the fly, e.g. displacement vector, time displacement and other geotechnical test graphs. The web-based GIS can also effectively manage high-resolution satellite or aerial images and LiDAR data.
Web-GIS development
The Town of Peace River (Fig. 2) has been exposed to both flooding and landslide hazards since initial settlement almost 100 years ago (Ruel, 1988; Cruden et al., 1990). In contrast to many settlements located along the Peace River, the Town of Peace River is situated on the broad flood plains in the river valley bottom and on the unstable valley walls. Complex geology, including buried valleys (Cruden et al., 1997), as well as microclimatic conditions add to the complexity of possible geological hazards in this area. Significant growth of the Town of Peace River during recent years resulted in urban developments in areas that may be more vulnerable to geological hazards than considered during design and planning. It is important that these locations are identified and that stakeholders are informed about these geohazards and their associated consequences. In 2006 a project was initiated to provide a better understanding of the landslide processes and their extents for the stakeholders, for which the development of a web-based tool for visualizing data was required (Froese, 2007). The web-based system concentrates on improving the understanding of the extent, rate and drivers of landslide geohazards in this area.
System framework
Lan and Martin (2007) outlined a workflow for general geotechnical engineering problems (Fig. 3). It can be summarized in three stages:
1. Stage 1 involves the data collection, management and synthesis.
2. Stage 2 is the development of a comprehensive ground model that includes ground behaviour.
3. Stage 3 is the engineering analyses.
The web-based GIS tools were designed to facilitate the tasks in Stage 1. The data handled in Stage 1 can be basic site investigation data (such as geomorphology and geology conditions) and diverse, continually evolving geotechnical parameters (such as displacement and pore pressure readings from geotechnical instruments). The tasks in this stage involve data collection, data design, data integration, data presentation, data visualization and data communication. GIS (Geographic Information System) technology provides effective tools for handling, integrating and visualizing diverse spatial data sets (Brimicombe, 2003). The functionalities of web-based GIS technology play an essential role in Stage 1 in collecting, storing, analyzing, visualizing and disseminating geospatial information. However, most GIS tools have limitations in representing time-series data such as displacement data from inclinometers or pore pressures from piezometers. Additional functional tools were required to address these deficiencies and were implemented as add-on functions to commercial GIS software. These tools provide the capability for users to interact and communicate with various geotechnical data sets.
A three-tier architecture has been used for the geohazards Web-GIS system (Fig. 4): a Database tier, a Server tier and a Client tier. Users in the Client tier act as terminals connected via the Internet/Intranet. Some users, such as members of the public, can simply use internet browsers (Internet Explorer or Firefox) to access the data and functions that the server provides. Other professional users, such as consulting companies, government agencies or university researchers, can use more powerful desktop tools (i.e., ArcMap) to access the Server and perform sophisticated tasks. The customized tools are shown as a toolbar extending the desktop package. ArcGIS Server, one of the server GIS products from ESRI, was chosen as the basic platform for the server application. It offers open web access to maps, analyses, models, and facilities to add user functionality. It is a server-based GIS that enables organizations to share information through focused, easy-to-use web services and applications. The additional functions were implemented as add-on tools linked to the application server using the ADF (Application Development Framework) development kit and Visual Studio .NET, a software development package by Microsoft. The use of a database engine (such as ArcSDE) can boost the efficiency and speed of data access. A database engine was not used at the initial stage of the Peace River project, but may be employed in the future as the amount of data increases.
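As a rough sketch of the three-tier idea, and not of the actual ArcGIS Server/ADF implementation used in the Peace River System, the example below exposes inclinometer readings held in a small database through an HTTP service that a browser or desktop client could query; the table, field and borehole names are illustrative.

```python
# Minimal three-tier sketch: SQLite database tier, HTTP server tier, browser/desktop client tier.
# Table and field names are illustrative only, not those of the Peace River System.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

DB = "peace_river_demo.db"

def init_db():
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS reading "
                "(borehole TEXT, date TEXT, depth_m REAL, displacement_mm REAL)")
    con.execute("INSERT INTO reading VALUES ('TH05-03', '2005-10-01', 17.0, 12.4)")
    con.commit()
    con.close()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /readings?borehole=TH05-03 returns the time series as JSON
        query = parse_qs(urlparse(self.path).query)
        borehole = query.get("borehole", [""])[0]
        con = sqlite3.connect(DB)
        rows = con.execute("SELECT date, depth_m, displacement_mm FROM reading "
                           "WHERE borehole = ? ORDER BY date", (borehole,)).fetchall()
        con.close()
        body = json.dumps([{"date": d, "depth_m": z, "displacement_mm": u} for d, z, u in rows])
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    init_db()
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```

In a production setting the server tier would of course sit behind the GIS server and authentication layer rather than a bare HTTP handler; the sketch only shows how the client, service and storage responsibilities are separated.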
Database
A web-based central database was used in the system to enable effective data collection, maintenance, upgrading and communication. It is the core component of the Peace River System (PRS). The project is based on the historical and recent data available for the Town of Peace River that have been collected over the past 30 years. These data include borehole data, geotechnical parameters, field mapping, testing, monitoring, groundwater information and geophysical investigations and reports. Data from over 1400 boreholes were available in this area, comprising geotechnical boreholes, oil and gas wells and groundwater wells. The long history of geohazards encountered in the Town of Peace River has created a rich data legacy. These data existed in the form of consulting and government agency reports, mainly in hard copy (paper) format. The Alberta Geological Survey (AGS) undertook the task of collecting and processing the primary data. All the data were digitized and input into a central database located on a database server at AGS.
The data related to geohazard site investigation can be categorized into three classes: (1) geospatial data, (2) non-spatial attribute data and (3) temporal data. The geospatial data involve the slope surface, slope boundary, subsurface geology, groundwater conditions, borehole locations, locations of rainfall gauges and the location of other infrastructure, such as roads, buildings, and utilities. Non-spatial data consist of attribute information that mainly includes borehole details (collar coordinates and downhole survey information) and geotechnical parameters from laboratory or field tests. The temporal data are related to environmental data, such as rainfall, and monitoring data from instruments, such as displacement and pore-water pressure readings. All the data can be organized in a relational database based on a standard geotechnical structure and linked to the geospatial layers. Table 1 shows an example of the standard structure for managing inclinometer data and geological data in a relational database. Each dataset is composed of multiple tables managing different information. For example, the inclinometer dataset consists of installation, survey and data tables. This database structure is fairly standard today and is supported by many engineering software packages. Different datasets are related to each other using key fields. A central storage location for data with restricted access minimizes the chances of data errors.
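The relational structure described above, with each instrument dataset split into installation, survey and reading tables joined by key fields, can be sketched as follows; the field names in Table 1 are not reproduced here, so the names and values below are illustrative only.

```python
# Illustrative relational structure for inclinometer data: an installation table,
# a survey table and a reading table joined through key fields (names are hypothetical).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE installation (
    hole_id   TEXT PRIMARY KEY,   -- key field joining the tables
    easting   REAL, northing REAL, collar_elev_m REAL
);
CREATE TABLE survey (
    survey_id INTEGER PRIMARY KEY,
    hole_id   TEXT REFERENCES installation(hole_id),
    date      TEXT
);
CREATE TABLE reading (
    survey_id INTEGER REFERENCES survey(survey_id),
    depth_m   REAL,
    displacement_mm REAL
);
""")
con.execute("INSERT INTO installation VALUES ('TH05-03', 477000.0, 6222000.0, 330.0)")
con.execute("INSERT INTO survey VALUES (1, 'TH05-03', '2005-10-01')")
con.execute("INSERT INTO reading VALUES (1, 17.0, 12.4)")

# Join the three tables to list displacement readings with their location and date.
rows = con.execute("""
    SELECT i.hole_id, s.date, r.depth_m, r.displacement_mm
    FROM reading r JOIN survey s ON r.survey_id = s.survey_id
                   JOIN installation i ON s.hole_id = i.hole_id
""").fetchall()
print(rows)
```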
Interface and functions
The main Peace River System interface is shown in Fig. 5. All the information has been integrated seamlessly into the website. Access to the GIS server is embedded inside the web application and is typically hidden from the user of the application. The user interface is divided into three panels: the left functional panel, the middle map panel and the right data management panel. The middle and right zones are responsible for data management and data display. By turning on different layers in the right panel, various datasets can be quickly and seamlessly displayed in the middle map panel. High-resolution imagery such as Quickbird images and LiDAR (Light Detection and Ranging) data can also be easily managed.
The left panel contains general and advanced GIS toolsets for performing navigation, data queries, specialized analyses and 3-D model manipulations. The general toolset includes navigational functions such as zoom in, zoom out, pan and full extent. A Help function and a Refresh function for redrawing map layers are also part of the toolset. The advanced toolset consists of more sophisticated tools, such as querying features in a dataset individually (Identify), selecting multiple features interactively based on their spatial location (Select Feature(s)), and specifying search criteria based on dataset attributes (Query). The 3-D Model tool enables users to manipulate and view high-resolution remote sensing images such as RadarSat, QuickBird, orthophotos, LiDAR and Digital Elevation Models (DEM) through 360°. A customized geotechnical data analysis tool, called the Geotechnical Tool, was developed to improve the efficiency of site investigation and site characterization for geohazard assessment. Users can use this tool to explore the geotechnical information of any borehole in the map panel. Geotechnical information includes time-series displacements from inclinometers, pore pressures from piezometers, geotechnical test results and stratigraphy from geological/geotechnical logging.
The Geotechnical Tool provides all of the standard types of plots for analyzing slope inclinometer data. The user can quickly check the cumulative displacement plot and the incremental displacement plot. From these standard plots, discrete movement zones can be defined by specifying from-to depths. The resulting time-displacement plots for these discrete zones show acceleration or deceleration of slope movement. In addition to the displacement-versus-time plots, plots of displacement vector directions and displacement rates offer the ability to identify and evaluate the spatial and temporal character of the deformations (Fig. 6).
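A minimal sketch of the quantities behind these plots is given below, assuming the lateral offsets per depth interval have already been resolved from the inclinometer surveys; the interval, dates and readings are invented.

```python
# Sketch of standard inclinometer quantities (illustrative numbers only).
# Incremental displacement per depth interval is the change in lateral offset
# between two surveys; cumulative displacement sums the increments from the
# casing bottom upward; zone velocity is the change over the elapsed time.
from datetime import date

interval_m = 0.5                                   # reading interval along the casing
offset_oct_2005 = [0.0, 0.2, 0.3, 2.1, 2.2, 2.3]   # lateral offset (mm) per interval, bottom->top
offset_may_2007 = [0.0, 0.3, 0.4, 6.0, 6.2, 6.3]

incremental = [b - a for a, b in zip(offset_oct_2005, offset_may_2007)]

cumulative = []
total = 0.0
for d in incremental:          # accumulate from the bottom of the casing upward
    total += d
    cumulative.append(total)

# Velocity of a discrete movement zone (here intervals 3-4) between the two surveys.
days = (date(2007, 5, 1) - date(2005, 10, 1)).days
zone_disp_mm = sum(incremental[3:5])
velocity_mm_per_day = zone_disp_mm / days
print(cumulative, round(velocity_mm_per_day, 4))
```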
In order to quickly and easily compare different datasets, plots of several datasets can be created in a single form. Figure 7 shows a comparison plot of deformation data, geological data and geotechnical test data.
Application and discussion
An understanding of the deformation process is an essential part of landslide hazard assessment. This understanding should involve not only a comprehensive spatial assessment, but also a temporal one. Landslide problems in an urban setting usually develop as the urban infrastructure grows; hence landslide investigations are usually carried out over decades at various locations. As discussed in the previous section, a central database has been established for dealing with the spatial and temporal data, and multi-level clients can connect to it. With the support of the web-based GIS tools, assessments at different levels can be conducted in addition to general mapping. When conducting studies of multiple landslide events, the first step is to synthesize the available baseline data. Such a synthesis can reveal the ground deformation patterns in space and their history in time. These deformations directly impact infrastructure such as pipelines and roads. Using the data available for the Town of Peace River, a preliminary geohazard assessment was carried out using the information extracted from the historical records. The process used to enable this assessment is described below.
Landslide catalogue
One of the first steps of the project was the transfer of the documented landslides from the historical paper records to the digital database. These historic landslides first had to be geo-referenced and the relevant data compiled and entered into the database. The effort for this was significant, as instrumentation data and geotechnical laboratory test data also had to be compiled and checked. Figure 8 shows the inventory of the historical landslides impacting the Town of Peace River combined with high-resolution LiDAR and optical Quickbird imagery. The bare-earth model from the 2007 LiDAR still captures the outlines of the historic landslides even though some of these landslides happened more than 20 years ago. The web-based GIS tools provide an excellent means of communicating to the stakeholders the spatial location and temporal distribution of these historic landslides, even though many of the slides may not be active now. Figure 8 shows the spatial and temporal distribution of 8 landslides at 6 sites along the river valley, and Table 2 shows the location, time of occurrence and sliding type of these 8 landslides. The top panels in Fig. 8 show oblique views of two specific slides that will be discussed in more detail below: the Shop Slide (left) and the 99/101 Street slides (right).
There are a significant number of landslide-related problems along both the west and east banks of the Peace River. Two examples of such landslides are the Shop Slide, which occurs on the west valley wall, and the 99/101 Street slides on the east valley wall (Fig. 8). For each of these slides, the scarps and the major translational blocks moving along the rupture surfaces can be easily identified from the high-resolution digital elevation model derived from the LiDAR data, which can be viewed in both 2-D and 3-D in the web-GIS tools.
Shop Slide
The Shop Slide is located along the old Highway 2 where it climbs the valley wall of the Peace River. The road consists of cut and fill, with a height of 30 m and a 4:1 (horizontal to vertical) slope inclination. The general subsurface stratigraphy of the Shop Slide consists of a lacustrine clay deposit, a till deposit and a bedrock formation (Fig. 9). Clays deposited on top of the slope may be mixed with the embankment fill used for the road construction. Beneath the clay fill, the post-glacial lacustrine clay is underlain by glacially deposited, overconsolidated clay till deposits. The clay shale bedrock encountered in the lowest part of the slide is part of the marine Shaftesbury Formation of Cretaceous age. Some coarse material deposits, including sand, silt and gravel, were found in the nearly flat-lying ground below the CN rail track (see Fig. 9). The approximate failure surface can be estimated from the locations of ground movement established in several boreholes using slope inclinometers. The failure surface is located in the clay deposits with a maximum depth of 17 m below the ground surface adjacent to borehole TH05-03. Based on the slope inclinometer measurements, the failure surface of the Shop Slide can be divided into two parts. One, which is located on the upper slope above the old Highway 2, appears to be old and less active. The other, located on the lower slope above the CN rail, is active and directly affects the road. This interpretation is based on the notion, from historical data for the Peace River valley, that this type of slide is expected to be a translational block slide.
99/101 Street slides
The 99/101 Street slides can be divided into three sections, the end of 101 Street (A-B), the end of 99 Street (E-F), and the transition zone (C-D), owing to the relatively large areas affected by slope instabilities on the east bank. The end of 101 Street is located at the southern end of the residential subdivision. The topography of this area was changed in 1974 and 1975 by a large placement of fill to facilitate the development of the residential subdivision. The end of 99 Street is located north of the intersection of 99 and 101 Streets. The initial ground movement there occurred in 1985. Instabilities in this area mainly consist of three different portions of slides: shallow surface slides in the slope below 99 Street and a shallow translational slide affecting the 99 Street pavement, which occurred consecutively with the surficial slide in 1985. The transition zone is located a little further south of the end of the 99 Street slide area (i.e., between the ends of 99 and 101 Streets). Movement of the slides was observed in 1992 and accelerated in 1993. Between 1993 and 1994, a number of homes were demolished and a large berm was constructed to slow the movements that were impacting the subdivision.
It is evident from the subsurface stratigraphy and borehole data that a general failure surface controls the translational movement at 99 and 101 Streets (Fig. 10). The slope instability area has an elevation of 330 to 333 m and is located in the Shaftesbury clay shale formation.
Geology and landslide kinematics
One of the major advantages of being able to systematically examine the various landslide features with the web-GIS tools is the ability to explore the landslide kinematics and establish the geological factors controlling the formation of the rupture surface. The borehole data from the central database were utilized, along with field mapping, to build a regional geological model of the subsurface as well as models at each landslide location. Figure 11 shows a schematic cross section of the geology at the Town of Peace River, along with the stratigraphic column proposed by Morgan et al. (2008). Within this geological framework, the borehole information at each specific landslide can be utilized to determine where the landslide site fits within the larger geological framework. The inclinometer data can then be used to establish the location of the rupture surface in order to establish the kinematics of the deformation patterns. Using the web-GIS tools, the deformation velocities of shear zones at each site can be easily extracted and explored (see Figs. 6 and 7). Spatial interpolation methods can then be applied to these site data to create velocity raster layers for different time periods, which allow the spatial variability of slope deformation to be visualized (Nathanail and Rosenbaum, 1998). The movement rate of the shallow zone changed frequently while that of the deeper zone remained relatively constant, which indicates that the shallow zone is more sensitive to minor changes in boundary conditions such as rainfall. Localized small-scale slope failures are more prone to occur in this shallow zone. Such information is important when evaluating the impact of these movements on existing infrastructure and when evaluating various mitigative measures.
Geostatistical interpolation of the site movement data enables us to examine the spatial distribution of the slope movements measured in boreholes. Figure 14 shows the velocity distribution in October 2005 and May 2007 in the area of the Shop Slide on the west bank of the Peace River. It can be seen that the spatial distribution of slope deformation in this region was similar for both time periods, although there appears to be a slight change in the direction of slope movement. The velocity and displacement history of site TH05-3 from October 2005 to May 2007 were also plotted. The magnitude of the deformation velocity decreased in the winter season and accelerated during the summer season (Fig. 14). These minor and subtle changes in velocity are likely related to the shallow portions of the landslide responding to the rainfall that occurs during the spring and early summer period. Utilizing these data with the 2-D and 3-D LiDAR images displayed in the web-GIS, zones with similar slope morphology can be used to bound the data and compare movement rates in different zones of the landslides, which typically move differentially. This type of displacement information can aid in developing hazard-zoning maps for the slide.
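A minimal sketch of this kind of interpolation is inverse-distance weighting of borehole movement rates onto a regular grid; an actual workflow would normally rely on the geostatistical tools of the GIS itself, and the coordinates and rates below are invented.

```python
# Minimal inverse-distance-weighted (IDW) interpolation of borehole movement
# rates onto a regular grid; point locations and rates are invented.
import math

# (x, y, velocity in mm/day) for a few instrumented boreholes
points = [(0.0, 0.0, 0.02), (50.0, 10.0, 0.08), (20.0, 60.0, 0.01)]

def idw(x, y, power=2.0):
    num, den = 0.0, 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d < 1e-9:
            return v                 # exactly on a data point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Build a small velocity raster (10 m cell size) covering the slide area.
raster = [[idw(col * 10.0, row * 10.0) for col in range(7)] for row in range(7)]
print(raster[0][:3])
```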
Conclusions
In general, the assessment of geohazards in urban areas must address complex geological, geotechnical and hydrological issues. To improve the understanding of these interacting processes, new knowledge and techniques are required. The development of specialized tools, specifically tailored to link spatial and temporal data to geotechnical analysis, is needed so that parameter variability and hazard analyses can be linked. A web-based GIS tool was designed and developed to enable effective integration of the data collected from site characterization. The interface and specialized tools were enhanced by providing the capability, within the web-GIS system, to interactively query and analyze the various datasets. The tools developed are specifically designed to facilitate geohazard assessment and to enhance geological modeling using geological stratigraphy information and geotechnical data from traditional boreholes, landslide inventories from high-resolution LiDAR, aerial photos and historical documents, and landslide kinematics derived from velocity information from continuous monitoring of instruments. The web-GIS tool offers functions that provide better information for proactive planning and for developing mitigation strategies. The tool also provides an efficient method to communicate the effects of geohazards to stakeholders.
Fig. 1. Comparison of casualties from different hazards in the 20th century (Source: EM-DAT: The OFDA/CRED International Disaster Database).
Fig. 2. Location of the Town of Peace River, Alberta. The map in the left corner shows a shaded digital elevation model derived from SRTM (Shuttle Radar Terrain Model) and LiDAR (Light Detection And Ranging) datasets.
Fig. 3. The architecture of the integrated system for geotechnical engineering solutions. It is composed of three different stages which require the implementation of specific tasks (modified from Lan and Martin, 2007).
Fig. 5. Interface of the Web-GIS for the Peace River Project.
Fig. 6. Sample results of the specialized geotechnical tools, showing the different plots for inclinometer readings. The displacement vectors are shown directly on the plan map.
Fig. 7. Example of comparing different datasets: inclinometer data versus geological stratigraphy and geotechnical test data.
Fig. 8. The landslide distribution in the Peace River valley. The maps at the top show oblique views of the Shop Slide (left) and the 99/101 Street slides (right). The map at the bottom right shows the temporal distribution of the landslide occurrences.
Fig. 9. Cross section of the Shop Slide (A-B). The location of the section is shown in Fig. 8.
Fig. 10. Major slide blocks and their cross sections in the 99 and 101 Streets slide area. The locations of the cross sections are shown in Fig. 8.
Fig. 11. General stratigraphy of the Peace River valley, after Morgan et al. (2008). Vertical cross section A-A' as shown in the plan of Fig. 2. Horizontal distance 18 km, 12× vertical exaggeration.
Fig. 12. Classification of the movement rate of the sliding masses in the Peace River valley.
Fig. 13. Displacement (movement) history of three boreholes within the 99/101 Street landslide and the measured rainfall.
Fig. 14. Movement rate of the Shop Slide on the west bank of the Peace River. The top two raster images show the spatial distribution of velocity at two time periods (October 2005 and May 2007).
Table 1. Database structure for geology and slope inclinometer readings. The fields in bold are key fields joining the different data tables.
Table 2. Example of the landslide inventory in the Town of Peace River | 5,695.2 | 2009-08-14T00:00:00.000 | [
"Environmental Science",
"Geography",
"Geology"
] |
Sequential Farkas lemmas for convex systems
In this paper we introduce two new versions of Farkas lemma for two kinds of convex systems in locally convex Hausdorff topological vector spaces which hold without any constraint qualification conditions. These versions hold in the limits and will be called sequential Farkas lemmas. Concretely, we establish sequential Farkas lemmas for cone-convex systems and for systems which are convex with respect to a sublinear function. The first result extends some known ones in the literature while the second is a new one.
INTRODUCTION
Farkas lemma is one of the most important results from fundamental mathematics. It is equivalent to the Hahn-Banach theorem [10] and has had many applications in economics [9], in finance [8], in mechanics, and in many fields of applied mathematics such as mathematical programming and optimal control. For more details, see the recent survey paper [7].
The first correct version of Farkas lemma for a linear system was introduced by the physicist Gyula Farkas in 1902. Since then, many generalized versions of this "lemma" have been proved. Most of these extensions are non-asymptotic versions and hold under some qualification conditions [5,7]. In recent years, several asymptotic versions of the generalized Farkas lemma have been established and have found many applications in optimization problems [4,6,11].
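For orientation, the classical finite-dimensional statement that these generalizations extend can be written as follows; this standard form is given here only for reference and is not quoted from the paper under discussion.

```latex
% Classical Farkas lemma (standard finite-dimensional form).
% For a matrix A in R^{m x n} and a vector c in R^n, the following are equivalent:
\begin{align*}
\text{(i)}\;& \langle c, x\rangle \ge 0 \quad\text{for every } x\in\mathbb{R}^{n} \text{ such that } Ax \ge 0,\\
\text{(ii)}\;& \text{there exists } \lambda\in\mathbb{R}^{m},\ \lambda\ge 0,\ \text{such that } c = A^{\top}\lambda .
\end{align*}
```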
In this paper, we first introduce an asymptotic version of Farkas lemma for a cone-convex system which extends some known results in the literature [6,11]. Imitating the idea in [5], we then establish the corresponding asymptotic version of Farkas lemma for systems which are convex with respect to a sublinear function; this result appears here for the first time and may pave the way for applications in some areas of applied mathematics.
NOTATIONS AND PRELIMINARIES
Let $X$ and $Y$ be locally convex Hausdorff topological vector spaces (l.c.H.t.v.s.) with topological dual spaces $X^{*}$ and $Y^{*}$, endowed with the $w^{*}$-topologies, respectively. Given a set $A \subset X^{*}$, we denote by $\operatorname{cl} A$ the closure of $A$ (with respect to the $w^{*}$-topology). The set of all proper, lower semi-continuous (lsc) and convex functions on $X$ is denoted by $\Gamma(X)$.
The conjugate function of a proper function $f : X \to \mathbb{R} \cup \{+\infty\}$ relative to a set $C \subset X$ is $f_C^{*}(x^{*}) = \sup\{\langle x^{*}, x\rangle - f(x) : x \in C\}$, and the indicator function of the set $C$ is $i_C(x) = 0$ if $x \in C$ and $i_C(x) = +\infty$ otherwise. We add to $Y$ a greatest element with respect to $K$, denoted by $\infty_K$, which does not belong to $Y$, and set $y + \infty_K = \infty_K$ for every $y \in Y$. We shall also use the standard conventions on subsets of the product space and the order relation induced by $K$. Given an extended sublinear function $S : Y \to \mathbb{R} \cup \{+\infty\}$, we adapt the notion of $S$-convexity (i.e., convexity with respect to a sublinear function, see [13]) and introduce the corresponding notion for an extended sublinear function. Some properties of the limit inferior and limit superior of nets of extended real numbers are quoted in the next lemma.
Let $(a_i)$ and $(b_i)$ be nets of extended real numbers. Then $\liminf_i (a_i + b_i) \ge \liminf_i a_i + \liminf_i b_i$ and $\limsup_i (a_i + b_i) \le \limsup_i a_i + \limsup_i b_i$, provided that the right-hand sides of the inequalities are well-defined. Moreover, the equalities hold whenever one of the nets is convergent.
SEQUENTIAL FARKAS LEMMA FOR K-CONVEX SYSTEMS
In this section we will introduce a version of Farkas lemma for cone-convex systems which holds in the limit form without any qualification condition.The result extends the ones in [6], [11] and [4] in some senses.
Theorem 2 [Approximate Farkas lemma 1]
Let Y X , be locally convex Hausdorff topological vector spaces, Then the following statements are equivalent: Thus, there are nets * * * ( ) ,( ) , , . Note also that for any and so, it follows from the definition of Taking the limit superior both sides of (2) we get 0 = ( ) limsup Assume that (ii) holds, i.e., there exists a net for all .
The proof is complete.
Remark 3. The equivalence between (i) and
(ii) in Theorem 2 was established in [6], [11] under the assumption that $C = X$ and $g$ is a continuous, $K$-convex mapping with values in $Y$. This equivalence was also established in [4] by another approach, called the dual approach, for the case where $g : X \to Y$, under assumptions that are much stronger than our assumption that $g$ is $K$-epi-closed (see [2]). Our result extends all the results in the mentioned papers.
SEQUENTIAL FARKAS LEMMA FOR SUBLINEAR-CONVEX SYSTEMS
In this section, we will establish a sequential version of Farkas lemma for systems of inequalities involving sublinear-convex mappings.The key tools used here are the technique of switching a sublinear-convex system to a cone-convex system used in the recent paper [5] and Theorem 2 from the previous section.
(5) Then the following statements are equivalent: 1 When this condition holds, it is also said that the function is (S, g)-compatible [13] Now let K ~ be the closed convex cone defined by 0} ) , ( : Then it is easy to see that g ~ is K ~-convex as well. The assumption (4) and K playing the roles of X , Y , C , g , f , and K , respectively, and with 0. = From (a) and the definitions of f ~, g ~, we have which shows that (i) from Theorem 2 holds, and hence, by this theorem there exists a net Since Therefore, (7) can be rewritten as . | 1,247.4 | 2014-12-31T00:00:00.000 | [
"Mathematics"
] |
An Introduction to Next Generation Sequencing Bioinformatic Analysis in Gut Microbiome Studies
The gut microbiome is a microbial ecosystem which expresses 100 times more genes than the human host and plays an essential role in human health and disease pathogenesis. Since most intestinal microbial species are difficult to culture, next generation sequencing technologies have been widely applied to study the gut microbiome, including 16S rRNA, 18S rRNA, internal transcribed spacer (ITS) sequencing, shotgun metagenomic sequencing, metatranscriptomic sequencing and viromic sequencing. Various software tools were developed to analyze different sequencing data. In this review, we summarize commonly used computational tools for gut microbiome data analysis, which extended our understanding of the gut microbiome in health and diseases.
Introduction
The gut microbiome is a complex ecosystem with great impacts on the overall health of the host [1][2][3]. These microorganisms living in the gastrointestinal tract have various functionalities, such as absorption of nutrients and minerals, fermentation of fibers to short-chain fatty acids, synthesis of vitamins, breakdown of toxic components, and regulation of the immune system. The gut microbiome changes over time depending on host's age and dietary habits [4]. Its status is in close correlation to many diseases such as liver diseases [5][6][7], diabetes [8], inflammatory bowel disease [9,10], autoimmune diseases [11,12], colorectal cancer [13] and diseases of the central nervous system [14].
Widely used high-throughput sequencing methods in microbiome research include PCR amplicon-based sequencing, e.g., 16S rRNA, 18S rRNA, and internal transcribed spacer (ITS) sequencing, DNA-based shotgun metagenomic sequencing, RNA-based metatranscriptomic sequencing, and viromic sequencing (Figure 1). The first decade of gut microbiome research has mainly focused on DNA-based 16S rRNA gene sequencing and shotgun metagenomic sequencing, which elucidate the microbial composition and gene content. Recently, more attention has been drawn to the RNA-based approach, metatranscriptomic sequencing, as well as to fungi and viruses, instead of solely focusing on bacteria. Various computational techniques have been developed to analyze different types of high-throughput sequencing data. The best practices for performing a microbiome study have been reviewed by Knight et al., including experiment design, choice of molecular analysis technology, etc. [15]. In this review, we will summarize commonly used computational tools for the analysis of different types of sequencing data in gut microbiome studies, which help to extend our knowledge of the role the gut microbiome plays in human health and disease pathogenesis.
16S rRNA Sequencing
The 16S ribosomal RNA subunit gene contains both regions that are conserved throughout bacterial species and hypervariable regions that are unique to specific genera. 16S rRNA sequencing has been widely used to characterize bacterial communities; it utilizes PCR to target and amplify portions of the hypervariable regions (V1-V9) of the bacterial 16S ribosomal RNA subunit gene. Various bioinformatics tools have been developed in the last decade to analyze 16S rRNA sequencing data, with most of them containing three core steps: data preprocessing and quality control, taxonomic assignment, and community characterization (Figure 2). Quality control is the first step in the analysis pipeline, which includes quality checking, adapter removal, filtering and trimming to remove artifacts, low-quality and contaminant sequencing reads resulting from sample impurities or inadequate sample preparation steps [16]. Many quality control software packages use the PHRED algorithm score to assess base quality [17]. Taxonomic assignment is a key step in the 16S rRNA sequencing data analysis pipeline. Currently, there are two different strategies to perform this analysis: operational taxonomic unit (OTU)-based analysis and amplicon sequence variant (ASV)-based analysis. OTUs are determined by sequence similarity. Reads are considered the same OTU when their sequence similarity reaches a predefined similarity threshold, most commonly 97% [18]. Generally, an OTU-based analysis first clusters sequences into different OTUs and then performs taxonomic assignment. Many OTU-based methods have been developed, such as UCLUST [19], UPARSE [20], CD-HIT [21], hc-OTU [22], ESPRIT [23], and ESPRIT-TREE [24]. On the other hand, an ASV-based analysis does not resolve sequence variants by an arbitrary dissimilarity threshold as used in the OTU-based analysis. Instead, ASV-based methods utilize a denoising approach to infer the biological sequences in the sample before the introduction of amplification and sequencing errors, which allows sequences differing by as little as a single nucleotide to be resolved [25]. Therefore, an ASV-based analysis is able to provide a higher-resolution taxonomic result. Several ASV-based methods have been developed, including DADA2 [26], UNOISE 2 [27], and Deblur [28]. In the following part, we will introduce three representative tools that have been successfully and widely applied in 16S analysis starting from raw sequencing data: Quantitative Insights Into Microbial Ecology (QIIME) [29,30], Mothur [31], and DADA2 [26]. QIIME 1 [29] and its next generation, QIIME 2 [30], are open-source bioinformatics platforms for microbial community analysis and visualization. A typical 16S analysis workflow in QIIME 1 is: (1) Demultiplexing and quality filtering, which assigns the multiplexed reads to each sample and filters sequences that cannot meet defined quality thresholds; (2) Chimera detection and filtering, which applies ChimeraSlayer or USEARCH 6.1 to remove chimeric sequences; (3) OTU picking and taxonomy assignment, in which sequences are clustered into OTUs based on their sequence similarity, and taxonomy is assigned to each representative sequence of the OTUs; (4) Community analysis, in which the community composition, phylogenetic tree, and alpha- and beta-diversity can be computed or analyzed based on OTU tables.
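To make the 97% OTU notion above concrete, the toy sketch below greedily assigns reads to centroid sequences at that threshold. It is only an illustration of the clustering idea: real OTU pickers such as UCLUST or UPARSE use alignment-based identity and many heuristics, and the reads shown are invented.

```python
# Toy greedy OTU clustering at 97% identity (illustrative only; real OTU pickers
# such as UCLUST/UPARSE use alignment-based identity and many optimizations).
def identity(a, b):
    # naive identity for equal-length sequences; real tools align first
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def cluster(reads, threshold=0.97):
    centroids, assignment = [], []
    for read in reads:
        for i, c in enumerate(centroids):
            if identity(read, c) >= threshold:
                assignment.append(i)      # join an existing OTU
                break
        else:
            centroids.append(read)        # read becomes a new OTU centroid
            assignment.append(len(centroids) - 1)
    return centroids, assignment

reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGT",
         "ACGTACGTACGTACGTACGTACGTACGTACGTACGA",   # 1 mismatch -> same OTU
         "TTTTACGTACGTACGTACGTACGTACGTACGTACGT"]   # 3 mismatches -> below 97%, new OTU
print(cluster(reads)[1])                            # [0, 0, 1]
```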
QIIME 2 allows third parties to contribute functionality, and many latest-generation tools are embedded into the system as QIIME 2 plugins, such as DADA2 denoising and filtering. Moreover, in addition to the command-line interface like QIIME 1, QIIME 2 provides the QIIME 2 Studio graphical user interface, which is much friendlier for end-user biologists. Comparing with most of the software, both QIIME 1 and QIIME 2 provide many interactive visualization tools that allow users to generate principal coordinate analysis (PCoA) plots, alpha rarefaction plots and taxonomic composition bar plots.
Mothur is another well-known package [31]. Mothur website provides examples for data acquired from different sequencing platforms, including Illumina, Pyrosequencing, and Sanger sequencing. For Illumina 16S data, a typical analyzing workflow includes the following steps: quality control, sequence alignment, chimera removal, assignment of sequences to OTUs, analysis of community characters including taxonomy composition and diversities. Mothur is originally designed for OTU-based analysis, but the current version of Mothur also supports ASV-based analysis, in which cleaned sequences can be assigned to ASVs and taxonomy information can be analyzed based on the ASV table. The performance of Mothur and QIIME system in 16S data analysis has been compared by many previous studies in different contexts [32][33][34]. Although several differences were found between these two tools, both Mothur and QIIME can provide reliable bacterial community information and generate comparable results in general [32,33].
DADA2 is an ASV-based analysis package that utilizes DADA2 algorithm [26], a model-based approach for correcting amplicon errors without constructing OTUs. The basic analyzing workflow in DADA2 includes the following steps: quality control which filters and trims low-quality reads; sample inference and ASV table construction in which sequence variants are inferred by DADA2 algorithm and ASVs are summarized; removal of chimeric ASVs; taxonomic assignment to generate taxonomy tables. DADA2 can resolve fine-scale variation and thus provide a more accurate analysis than other OTU-based methods. DADA2 can perform species-level analysis by matching ASVs to sequenced reference strains, while traditional OTU-based methods only can provide genus or above level taxonomic information.
Although both OTU- and ASV-based methods provide phylogenetic information, basic 16S analysis methods generally cannot provide the functional gene composition of a bacterial community. However, phylogeny is strongly correlated with biomolecular function, which makes it possible to predict metagenome functional content from 16S data. Several software tools have been developed to predict the functional composition of a microbial community's metagenome from 16S data, such as phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) [35,36] and Tax4Fun [37].
The PICRUSt algorithm comprises two steps [35]. The first is called "gene content inference", which predicts gene content for organisms in the Greengenes phylogenetic tree by using existing annotations of gene content and 16S copy number from sequenced bacterial and archaeal genomes in the IMG database. This step is pre-calculated, and thus users are not required to perform it during data analysis. The second step is "metagenome inference", in which the functional gene family counts as well as the abundance of functional pathways for each sample are predicted and summarized based on the input OTU table. The input OTU table can be generated by other 16S analysis software, such as QIIME and Mothur. PICRUSt2 [36] is the optimized version of PICRUSt. In addition to an updated and larger database of gene families and reference genomes, PICRUSt2 is compatible with ASV-based 16S analysis. Its input file can be either an OTU table or an ASV table, while the PICRUSt input is restricted to OTU tables. PICRUSt2 is now embedded in the QIIME 2 system as a QIIME 2 plugin [30].
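The "metagenome inference" arithmetic described above can be illustrated with a toy calculation; the OTU abundances, 16S copy numbers and gene family counts below are invented rather than taken from any PICRUSt reference table.

```python
# Toy illustration of the metagenome-inference step: normalize OTU abundances by
# 16S copy number, then weight precomputed per-genome gene family counts.
# All numbers are invented and not taken from any PICRUSt reference table.
otu_abundance = {"OTU_1": 120, "OTU_2": 30}
copy_number_16s = {"OTU_1": 4, "OTU_2": 1}                  # inferred 16S copies per genome
gene_counts = {                                              # inferred gene family counts per genome
    "OTU_1": {"K00001": 2, "K00002": 0},
    "OTU_2": {"K00001": 1, "K00002": 3},
}

predicted = {}
for otu, abundance in otu_abundance.items():
    cells = abundance / copy_number_16s[otu]                 # copy-number-normalized abundance
    for family, count in gene_counts[otu].items():
        predicted[family] = predicted.get(family, 0.0) + cells * count

print(predicted)   # {'K00001': 90.0, 'K00002': 90.0}
```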
The R package, Tax4Fun [37], also predicts the functional capabilities of microbial communities based on 16S data but adopts a different strategy than PICRUSt. Tax4Fun predicts the metagenome functional content by the nearest neighbor identification based on a minimum 16S rRNA sequence similarity, while PICRUSt performs this by analyzing the topology of the Greengenes phylogenetic tree as described above. The input of Tax4Fun could be the OTU table obtained through QIIME analysis (against the SILVA database) or from the analysis in SILVAngs web server. The functional capabilities of the inputted microbial community are predicted using the precomputed reference profiles of the KEGG organisms. A recent study has indicated that the application of PICRUSt, PICRUSt2, and Tax4Fun on non-human and environmental samples is limited by their default databases [38]. Tax4Fun2 [39] is the updated version of Tax4Fun. Compared with the old version, Tax4Fun2 allows users to build their own reference data sets, which may enhance the accuracy and robustness of predicted functional profiles by utilizing user-defined, habitat-specific metagenome databases. Moreover, Tax4Fun2 also can be used to calculate functional gene redundancies based on 16S data.
There are some other tools that have been developed for estimating the functional capacity of a microbial community based on 16S sequencing data, such as Piphillin [40] and Vikodak [41], and each of them has some distinct features. Whole metagenome sequencing is more expensive than 16S amplicon sequencing. Therefore, functional prediction of microbial communities based on 16S data will be used more frequently, in part due to substantial improvement in the accuracy of these bioinformatics tools. In addition to the tools for one or a few specific utilizations in 16S data analysis, some platforms embed various individual tools, such as the Galaxy server (The Huttenhower Lab; https://huttenhower.sph.harvard.edu/galaxy/), MicrobiomeAnalyst (https://www.microbiomeanalyst.ca/), as well as QIIME 2 (https://qiime2.org/). These platforms allow users to perform a more comprehensive 16S analysis using a single platform.
Gut microbiome data sets are compositional, sparse and high-dimensional, which makes identifying differentially abundant microbial taxa between communities challenging. Widely used software tools optimized for statistical analysis of microbiome data include LEfSe, MaAsLin2, etc. LEfSe discovers biomarkers by way of class comparison, tests of biological consistency and estimation of effect size [42]. MaAsLin2 relies on general linear models to accommodate and determine multivariable associations between microbial data and phenotypes, and offers a variety of methods for data normalization and transformation [43]. SparCC [44] and SPIEC-EASI [45] address the compositionality problem by assuming that few species are correlated, while BAnOCC [46] makes no assumptions about the microbial data. The ilr (isometric log ratio) transform is another approach that controls for false positives by testing for changes in log ratios between abundances and does not assume that few species are correlated [15]. Machine learning approaches, such as random forests, have also been applied to gut microbiome data to separate samples based on their categories, which requires a relatively larger sample size to train the model.
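As a sketch of the log-ratio idea mentioned above, the closely related centered log-ratio (CLR) transform re-expresses each sample's counts relative to their geometric mean, after which conventional statistics can be applied; the counts and the pseudocount choice below are illustrative only.

```python
# Centered log-ratio (CLR) transform of a small count vector (illustrative values).
# A pseudocount is added because zeros are undefined on the log scale.
import math

def clr(counts, pseudocount=0.5):
    shifted = [c + pseudocount for c in counts]
    log_vals = [math.log(c) for c in shifted]
    mean_log = sum(log_vals) / len(log_vals)     # log of the geometric mean
    return [v - mean_log for v in log_vals]

sample_a = [120, 30, 0, 850]     # taxon counts in one sample
print([round(v, 3) for v in clr(sample_a)])
```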
18S rRNA Amplicon Sequencing and Internal Transcribed Spacer (ITS) Sequencing
Previously, researchers have mainly focused on studying the bacterial community in the gut microbiome because bacteria constitute a majority part of the gut microbiome [1,47], but recently more studies are analyzing the fungal community. The human mycobiome diversity is relatively low compared with bacterial communities and is dominated by yeast such as Candida, Saccharomyces and Malassezia [48]. Dysbiosis of intestinal fungi has been observed in various diseases, such as alcohol-associated liver disease [5,49], hepatitis B [6], inflammatory bowel disease [9,[50][51][52], colorectal cancer [13,53], autism spectrum disorders [54], Parkinson's disease [55].
When it comes to molecular identification of fungi, amplicon sequencing based on 18S rRNA and the ITS region are the most widely used methods. Both use PCR to amplify the DNA with specific primers; after sequence processing and analysis, and comparison of the resulting sequences with known databases, the fungal species can be identified [56,57]. 18S rRNA is a basic component of fungal cells comprising both conserved and hypervariable regions. Similar to 16S rRNA, the 18S rRNA gene has nine hypervariable regions. Another commonly used barcoding marker in eukaryotic phylogenetic studies is the ITS region, a 500-700 base pair (bp) nuclear ribosomal DNA sequence [56,58]. The ITS region is further separated into two regions: ITS1 (between 18S and 5.8S) and ITS2 (between 5.8S and 28S), where ITS2 is less taxonomically biased than ITS1 [56,59].
Compared with ITS sequencing, one advantage of 18S rRNA sequencing is that it allows alignment across taxa above the species level; ITS sequencing is not able to do so because of its lack of reference sequences. However, this is also a drawback of 18S rRNA sequencing, because for some species it can only provide information at taxonomic levels above species, whereas ITS sequencing can provide lower-level information at the species and subspecies levels because there is more variation in the ITS1 and ITS2 regions than in the 18S rRNA regions. 18S rRNA sequencing has a relatively large set of references; however, the various lengths of 18S rRNA hinder the alignment of all the different regions across taxa [60][61][62][63]. ITS has a high PCR success rate and a better probability of successful fungal identification over a broader range than all other DNA regions [58]. In terms of application, ITS sequencing focuses more on studying the intraspecific genetic diversity of fungi because ITS is more variable, whereas 18S rRNA puts more emphasis on phylogenetic classification studies of fungi [56]. One way to provide a more comprehensive classification of fungi is the combination of 18S rDNA and ITS sequencing, such as 5.8S-ITS2 [64].
Shotgun Metagenomic and Metatranscriptomic Sequencing
While amplicon-based sequencing methods oftentimes only target a single gene, shotgun metagenomic sequencing is capable of randomly sequencing a sample's entire metagenome without a specific primer, which alleviates biases from primer choices. Compared with marker gene-based community profiling, shotgun metagenomic sequencing adds a detailed layer to the taxonomic characterization of the community by providing information on the gene composition and the functional capacity of the gut microbiome, although it is costlier and more time-consuming than marker gene amplification. With the ability to detect organisms from all domains of life, shotgun metagenomic sequencing still represents the most effective and comprehensive approach for obtaining both structural and functional data. The gene composition can also be used to formulate putative functional pathways. Shotgun metagenomic sequencing has been applied to study the functional changes of the gut microbiome in various diseases, such as inflammatory bowel disease [76], irritable bowel syndrome [77], alcohol-associated liver disease [78,79], nonalcoholic fatty liver disease [80,81], hepatic steatosis [82], Crohn's disease [83,84], melanoma [85], Parkinson's disease [86], high blood pressure [87], and pulmonary tuberculosis [88].
The process of shotgun metagenomic sequencing can be summarized as follows: sample collection and storage, nucleic acid extraction, metagenomic library preparation, quality control, and data analysis. Quality control is the first step in the shotgun metagenomic analysis pipeline (Figure 3), which involves different tools such as Trimmomatic [89], Ktrim [90], Cutadapt [91], and MultiQC [92]. The resulting high-quality reads can be either mapped to reference genomes or assembled with assembly tools. Thus, shotgun metagenomic sequencing analysis can generally be categorized into two approaches: the alignment-based approach and the assembly-based approach. It is often recommended to use both approaches in combination to get the most accurate results [93,94]. The alignment-based approach identifies the taxonomy and functional profiles of sequencing reads by mapping the reads to known microbial reference genomes or searching against databases of characterized protein families with different mappers, such as Bowtie2 [95], DIAMOND [96], BBMap [97], etc. Different marker gene databases and protein-encoding gene databases are available for taxonomic and functional annotation, such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) [98], protein family annotations (PFAM) [99], gene ontologies (GO) [100], clusters of orthologous groups (COG) [101], evolutionary genealogy of genes: Non-supervised Orthologous Groups (eggNOG) [102] and UniProt Reference Clusters (UniRef) [103].
The assembly-based approach reconstructs multiple genomes even if some are yet unknown. This approach depends heavily on genome coverage. The assembly-based approach assembles short reads into contigs, which allow for multiple sequence alignment of reads relative to the consensus sequence, and then groups contigs into scaffolds, which give the order and orientation of the contigs and the size of the gaps between contigs. An important parameter to assess the quality of genome assemblies is N50, which refers to the smallest contig size in the set of contigs that represents at least 50% of the assembly [104]. Metagenomic assemblers generally use graph-based approaches, such as overlap-layout-consensus and de Bruijn graphs, to assemble longer and shorter reads, respectively. Due to the short sequence reads produced by popular sequencing platforms, de Bruijn graph-based assemblers are widely used, such as Meta-IDBA [105], IDBA-UD [106], MetaVelvet [107] and MegaHit [108], etc. Metagenome assemblers are either based on reference genomes for the annotation of microorganisms or based on de novo assembly, which discovers and reconstructs genomes without consulting databases and makes gene prediction more reliable. Generally, in de novo assembly, metagenomic sequences are divided into pre-defined segments of size k (k-mers) which are overlapped to form a network of overlapping paths and then form the contigs iteratively [109]; this is the basis of de Bruijn graphs for short-read assembly [104].
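The N50 definition above lends itself to a short worked example (contig lengths invented): sort the contig lengths in descending order and report the length at which the running total first reaches half of the total assembly size.

```python
# Worked N50 example: the smallest contig length in the set of largest contigs
# that together cover at least 50% of the total assembly (lengths are invented).
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([100, 200, 300, 400, 500]))   # total 1500, half 750 -> 500+400=900 >= 750 -> N50 = 400
```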
The quality of assembly can be assessed by tools such as MetaQUAST [110]. The assembled genomes can be annotated through the gene family identification system in databases. Metagenomic sequence reads can also be mapped to the assembled genomes to estimate their abundance. There are some automated pipelines which integrate different steps into one convenient package, such as MEtaGenome Analyzer (MEGAN) [111], Metagenomic Phylogenetic Analysis (MetaPhlAn) [112], the HMP Unified Metabolic Analysis Network (HUMAnN2) [113], and some online servers such as Metagenomics RAST server (MG-RAST) [114], Integrated Microbial Genomes and Microbiomes (IMG/M) [115] and JCVI Metagenomics Reports (METAREP) [116], which provide an end-to-end solution. Sometimes multiple metagenomic analysis methods may produce variable results even if the same databases are used. Standardization of data processing and analysis is warranted to enable further integration of shotgun metagenomic analysis into the gut microbiome research to enhance the reproducibility and application of the analysis into clinical practice.
Although metagenomics provides access to microbial gene and genome composition and pathways, it has a limited role in revealing gene expression in the microbial community. Shotgun metagenomic sequencing is performed on genomic DNA isolated from biological samples; however, it is hard to distinguish whether this DNA comes from viable or dead cells or whether the genes are expressed under given conditions. Instead, metatranscriptomic sequencing allows scientists to identify whether a microbe is an active member of the microbiome or not, and to identify actively expressed genes in the microbial community to get a deeper understanding of the activity of the genes of interest. Metatranscriptomics complements shotgun metagenomics by elucidating which genes are actively transcribed from the potential repertoire of annotated genes revealed by shotgun metagenomic analysis. Metatranscriptomic sequencing analysis has been used to study microbial RNA-based regulation and expressed biological signatures in several diseases such as inflammatory bowel disease [117] and rheumatoid arthritis [118]. It provides a snapshot of the gene expression profile under specific conditions and at a given moment, instead of its potential as inferred from DNA-based shotgun metagenomic analysis.
The construction of a metatranscriptomic library starts with the isolation of total RNA and the removal of host RNA contamination, which can occur to various degrees, as well as the removal of rRNA with probes targeting certain rRNA regions, followed by cDNA synthesis, adapter ligation and end repair. After that, similar to the process of constructing a shotgun metagenomic library, the cDNA ends are repaired and adapters are ligated, followed by library cleanup, amplification and quantification, and the library is then sequenced on the sequencing platform. Due to the unstable nature and short half-life of RNA, RNA isolation is the most difficult task, especially from some biological samples such as feces. The isolation process must be carefully carried out to avoid RNA degradation by contaminating ribonucleases, and multiple approaches specific to different cell types have been developed [119][120][121][122].
Similar to shotgun metagenomic analysis, comprehensive data analysis suites such as HUMAnN2 and MG-RAST also provide end-to-end solutions for metatranscriptomic analysis; they are combinations of multiple specialized tools, such as Trimmomatic for quality control, Bowtie for mapping, Cuffdiff [123] for differential gene expression, etc. As always, quality control is the first step of metatranscriptomic analysis. An essential process in the quality control step is to filter out non-mRNA reads, in addition to trimming low-quality reads and host reads. The resulting good-quality reads are used for the following analysis, which can be categorized into an alignment-based approach and an assembly-based approach. The alignment-based approach maps the sequencing reads to reference databases. With the assembly-based approach, the sequenced reads are first assembled into contigs and scaffolds, and then mapped to reference genomes. The assembly step is computationally challenging and requires deeper sequencing depth and higher-quality sequencing reads. The assembled transcripts are annotated with software such as Blast2GO [124] by alignment against protein databases, followed by normalization, calculation of relative gene expression levels and statistical analysis.
Viromic Sequencing
Viruses are key constituents of microbial communities and contribute to their evolution and homeostasis. Viromic sequencing has been used to study intestinal viruses in different diseases, including type 1 diabetes [8], inflammatory bowel disease [10,125], alcohol-associated liver disease [126], non-alcoholic fatty liver disease [127], colorectal cancer [128,129], human immunodeficiency virus infection [130], and autoimmune diseases [11]. Because of the highly diverse nature of viruses and the lack of universal marker genes, it is difficult to apply an amplicon-based approach to viruses. Instead, shotgun metagenomic sequencing approaches can be used to characterize viruses and identify novel viruses.
Although in most environments viruses outnumber microbial cells 10:1, viral DNA represents only 0.1% of the total DNA in a microbial community. Isolation of viral particles is the initial step in viromic sequencing, which is necessary to obtain a deep sequence coverage of viruses in the human gut microbiome, and is followed by viral particle purification. Large particles in the fecal samples, such as undigested or partially digested food fragments and microbial cells, are generally removed by serial filtration steps with an osmotically neutral buffer or by ultracentrifugation with a cesium chloride density gradient. The next step is nucleic acid extraction, during which the nucleic acid of the virus must first be isolated so that all fractions of non-viral origin are removed. DNase and RNase are usually used to remove the non-encapsulated nucleic acids. Depending on the type of viruses being studied, the library preparation protocol also varies. For example, bacteriophages are parasitic, so special steps are required when isolating the DNA. For RNA viruses, due to their unstable nature, reverse transcription to cDNA is required. In addition, the virome contains active and silent fractions. For studying both the active and the silent fraction of the virome, total nucleic acid isolation is needed [131]. For the active fraction of the virome, it is often required to use filtration, chemical precipitation or centrifugation to isolate the viral DNA.
The initial analysis of the sequences obtained after DNA sequencing is also quality control, which includes filtering of bad-quality reads and decontamination of 16S rRNA, 18S rRNA and human sequence reads. Viruses have high homology to prokaryotic or eukaryotic genes; therefore, filtering of bad-quality sequences is a key step in viromic analysis. The resulting sequences are analyzed by either an alignment-based approach or an assembly approach. With the alignment-based approach, different mapping algorithms are used to compare the resulting sequence reads against viral genomes and viral databases. Although the databases have expanded recently, the number of genomes deposited in the databases is far smaller than the number of sequenced virotypes, and most sequence reads lack similarity to the sequences in the databases, which are poorly annotated. This lack of sequence identity typically leaves 60%-99% of the sequences in viral metagenomes unassigned [132]. Due to the highly diverse nature of viruses and the lack of similarity to current databases, de novo assembly approaches are often used in viromic analysis [131,133,134]. Different assemblers are used for viral metagenomic data, such as VICUNA [135]. Popular shotgun metagenome assemblers such as MetaVelvet have also been applied to viral metagenome assembly. There are some virome-specific computational pipelines available, such as Metavir [136,137] and the Viral MetaGenome Annotation Pipeline (VMGAP) [138], which generally include open reading frame (ORF)-finding algorithms to predict coding sequences, followed by comparison with different protein databases.
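As a minimal illustration of the ORF-finding step mentioned above (forward strand only, with the usual refinements omitted and an invented sequence), an ORF can be taken as a stretch from a start codon to the next in-frame stop codon:

```python
# Minimal open reading frame (ORF) finder: forward strand only, reports ORFs from
# ATG to the next in-frame stop codon. Real pipelines scan both strands, all six
# frames, and apply length/coding-potential filters. The sequence is invented.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=3):
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j + 3] in STOPS:
                        if (j - i) // 3 >= min_codons:
                            orfs.append((i, j + 3, seq[i:j + 3]))
                        i = j          # resume the scan after this ORF
                        break
            i += 3
    return orfs

print(find_orfs("CCATGAAATTTGGGTAACCC"))   # [(2, 17, 'ATGAAATTTGGGTAA')]
```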
Conclusions
In this review, we have discussed different sequencing-based approaches, which provide useful information toward a better understanding of the role of the gut microbiome in health and disease. When studying the gut microbiome in human populations, such as healthy subjects and patients with diseases, confounding factors which could influence the gut microbiome, such as diet, medication, sex, age and lifestyle, need to be taken into consideration when analyzing the data. For example, the composition of the gut microbiome differs among infants, adults and the elderly, and discrete age ranges should be considered when analyzing the gut microbiota. Stool samples are often used when assessing the gut microbiome as a non-invasive approach. It is noteworthy that the fecal microbiome and the mucosa-associated microbiome cluster differently [139].
A list of examples of widely used tools is summarized in Table 1. For amplicon-based sequencing approaches, including 16S rRNA sequencing, 18S rRNA sequencing and ITS sequencing, the selection of the target region and the design of PCR primers must be performed carefully because of primer biases. Currently, there is no agreement on the optimal regions to be amplified, and most of the time it is a balance between amplifying a discriminative region and characterizing bacteria or fungi more broadly. For shotgun metagenomic sequencing and metatranscriptomic approaches, the turn-around time and costs need to be reduced before they can be introduced into clinical practice. The various sequencing approaches each contribute a single piece towards the complex and large puzzle of the gut microbiome, and the value of an integrative approach is greater than the sum of its parts. In addition to sequencing-based approaches, other -omics approaches such as metaproteomics and metabolomics complement the sequencing data, contributing to the understanding of the function and complex pathways of the gut microbial community. A global integrated approach is of great value to enable a better understanding of the function of the gut microbiome and to move from descriptive studies to causal contributions; however, the budget and sample availability need to be taken into consideration for the integrative approach to be introduced into clinical practice.
DADA2
DADA2 is an ASV-based analysis pipeline for modeling and error-correcting Illumina sequence reads. Pros: High accuracy: able to resolve single nucleotide biological differences. Can perform species-level analysis.
Runtime scales linearly as the sample number increases, with reasonable memory requirements. Cons: Comparably slower denoising algorithm than UPARSE. [26]
UNOISE2
UNOISE2 is an ASV-based tool for denoising (error-correcting) Illumina sequence reads. It is improved from UNOISE and clusters unique reads in the sequence data. Pros: Higher accuracy and speed than DADA2. Cons: Does not use quality scores. [27]
Deblur
Deblur is an ASV-based denoising tool, which uses error profiles to obtain putative error-free sequences. It operates independently on each sample. Pros: Able to obtain single-nucleotide resolution, faster than DADA2, better memory efficiency than DADA2 and UNOISE 2. Better sensitivity and specificity. Cons: Slower than UNOISE 2, limited by read length and sample sequences' diversity.
[28]
QIIME/ QIIME2
QIIME and QIIME2 are bioinformatics platforms for microbial community analysis and visualization. QIIME 2 is engineered based on QIIME and has replaced it. QIIME 2 uses existing bioinformatics tools as subroutines, such as DADA2 and Deblur. Pros: Has multiple interfaces; continues to grow and adapt to novel strategies. Cons: A large number of dependent programs need to be installed. [29,30]
Mothur
Mothur is a software package for analyzing raw sequences and generating visualizations to describe α and β diversity. It is a combination of multiple analytic tools for describing and comparing microbial communities, and it provides examples for data acquired from different sequencing platforms. Pros: Able to perform both ASV-based and OTU-based analysis. Cons: Relatively slow runtime and space complexity. [31]
PICRUSt/ PICRUSt2
PICRUSt is a software tool for predicting functional composition based solely on marker gene sequence profiles. PICRUSt2 is the improved version of PICRUSt, with a larger reference database, enhanced prediction ability and more accurate de novo amplicon tree-building. PICRUSt2: Pros: Able to support novel discoveries. Can process 18S rRNA and ITS sequences, while the original version only supports 16S rRNA sequence analysis. Cons: Can only differentiate taxa at the same level as the amplified marker gene sequence. Can be problematic if the major phyla of the microbial community of interest are not yet well characterized. [35,36]
Tax4Fun/ Tax4Fun2
Tax4Fun is an R package for predicting functional profiles from 16S rRNA data on the basis of SILVA-labeled OTU abundances. Tax4Fun2 is an improved version of Tax4Fun with more accurate and enhanced prediction power. Tax4Fun2: Pros: Easy to use, platform-independent and highly memory-efficient. Tax4Fun2 has higher accuracy than PICRUSt and Tax4Fun. Cons: Availability of suitable reference genomes may limit Tax4Fun2's performance. Only supports prediction from the 16S rRNA gene. [37,39]
Piphillin
Piphillin is a web application that produces metagenome predictions based on nearest-neighbor mapping of 16S rRNA sequences to genomes.
PEMA
PEMA is a software pipeline for metabarcoding analysis based on third-party tools. Its functions include read pre-processing, OTU clustering, ASV inference, taxonomy assignment, and COI marker gene analysis. Pros: Allows partial re-execution. Fast execution time. [68]
IDBA-UD
IDBA-UD is a de novo single-cell and metagenomic assembler, which can assemble sequences with highly uneven depth. It is based on the de Bruijn graph approach. Pros: Implements local assembly. Cons: Sequences of species with high abundance are more likely to be misidentified as repeats. [106]
MetaVelvet
MetaVelvet is a de novo short-sequence metagenome assembler. It extends the Velvet assembler (single-genome and de Bruijn-graph based) to overcome the limitations of a single-genome assembler. Pros: Able to reconstruct scaffold sequences including low-abundance species. Cons: Has slightly higher percentages of chimeric scaffolds. [107]
MegaHit
MegaHit is a de novo assembler for assembling metagenomics data. It implements succinct de Bruijn graphs. Pros: Fast and memory efficient. Available in both CPU-only and GPU-accelerated versions. Cons: Relatively biased towards the assembly of low-abundance genome fragments. [108]
MetaQUAST
MetaQUAST evaluates and compares the quality of metagenome assemblies. It is improved based on QUAST. Its metagenome-specific features include: unlimited number of reference genomes, species content detection, chimera detection, and visualizations. Pros: Can be fed with multiple assemblies. Cons: Reduced precision in order to gain higher time/memory efficiency. [110]
MEGAN
MEGAN is a BLAST-based automated pipeline for taxonomic and functional analysis of metagenomic and metatranscriptomic datasets. Pros: Allows laptop analysis of large metagenomic data sets. [111]
MetaPhlAn/ MetaPhlAn2
MetaPhlAn is an automated pipeline that profiles the microbial composition from shotgun metagenomic data at the species-level. The microbial community it can profile includes bacteria, archaea, eukaryotes and viruses. It accomplishes profiling with unique clade-specific marker genes. MetaPhlAn 2 is extended beyond the first version with enhanced metagenomic taxonomic profiling ability. Pros: Able to work with large-scale metagenome data. [112]
HUMAnN2
HUMAnN2 is an automated pipeline designed for functional analysis of metagenomic and metatranscriptomic data at the species level. The general process of the HUMAnN2 pipeline is identification of known species, alignment of reads to pangenomes, translated search on unclassified reads, and quantification of gene families and pathways. HUMAnN2 utilizes other pipelines such as MetaPhlAn2 to perform identification of known species. Pros: High accuracy, sensitivity, speed. Cons: A large proportion of sequencing reads remain unmapped and unintegrated. [113]
MG-RAST
MG-RAST is a web-based fully automated system for metagenomic analysis. It provides phylogenetic and functional analysis. Pros: Requires reads of only 75 bp or longer for gene prediction or similarity analysis that provides taxonomic binning and functional classification. Able to handle both assembled and unassembled data. Cons: MG-RAST has been optimized for use with the Firefox browser; there are some browser-to-browser issues with visualization of certain diagrams. [114]
IMG/M
IMG/M is a web-based pipeline that provides comparative analysis for metagenomes. It provides structural and functional annotation and prefers assembled contigs. Pros: Integrates all datasets into a single protein-level abstraction. In contrast to MG-RAST, IMG/M includes more computationally expensive tools such as hidden Markov models and BLASTX. Cons: The statistical analysis tool is only available as an on-demand computation to registered IMG users of the Expert Review IMG site. [115]
METAREP
METAREP is a suite of web-based tools to view and compare metagenomic annotated data, including both functional and taxonomic assignments. Pros: Able to handle extremely large datasets. Able to perform comparisons on up to 20+ datasets simultaneously. Cons: No inbuilt annotation workflow; users need to upload existing annotations. [116]
CuffDiff
Cufflinks is a suite of programs that assembles transcriptomes, estimates abundance, and performs gene expression differentiation; CuffDiff is its differential expression component. It implements a parsimony-based algorithm. Pros: High efficiency, sensitivity and precision. Cons: Not optimized for metatranscriptomics analysis. [123]
Blast2GO
Blast2GO is a BLAST-based software that provides automatic functional annotation of DNA/protein sequences. It has multiple annotation styles that can be used under various conditions. Pros: Combines multiple annotation strategies. Strong visualization tools. Cons: Not optimized for large datasets with a large number of genes. [124]
Viromic sequencing data analysis
VICUNA
VICUNA is a de novo assembler targeting viral populations, which have high mutation rates. Its algorithm uses an overlap-layout-consensus approach. The general process of VICUNA is trimming reads, constructing/clustering contigs, validating contigs, and then extending and merging contigs. Pros: Able to efficiently process ultra-deep sequence data. High accuracy and continuity. Cons: Limited accessibility due to its requirement of local computing power. [135]
Metavir/ Metavir2
Metavir is a web-based pipeline specifically for viral metagenome analysis. Metavir 2 is developed from Metavir with additional features such as new tools for assembled virome sequence analysis and new dataset comparison strategies. Pros: User-friendly interface. Able to perform analysis on both raw reads and assembled virome sequences. Cons: Focuses on compositional analysis; functional annotation is lacking. [136,137]
VMGAP
VMGAP is an automated pipeline for functional annotation of viral shotgun metagenomic data. It first performs database searches and then functional assignments. Pros: Uses specialized databases. Cons: Requires local installation of several open-source packages, programs and public databases. | 8,134 | 2021-04-01T00:00:00.000 | [
"Biology",
"Medicine",
"Computer Science"
] |
Flavour and CP violation in supersymmetry
After the shutdown of the large electron-positron collider at CERN, the search for CP violation and flavour changing neutral current (FCNC) phenomena acquired a privileged role in our quest for new physics beyond the electroweak standard model (SM). Most extensions of the SM exhibit new sources of CP violation and FCNC. In particular, the minimal supersymmetric extension of the SM presents several new phases and flavour structures in addition to those already present in the SM. Therefore, supersymmetry may have a good chance to manifest some departure from the SM in this particularly challenging class of rare phenomena. On the other hand, it is also apparent that CP violation and FCNC generally represent a major constraint on any attempt at model building beyond the SM. In this work, we review the status of FCNC and CP violation in supersymmetric extensions of the SM and discuss the possibilities of these indirect searches in the quest for supersymmetry.
Introduction
High-energy physics at the end of the 20th century has been strongly characterized by the impressive success of the standard model (SM) of the electroweak interactions as the correct description of fundamental interactions up to energies of O(100 GeV). This tremendous achievement has been possible thanks to the efforts of experimentalists and theorists all over the world and especially to the experiments at the large electron-positron collider (LEP) at CERN. Having emphasized this bright side of the last two decades, one must also admit that the experimental and theoretical particle physics of these years has been rather unsuccessful in finding a road to follow beyond the SM. If we are all convinced that the SM cannot be the ultimate theory of 'everything' (if only because it does not include gravity), we are in the dark about when, where and how new physics beyond the SM should manifest itself. First, searches for CP violation have recently yielded measurements of the CP asymmetry in the decays of B into J/ψK s : a J/ψ = 0.59 ± 0.14 ± 0.05 at BaBar [3], a J/ψ = 0.99 ± 0.14 ± 0.06 at BELLE [4] and a J/ψ = 0.79 +0.41 −0.44 at CDF [5]. Second, from the theoretical point of view, it is important to emphasize that new physics beyond the SM generically introduces new sources of CP violation in addition to the usual CKM phase of the SM. Indeed, it is a common experience of model builders that if one tries to extend the SM with some low-energy new physics one must somehow control the proliferation of new CP violating contributions. Significant portions of the parameter spaces of new physics models can generally be ruled out by the severe constraints imposed by CP violating phenomena [7]. Even in the really minimal supersymmetric extension of the SM, that passes all the FCNC tests unscathed, one still faces severe problems in matching the experimental results concerning the constraints from EDMs.
The third reason which makes us optimistic in having new physics playing a major role in CP violation concerns the matter-antimatter asymmetry in the universe. Starting from a baryon-antibaryon symmetric universe, the SM is unable to account for the observed baryon asymmetry. The presence of new CP violating contributions beyond the SM looks crucial to produce an efficient mechanism for the generation of a satisfactory ∆B asymmetry [6] †.
The aim of this paper is then to review the current status of FCNC and CP violation in supersymmetry and to discuss the above points with the goal of exploring the potentialities of these observables in our quest for low-energy SUSY [8]. We emphasize that low-energy SUSY does not denote a well defined model; rather, it includes a variety of SM extensions (with a variety of phenomenological implications). We characterize this huge class of models according to their main features in relation to CP violation. In the following, we always stick to the minimal supersymmetric model; we introduce the minimal number of superfields that are strictly demanded to obtain a viable supersymmetrization of the SM. This means that each particle will be accompanied by a superpartner, except in the Higgs sector where we have to introduce a second Higgs doublet in addition to the usual SM Higgs doublet. A second important limitation in the class of SUSY models that we consider here concerns the imposition of R parity. This discrete symmetry is usually added to the gauge and super-symmetries in order to prevent excessive baryon and lepton number violations. Even in minimal supersymmetric versions of the SM (MSSM), where the minimal number of superfields is introduced and R parity is imposed, one is still left with more than 100 free parameters, almost half of them given by CP violating phases [9]. Fortunately most of this huge parameter space is already phenomenologically ruled out. Indeed, FCNC and CP violating processes play a major role in drastically reducing the parameter space. Obviously it is difficult to make phenomenological predictions with so many free parameters, and so through the years many further theoretical restrictions have been envisaged for the MSSM class. The most drastic reduction of the SUSY parameter space leads to what is called the constrained MSSM (CMSSM) or minimal supergravity [8]. In the absence of phases, this model is characterized by only four parameters plus the sign of a fifth parameter.
In any MSSM, at least two new 'genuine' SUSY CP violating phases are present. They originate from the SUSY parameters µ, M , A and B. The first of these parameters is the dimensionful coefficient of the H u H d term of the superpotential. The remaining three parameters are present in the sector that softly breaks the N = 1 global SUSY. M denotes the common value of the gaugino masses, A is the trilinear scalar coupling, while B denotes the bilinear scalar coupling. In our notation, all these three parameters are dimensionful. Two combinations of the phases of these four parameters are physical [10]. We use here the commonly adopted choice in which also arg(Bµ) = 0, i.e. ϕ µ = −ϕ B . The main constraints on ϕ A and ϕ B come from their contribution to the EDMs of the neutron and of the electron. For instance, the effect of ϕ A and ϕ B on the electric and chromoelectric dipole moments of the light quarks (u, d, s) leads to a contribution to d e N of order 2 (100 GeV/m) 2 sin ϕ A,B × 10 −23 e cm [11], where m here denotes a common mass for squarks and gluinos. The present experimental bound, d e N < 1.1 × 10 −25 e cm, implies that ϕ A,B should be <10 −2 , unless one pushes SUSY masses up to O(1 TeV).
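As a rough illustration of the numbers quoted above, the sketch below inverts the order-of-magnitude scaling of the neutron EDM with the common SUSY mass to show how the allowed phase grows as the mass scale is raised. The numerical prefactor is only an order-of-magnitude estimate, so the derived phase limits are indicative rather than precise.

```python
# Order-of-magnitude sketch of the EDM constraint quoted above. The scaling
# d_N ~ 2e-23 * (100 GeV / m)^2 * sin(phi) e cm is an approximate one-loop
# normalization; only the experimental bound and the qualitative conclusion
# (phi < ~1e-2 for light SUSY masses, O(1) phases near 1 TeV) track the text.
D_N_BOUND = 1.1e-25          # experimental neutron EDM bound, in e cm
PREFACTOR = 2.0e-23          # assumed one-loop normalization, in e cm

def max_phase(m_susy_gev):
    """Largest sin(phi_{A,B}) compatible with the EDM bound for a common SUSY mass."""
    sin_phi = D_N_BOUND / (PREFACTOR * (100.0 / m_susy_gev) ** 2)
    return min(sin_phi, 1.0)

for m in (100.0, 300.0, 500.0, 1000.0):
    print(f"m_susy = {m:6.0f} GeV  ->  sin(phi) < {max_phase(m):.3g}")
```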
In view of the previous considerations, most authors dealing with the MSSM prefer to simply put ϕ A and ϕ B equal to zero. Actually, one may argue in favour of this choice by considering the soft-breaking sector of the MSSM as resulting from SUSY breaking mechanisms which force ϕ A and ϕ B to vanish. For instance, it is conceivable that both A and M originate from one and the same source of U(1) R breaking. Since ϕ A 'measures' the relative phase of A and M , in this case it would 'naturally' vanish. In some specific models, it has been shown [12] that through an analogous mechanism ϕ B may also vanish.
If ϕ A = ϕ B = 0, then the novelty of SUSY in CP violating contributions merely arises from the presence of the CKM phase in loops where SUSY particles run [10,13]. The crucial point is that the usual GIM suppression, which plays a major role in evaluating ε K and ε /ε in the SM, is replaced in the MSSM case (or more exactly in the CMSSM) by a super-GIM cancellation which has the same 'power' of suppression as the original GIM. Again, also in the CMSSM, as is the case in the SM, the smallness of ε K and ε /ε is guaranteed not by the smallness of δ CKM , but rather by the small CKM angles and/or small Yukawa couplings. By the same token, we do not expect any significant departure of the CMSSM from the SM predictions also concerning CP violation in B physics. As a matter of fact, given the large lower bounds on squark and gluino masses, one expects relatively tiny contributions of the SUSY loops in ε K or ε /ε in comparison with the normal W loops of the SM. Several analyses in the literature tackle the above question or, to be more precise, the more general problem of the effect of a light stop t̃ and chargino χ + on FCNC processes. In this case sizeable contributions can still occur. The generic situation concerning CP violation in the MSSM case with ϕ A = ϕ B = 0 and exact universality in the soft-breaking sector can be summarized in the following way: the MSSM does not lead to any significant deviation from the SM expectation for CP violating phenomena such as d e N , ε K , ε /ε and CP violation in B physics; the only exception to this statement concerns a small portion of the MSSM parameter space where a very light stop (mt < 100 GeV) and chargino (m χ ∼ 90 GeV) are present. In this latter particular situation, sizeable SUSY contributions to ε K are possible and, consequently, major restrictions in the ρ-η plane can be inferred. Obviously, CP violation in B physics becomes a crucial test for this MSSM case with a very light stop and chargino. Interestingly enough, such low values of SUSY masses are at the border of the detectability region at LEP II.
On the other hand, in recent years, the attitude towards the EDM problem in SUSY and the consequent suppression of the SUSY phases has significantly changed. Indeed, options have been envisaged allowing for a conveniently suppressed SUSY contribution to the EDM even in the presence of large (sometimes maximal) SUSY phases. Methods of suppressing the EDMs consist of cancellation of various SUSY contributions among themselves [14], nonuniversality of the soft-breaking parameters at the unification scale [15] and approximately degenerate heavy sfermions for the first two generations [16] †. In the presence of one of these mechanisms, large supersymmetric phases are expected, yet EDMs should generally be close to the experimental bounds. In the next section, we focus on flavour changing CP violation in SUSY with nonvanishing SUSY phases, both without and with new flavour structures. Then in section 3 we present our conclusions and outlook.
Flavour changing CP violation
Despite the large sensitivity of EDMs to the presence of new phases, so far only neutral meson systems, K 0 -K 0 or B 0 -B 0 , show measurable effects of CP violation. This fact is, at first sight, surprising because in the neutral mesons CP violation is associated with a change in flavour and hence is CKM suppressed, whereas EDMs are completely independent of flavour mixing. The reason for this is that, in the SM, CP violation is intimately related to flavour, to the extent that observable CP violation requires not only a phase in the CKM mixing matrix, but also three nondegenerate families of quarks [18]. As shown in the previous section, the supersymmetrized SM contains new sources of CP violation, both flavour independent and flavour dependent. Although the new phases are, in principle, strongly constrained by the EDM experimental limits, we have seen that several mechanisms allow us to satisfy these constraints with large supersymmetric phases. Next, we analyse this possibility and the effects on flavour changing CP violation observables. We first concentrate on an MSSM with a flavour-blind SUSY breaking, and then we study a general MSSM in which the soft-breaking terms include all kinds of new flavour structures.
Flavour-blind SUSY breaking and CP violation
The first step in our review of supersymmetric CP violation is the analysis of an MSSM with flavour-blind SUSY breaking. Flavour blind refers to a softly broken supersymmetric SM in which the soft-breaking terms do not introduce any new flavour structure beyond the Yukawa matrices, whose presence in the superpotential is required to reproduce correctly the fermion masses and mixing angles. Supersymmetry is broken at a large scale, which we identify with M GUT , and from here the parameters evolve with the standard MSSM renormalization group equations (RGE) [19,20] down to the electroweak scale. In these conditions the most general allowed structure of the soft-breaking terms at M GUT is such that all the allowed phases are explicitly written except possible phases in the Yukawa matrices that give rise to an observable phase in the CKM matrix, δ CKM .
† In this case, two-loop contributions can become dominant and must be taken into account to constrain SUSY phases, especially in the large-tan β regime [17].
It is important to emphasize
that, in this flavour-blind MSSM, δ CKM is the only physical phase in the Yukawa matrices and all other phases in Y U and Y D can be rephased away in the same way as in the SM. The absence of flavour structure in the scalar sector means that quarks and squarks can be rotated parallel already at the GUT scale and hence only δ CKM survives. This is not true in the presence of new flavour structures, where additional Yukawa phases cannot be rephased away from quark-squark couplings [21]. Furthermore, we also assume unification of gaugino masses at M GUT and the universal gaugino mass can always be taken as real.
The soft-breaking term structure in equation (3) includes, as the simplest example, the CMSSM, where all scalar masses and A-terms are universal and the number of parameters is reduced to six real parameters once we require radiative symmetry breaking [20,22,23]. More general soft-breaking terms in the absence of new flavour structures can arise in GUT models [24]. For instance, in an SU(5) model, we expect common masses for the particles in the 5̄ and in the 10 multiplets, and, in general, different masses for the two Higgses. The new parameters in the soft-breaking sector would then be enlarged accordingly. We take this structure as a representative example of equation (3), since it already shares all the relevant features. In any case, although the number of parameters is significantly increased with respect to the CMSSM, it can still be managed and a full RGE evolution and analysis of the low-energy spectrum is possible.
In this framework, we consider SUSY effects on flavour changing CP violation and, in particular, the CP asymmetry in the b → sγ decay, ε K and B 0 CP asymmetries. However, we include also two CP conserving observables that are relevant in the fit of the unitarity triangle, namely ∆M B d and ∆M Bs . All these processes receive two qualitatively different supersymmetric contributions. As shown in the previous section, supersymmetry introduces new CP violation phases that can strongly modify these observables through their effects in SUSY loops. On the other hand, even with vanishing SUSY phases, the presence of the CKM phase in loops containing SUSY particles induces new contributions that modify the SM predictions for these observables.
Concerning the first possibility, we consider the following extreme situation: we analyse the effects of both φ µ and a flavour-independent φ A in flavour changing CP violation experiments, ignoring completely (as a first step) the EDM bounds. The result looks rather surprising at first sight: in the absence of the CKM phase, a general MSSM with all possible phases in the soft-breaking terms, but no new flavour structure beyond the usual Yukawa matrices, can never give a sizeable contribution to ε K , ε /ε or hadronic B 0 CP asymmetries [26]. In other words, the effects of SUSY phases in a flavour-blind MSSM are restricted in practice to LR transitions, such as the EDM or the CP asymmetry in the b → sγ decay, and the effects in observables with dominant chirality conserving contributions are negligible even with maximal SUSY phases.
Accordingly, the most interesting CP violation observable in these conditions is probably the CP asymmetry in the b → sγ decay. However, we must take into account that the branching ratio itself is a very strong constraint in any SUSY model [27]. The CP asymmetry is defined in terms of the Wilson coefficients C i of the current-current, magnetic and chromomagnetic dipole operators [28]. This asymmetry is predicted to be below 1% in the SM [28,29]. On the other hand, we have seen that the new SUSY phases can modify the b → sγ transition significantly. In fact, several studies showed that the MSSM in the presence of large φ µ and φ A can enhance the CP asymmetry up to 15% [25], [30]- [33], which could be easily accessible at B factories. In any case, it is important to remember that this scenario is viable only if some mechanism reduces the SUSY contributions to the EDM. In the case of the CMSSM or a flavour-blind MSSM with possible EDM cancellations this analysis was repeated in [25,32,33] and the asymmetry can reach at most a few per cent. Figure 1 shows that without EDM constraints (open grey circles) the asymmetry can be above 5% at any value of the branching ratio and can even reach 13% for low branching ratios. In figure 1 the points of the parameter space that fulfil EDM constraints are represented by black dots. The effect on the CP asymmetry can be sizeable at low values of the branching ratio [25], but for larger values of the branching ratio the asymmetry is again around 1%. In this regard, it is important to keep in mind that the phase of the µ term is, in any case, constrained to be φ µ ≲ 0.2 for scalar masses m 0 ≲ 500 GeV [25]. On the other hand, the phases of the A terms are basically unconstrained by EDM in this scheme [25,34].
A more complete analysis under these conditions was made in [25], where the flavour-blind conditions are specified at a large scale M GUT and the standard MSSM RGEs [19,35] are used to evolve the initial conditions down to the electroweak scale. Two representative examples of flavour-blind MSSM were considered, the CMSSM as the simplest model and the SU(5)-inspired model defined above. Here, we are mainly interested in the following part of the low-energy spectrum: χ + , H + and t̃. Their masses are evolved to M W and then all the relevant experimental constraints are imposed.
• Absence of charge and color breaking minima and directions unbounded from below [36].
• Neutralino as the lightest supersymmetric particle.
In this way, the complete supersymmetric spectrum at the electroweak scale is obtained in terms of six or 11 parameters in the CMSSM or SU(5)-inspired model respectively. Within an MSSM scenario, this kind of analysis was first made in the work of Bertolini et al [20] and has been updated several times since then [39]. We follow Bartl et al [25], who developed a specialized study of the spectrum relevant for FCNC and CP violation experiments. Indeed, the most interesting point of this work is the strong correlation among different SUSY masses that have a strong impact on low-energy FCNC and CP violation studies. Figure 2 shows scatter plots of the mass of the lightest chargino versus the lightest stop mass. In these plots we vary the scalar and gaugino masses at M GUT as 100 GeV < m i < 1000 GeV, the trilinear terms as 0 < |A i | 2 < m 2 H + m 2 q L + m 2 q R with arbitrary phases, and 2 < tan β < 50. It is interesting to notice in this plot the very strong correlation between the chargino and stop masses. In fact, this correlation can be easily understood with the help of the one-loop RGE [19]. Neglecting for the moment the so-called D-terms and the small radiatively generated intergenerational squark mixing, we obtain the stop masses, equation (5), in terms of the soft parameters at the electroweak scale. Thanks to the proximity of the top quark mass to its quasi-fixed point and the relative smallness of µ cot β for tan β ≥ 2.5, we can express equation (5) as a function of the initial parameters at M GUT with only a small variation of the coefficients with tan β. In the CMSSM case the stop masses are then fixed by m 0 and m 1/2 . Moreover, in the CMSSM |µ| ≈ √3 m 1/2 is always larger than M 2 ≈ 0.81 m 1/2 and, hence, the lightest chargino is predominantly gaugino. Then we can trade the initial gaugino mass for the lightest chargino mass. From here, we obtain, for 100 GeV < m 0 < 1 TeV and with m χ = 100 GeV, a maximal allowed range for the lightest stop mass of 230 GeV ≲ mt 1 ≲ 660 GeV. As figure 2 shows, this correlation is maintained for larger chargino masses. In the case of SU(5), the main difference is the fact that the Higgs masses are not tied to the other scalar masses and now may be quite different. This has important effects on the radiative symmetry breaking and, in fact, lower values of µ are possible such that the lightest chargino can have a predominant higgsino component. In the rare scenarios where |µ| < M 2 , the stop masses are somewhat lower than for the CMSSM case. In any case, a similar correlation is still maintained. We must emphasize that, due to gluino dominance in the soft-term evolution, this kind of correlation is general in any RGE-evolved MSSM from some GUT initial conditions, assuming that gaugino masses unify as well.
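A small numerical sketch can illustrate why the lightest chargino is predominantly gaugino under the approximate CMSSM relations quoted above (M 2 ≈ 0.81 m 1/2 and |µ| ≈ √3 m 1/2). The code below diagonalizes the standard tree-level chargino mass matrix for a few illustrative values of m 1/2; the chosen tan β and m 1/2 values are assumptions, and SUSY phases are set to zero for simplicity.

```python
# Sketch of the lightest-chargino mass under the approximate CMSSM relations
# quoted in the text: M_2 ~ 0.81 m_1/2 and |mu| ~ sqrt(3) m_1/2. The chargino
# mass matrix is the standard tree-level one; sample inputs are illustrative.
import numpy as np

M_W = 80.4  # W boson mass in GeV

def chargino_masses(m_half, tan_beta):
    """Return (lightest, heaviest) chargino masses for given m_1/2 and tan(beta)."""
    m2 = 0.81 * m_half
    mu = np.sqrt(3.0) * m_half
    beta = np.arctan(tan_beta)
    mass_matrix = np.array([
        [m2, np.sqrt(2.0) * M_W * np.sin(beta)],
        [np.sqrt(2.0) * M_W * np.cos(beta), mu],
    ])
    return np.sort(np.linalg.svd(mass_matrix, compute_uv=False))

for m_half in (150.0, 250.0, 400.0):
    light, heavy = chargino_masses(m_half, tan_beta=10.0)
    print(f"m_1/2 = {m_half:5.0f} GeV : m_chi1 = {light:6.1f} GeV "
          f"(M_2 = {0.81 * m_half:5.1f} GeV), m_chi2 = {heavy:6.1f} GeV")
```

For moderate tan β the lightest eigenvalue stays close to M 2, confirming the gaugino-dominated composition assumed in the argument above.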
In a similar way, we can discuss the charged Higgs-boson mass. The main features here are the fact that, for low tan β, the masses of the charged Higgs boson are above 400 GeV, and that, especially in the CMSSM case, most of the light Higgs-boson masses are eliminated by the b → sγ constraint. For larger values of tan β slightly lighter masses are allowed, but it is still true that we seldom find charged Higgs masses below 300 GeV in any case. The reason for this is again the gluino dominance in the RGE. For instance, at tan β = 5 the charged Higgs mass is m 2 H+ ≈ 1.23 m 2 0 + 3.31 m 2 1/2 and at tan β = 30, m 2 H+ ≈ 0.72 m 2 0 + 1.98 m 2 1/2 , from the one-loop RGE [25].
Taking into account the relevant features of the MSSM spectrum discussed above, we can discuss the SUSY contribution in a flavour-blind scenario to the different CP violating observables. First, the b → sγ CP asymmetry has already been discussed in the presence of large supersymmetric phases that survive the EDM constraints through a cancellation mechanism. In this case, the asymmetry could reach a few per cent; however, with vanishing SUSY phases we again obtain an asymmetry in the range of the SM value, well below 1% [28,29]. A similar situation is found in ε /ε, where the SUSY contributions tend to lower the SM prediction [40,41].
Second, the ∆F = 2 observables, i.e. ε K and B 0 -B 0 mixing, which play a fundamental role in the unitarity triangle fit, are also modified by new SUSY contributions. Taking into account the new SUSY contributions, the SM fit of the unitarity triangle is modified and one obtains different restrictions on the ρ and η parameters of the CKM matrix. Moreover this fit has to be compatible with the new direct measurements of the B 0 CP asymmetries [3,4]. As explained elsewhere [25], given that the SUSY contributions tend to interfere constructively with the SM with a factorized CKM dependence, this implies that for a given SUSY contribution the values of η required to saturate ε K are now smaller. The value of |V td V * tb | required to saturate ∆M B d is analogously decreased. Hence, it is evident that the addition of SUSY tends to lower the values of η and increase the values of ρ in the fit, therefore reducing the actual value of β. However, as shown by Buras and Buras [42], in any minimal flavour violation model at the electroweak scale, a strong correlation exists between the ∆F = 2 contributions to ε K and ∆M B d , and this allows only a small departure of sin 2β from the SM prediction. Still, different values of α and γ are, in principle, allowed. Nevertheless, as shown above, the relative heaviness [25,43] of the SUSY spectrum implies that the deviation from the SM fit in these models tends to be small for these angles. In summary, a flavour-blind MSSM cannot generate large deviations from the SM expectations in the B 0 CP asymmetries [3,4].
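The statement that lowering η and raising ρ reduces β can be made explicit with the textbook relation tan β = η/(1 − ρ) for the unitarity-triangle angle. The sketch below evaluates sin 2β for a few illustrative (ρ, η) pairs; these pairs are not fit results from this review, they only show the direction of the effect.

```python
# Sketch of how the unitarity-triangle angle beta depends on the Wolfenstein
# parameters: tan(beta) = eta / (1 - rho). Smaller eta and larger rho give a
# smaller sin(2*beta). The sample (rho, eta) values are purely illustrative.
import math

def sin_2beta(rho, eta):
    beta = math.atan2(eta, 1.0 - rho)
    return math.sin(2.0 * beta)

for rho, eta in [(0.20, 0.35), (0.25, 0.30), (0.30, 0.25)]:
    print(f"rho = {rho:.2f}, eta = {eta:.2f}  ->  sin(2*beta) = {sin_2beta(rho, eta):.2f}")
```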
CP violation in the presence of new flavour structures
Flavour universality of the soft SUSY breaking is a strong assumption and is known not to be true in many supergravity and string-inspired models [44]- [47]. In these models, a nontrivial flavour structure in the squark mass matrices or trilinear terms is generically obtained at the supersymmetry breaking scale. Hence, sizeable flavour off-diagonal entries appear in the squark mass matrices, and new FCNC and CP violation effects can be expected. In fact, most of these flavour off-diagonal entries are severely constrained or even ruled out by low-energy FCNC and CP violation observables. A very convenient parametrization of the SUSY effects in these rare processes is the so-called mass insertion approximation [48]. It is defined in the super CKM (SCKM) basis at the electroweak scale, where all the couplings of sfermions to neutral gauginos are flavour diagonal. In this basis, the sfermion mass matrices are not diagonal. The sfermion propagators, now flavour off-diagonal, can be expanded as a series in terms of δ = ∆/m 2 , where ∆ denotes the off-diagonal terms in the sfermion mass matrices, with m 2 the square of an average sfermion mass [49]. As a result, FCNC and CP violation constraints can be expressed as model-independent upper bounds on these mass insertions at the electroweak scale and they can readily be compared with the corresponding mass insertions calculated in a well defined SUSY model.
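The following sketch spells out the normalization step behind the mass insertion approximation described above: an off-diagonal entry ∆ of a squark mass-squared matrix in the SCKM basis is divided by the square of an average squark mass, and the resulting δ can be compared with published bounds. The numerical values are purely illustrative.

```python
# Sketch of the mass insertion parametrization: delta = Delta / m_sq^2, where
# Delta is a flavour off-diagonal entry of the squark mass-squared matrix and
# m_sq is an average squark mass. All numbers below are illustrative.
def mass_insertion(delta_m2_offdiag_gev2, avg_squark_mass_gev):
    """delta = off-diagonal mass^2 entry divided by the average squark mass squared."""
    return delta_m2_offdiag_gev2 / avg_squark_mass_gev ** 2

avg_mass = 500.0                              # average squark mass in GeV
delta_12 = mass_insertion(1.0e4, avg_mass)    # e.g. an off-diagonal entry of (100 GeV)^2
bound_12 = 0.0032                             # a bound of the size quoted later in the text
print(f"(delta)_12 = {delta_12:.4f}  ->  {'allowed' if delta_12 < bound_12 else 'excluded'}")
```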
A complete analysis of this kind was performed (see [49,50] for a more complete discussion) and the constraints from ∆S = 2 processes were later updated [51]. In the following, we present the phenomenological constraints from [49,51].
The main constraints on CP violating mass insertions (with nonvanishing phases) come from ε K , ε /ε and EDMs, although 'indirect' constraints from b → sγ and B-B mixing are also relevant. These constraints are presented in tables 1-5. It is important to emphasize the strong sensitivity of ε /ε to (δ LR ) 12 and of ε K to (δ LL ) 12 , which implies that it is difficult to saturate both simultaneously with a single mass insertion [52,53].
What message should we draw from the constraints in tables 1-5? First, it is apparent that FCNC and especially CP violating processes represent a significant test for SUSY extensions of the SM. Taking arbitrary sfermion mass matrices completely unrelated to the fermion mass matrices would lead to mass insertions of order unity. Consequently, the first conclusion we draw from the small numbers in tables 1-5 is that there must be some close relation between the flavour structures of the sfermion and fermion sectors. Large portions of the parameter spaces of minimal SUSY models are completely ruled out thanks to the severity of the FCNC and CP constraints.
Table 1. Limits on (δ d AB ) 12 , with A, B = (L, R, LR), including next-to-leading-order QCD corrections and lattice B parameters [51], for an average squark mass mq = 500 GeV.
Table 2. Limits from ε /ε < 2.7 × 10 −3 on Im(δ d 12 ), for an average squark mass mq = 500 GeV.
Table 3. Limits on Im(δ LR ) 11 from EDMs, for mq = 500 GeV and ml = 100 GeV.
However, then an even more important question emerges after one studies tables 1-5: given the strong constraints from FCNC and CP violating processes that we have already observed, can we still hope to see SUSY signals in other rare processes? In particular, restricting this question to CP violation, can we still hope to find a significant disagreement with the SM expectations when we measure CP violation in various B decay channels? Fortunately for us, the answer to this last question is yes. For example, it has been shown [54] that considering the CP asymmetry in several B decay channels, which in the SM would give just the same answer (the angle β of the unitarity triangle), it is possible to obtain different values when SUSY effects are switched on. SUSY contributions to some of the decay amplitudes can be as high as 70% with respect to the SM contribution, whereas other decay channels are not affected at all by the SUSY presence. Hence, assuming large CP violating phases in SUSY, one could find discrepancies with the SM expectations that are larger than any reasonable theoretical hadronic uncertainty in the SM computation. We refer the interested reader to [54] for a detailed discussion. It is worth emphasizing that the above example shows that there is still room for sizeable SUSY signals in CP violating processes, but this represents some kind of 'maximal hope' of what we can expect from SUSY. In other words, one takes the maximally allowed values of relevant δs to maximize the possible SUSY deviations from the SM on CP observables. A different question is what we can 'typically' expect in a SUSY model. As we stressed in the introduction, no 'typical' SUSY model exists; what we call low-energy SUSY represents a vast class of models. Yet it makes sense to try to identify some features of minimal SUSY models where no drastic departures from flavour universality are taken and to consider in this more restricted context what we can expect.
In the following, we analyse a 'realistic' nonuniversal MSSM, and compute the 'reasonable' expectations for the different mass insertions in this context. In the first place, we define our generic MSSM through a set of four general conditions.
(1) Minimal particle content. We consider the MSSM, with no additional particles from M W to M GUT .
(2) Arbitrary soft-breaking terms O(m 3/2 ). The supersymmetry soft-breaking terms as given at the scale M GUT have a completely general flavour structure, but all of them are of the order of a single scale, m 3/2 .
(3) Trilinear couplings originate from Yukawa couplings. Although trilinear couplings are a completely new flavour structure, they are related to the Yukawas in the usual way.
(4) Gauge coupling and gaugino unification at M GUT and RGE evolution of the different parameters from that scale.
In this framework, any particular MSSM is completely defined once we specify the soft-breaking terms at M GUT . We specify these soft-breaking terms in the basis in which all the squark mass matrices, M 2 Q , M 2 U , M 2 D , are diagonal. In this basis, the Yukawa matrices are expressed in terms of M d and M u , the diagonal quark mass matrices, K, the CKM mixing matrix, and K D L , K U R , K D R , unknown, completely general, 3 × 3 unitary matrices.
Although our analysis is completely general within this scenario [55], we prefer to discuss a concrete example based on type-I [47] string theory (see [56] for a definition) †. In this particular example, gaugino masses, right-handed squark masses and trilinear terms are nonuniversal. Gaugino masses and A-terms are given in terms of the goldstino angles defined below, with different combinations entering the trilinear terms associated with the first-, second- and third-generation right-handed squarks respectively. Here m 3/2 is the gravitino mass, α S and α i are the CP phases of the F terms of the dilaton field S and the three moduli fields T i , and θ and Θ i are goldstino angles satisfying a normalization constraint [57]. In addition, universal soft scalar masses for quark doublets and the Higgs fields are obtained. Finally, the soft scalar masses for quark singlets are nonuniversal, with T i = (Θ 2 3 , Θ 2 2 , Θ 2 1 ). To complete the definition of the model, we also need to specify the Yukawa textures. The only available experimental information is the CKM mixing matrix and the quark masses. We choose our Yukawa texture following two simple assumptions: (a) the CKM mixing matrix originates from the down-type Yukawa couplings (as done in [58]) and (b) our Yukawa matrices are Hermitian [59]. With these two assumptions we obtain K D L = K and K U L = 1. However, it is important to emphasize that, given that K D L and K U L now measure the flavour misalignment between quarks and squarks, and that we have already used the rephasing invariance of the quarks to make K CKM real, we can expect new observable (unremovable) phases in the quark-squark mixings, and in particular in the first two-generation sector. Here A and ρ = |ρ + iη| are the usual parameters in the Wolfenstein parametrization, both O(1), and the mixing is specified to O(λ 4 ). We must emphasize here that the observable phase in the CKM mixing matrix corresponds to the combination δ CKM = β − α − γ; hence it is transparent that we can have a vanishing δ CKM while being left with large observable phases in the SUSY sector [21]. Hence, the Yukawa matrices are completely fixed. It is important to remember that this is the simplest structure consistent with all phenomenological constraints.
4.14
Now, the next step is to use the MSSM RGEs [19,20] to evolve these matrices down to the electroweak scale. The main RGE effects from M GUT to M W are those associated with the gluino mass and the large third-generation Yukawa couplings. Regarding squark mass matrices, it is well known that the diagonal elements receive important RGE contributions proportional to the gluino mass that dilute the mass eigenstate nondegeneracy [19,20,23]. Similarly, gaugino effects in the trilinear RGE are always proportional to the Yukawa matrices, not to the trilinear matrices themselves, and so they are always diagonal to extremely good approximation in the SCKM basis. Once more, the off-diagonal elements will be approximately given by their initial values at M GUT . Hence, in our example defined in equations (9)-(14), we have LR and RR off-diagonal mass insertions, which can be estimated with equations (15) and (16). Equation (15) reveals an important feature of the LR mass insertions. Because of the trilinear term structure in generic models of soft breaking, the LR sfermion matrices are always suppressed by m q i /mq, with m q i the mass of one of the quarks involved in the coupling and mq the average squark mass [57]. In any case, this suppression is necessary to avoid charge and colour breaking and directions unbounded from below [36]. We can easily estimate the different mass insertions with these formulae. First we must take into account that, owing to the gluino dominance in the squark eigenstates at M W , m 2 q (M W ) ≈ 6m 2 g (M GUT ). In the kaon system, we can neglect m d ; replacing the values of masses and mixings in equations (9)-(14), we obtain the corresponding numerical estimate, where we have used θ ≈ 0.7 as in [56]. Comparing this value with the bounds in table 2, we see that it could indeed give a very sizeable contribution to ε /ε [56,58,60]. The phases α 2 and α 1 are actually unconstrained by EDM experiments, as emphasized in [56] †. This important result means that even if the relative quark-squark flavour misalignment is absent and the only flavour mixing is provided by the usual CKM matrix, i.e. K D L = K CKM , the presence of nonuniversal flavour-diagonal trilinear terms is enough to generate large FCNC effects in the kaon system. Similarly, in the neutral B system, (δ d LR ) 13 contributes to the B d −B d mixing parameter, ∆M B d . However, in our minimal scenario, K D L ≈ K, we obtain a value clearly too small to generate sizeable b-d transitions, as the bounds in table 5 show. Notice that larger effects are still possible in a more 'exotic' scenario with a large mixing in K D L 13 . For instance, with a maximal value, |K D L 13 K D L * 33 | = 1/2, we would obtain (δ d LR ) 13 ≈ 2 × 10 −3 · (100 GeV/m 3/2 ). Even in this limiting situation, this result is roughly one order of magnitude too small to saturate ∆M B d , though it could still be observed through the CP asymmetries. Hence in the B system we reach a very different result: it is not enough to have nonuniversal trilinear terms; large flavour misalignment among quarks and squarks is also required.
A similar analysis can be made with the chirality conserving mass insertions. From equation (16), in the kaon system, we obtain the corresponding mass insertion. This value has to be compared with the mass insertion bounds required to saturate ε K [48], which in this case are (δ d R ) bound 12 ≤ 0.0032. Using θ ≈ 0.7, we find that this bound can be reached. Hence, it is clear that we can easily saturate ε K without any special fine-tuning. Indeed, this constraint, which is one of the main sources of the so-called supersymmetric flavour problem, in this generic MSSM amounts to the requirement that (Θ 2 1 − Θ 2 2 ) sin α ≲ 0.1, with all the different factors in this expression, Θ 2 1 , Θ 2 2 , sin α, ≤ 1 [21]. Now we turn to the CP asymmetries in the B system. Once more, with equation (15) we obtain a value to be compared with the mass insertion bound (δ d R ) bound 12 ≤ 0.098 required not to over-saturate the B 0 mass difference.
We conclude that large effects are expected in the kaon system in the presence of nonuniversal squark masses even with a 'natural' CKM-like mixing for both chirality changing and chirality conserving transitions. The B system is much less sensitive to supersymmetric contributions, so observable effects are expected only with approximately maximal b-d mixings.
Recently, the arrival of the first measurements of B 0 CP asymmetries from the B factories has caused great excitement in the high-energy physics community.
The errors are still too large to draw any firm conclusion. Still, these measurements leave room for an asymmetry sizeably different from the SM expectations corresponding to 0.59 ≤ a SM J/ψ = sin(2β) ≤ 0.82. This possible discrepancy, if confirmed, would be a first sign of the presence of new physics in CP violation experiments. Several papers have discussed the possible implications of a nonstandard CP asymmetry [55,61] and pointed out two possibilities. A small asymmetry can be due to a large new physics contribution in the B system and/or to a new contribution in the K system modifying the usual determination of the unitarity triangle. Taking into account the results above, in a nonuniversal MSSM it is realistic to reproduce the CP violation in the kaon system through SUSY effects, while being left with a small a J/ψ in the B system. Indeed the role of the CKM phase could be confined to the SM fit of the charmless semileptonic B decays and B 0 d -B 0 d mixing, while predominantly attributing to SUSY the K CP violation (ε K and ε /ε). In this case the CKM phase can be quite small, leading to a lower a J/ψ CP asymmetry [21].
Conclusions and outlook
The main points of our discussion can be summarized as follows. (i) There exist strong theoretical and 'observational' reasons to go beyond the SM. (ii) The gauge hierarchy and coupling unification problems favour the presence of low-energy SUSY (either in its minimal version, CMSSM, or more naturally, in some less constrained realization). (iii) Flavour and CP problems constrain low-energy SUSY, but, at the same time, provide new tools to search for SUSY indirectly. (iv) In general, we expect new CP violating phases in the SUSY sector. However, these new phases are not going to produce sizeable effects as long as the SUSY model we consider does not exhibit a new flavour structure in addition to the SM Yukawa matrices.
(v) In the presence of a new flavour structure in SUSY, large contributions to CP violating observables are indeed possible.
In summary, in a flavour-blind SUSY there exist quite a few special places where we can hope to 'see' SUSY in action: the EDMs, the b → sγ CP asymmetry and, as emerged recently, the anomalous magnetic moment of the muon. On the other hand, in the more general (and, in our view, also more likely) case where, indeed, SUSY breaking is not insensitive to the flavour mechanism, there exists a rich variety of FCNC and CP violation potentialities for SUSY to show up. As we have seen, K and B physics offer appealing possibilities: ε K , ε /ε, CP violating rare kaon decays, CP asymmetries in B decays, rare B decays, . . . . In fact, we think that the relevance of SUSY searches in rare processes is not confined to the usually quoted possibility that indirect searches can arrive 'first', before direct searches (Tevatron and LHC), in signalling the presence of SUSY. Even after the possible direct production and observation of SUSY particles, the importance of FCNC and CP violation in testing SUSY remains of utmost relevance. They are and will be complementary to the Tevatron and LHC in establishing low-energy supersymmetry as the response to the electroweak breaking puzzle. | 9,823.8 | 2002-02-01T00:00:00.000 | [
"Physics"
] |
Establishment of an animal model of chronic osteomyelitis with Staphylococcus aureus by ligating the femoral artery of rats
Osteomyelitis caused by Staphylococcus aureus (S. aureus) is an important post-operation complication, especially after fracture internal fixation and artificial joint replacement. Animal models play an indispensable role in exploring the pathogenesis of osteomyelitis. Most models use internal fixation, bacterial suspension and a vascular sclerosing agent to destroy blood vessels. Vascular sclerosing agents not only damage blood vessels but also lead to local inflammatory immune disorders, which differs from simple vascular disease and osteomyelitis caused by ischemia in clinical practice. The experimental animals were randomly divided into three groups: a femoral artery ligation group, a vascular sclerosing agent group and a non-infection aseptic operation group. In the femoral artery ligation group, the femoral artery was ligated to reduce the blood flow of the affected limb to simulate the clinical ischemic state and increase susceptibility; then a Kirschner needle with S. aureus biofilm was implanted into the tibia of rats, and the bone defect was sealed with aseptic paraffin. The non-infection aseptic operation group and the infection model group caused by vascular sclerosing agent were used as the blank and positive control groups. After the operation, survival rate, body temperature and incision healing were monitored. Four weeks later, radiological and pathological changes of all animals were evaluated, and the secretions from the osteomyelitis lesions underwent etiological isolation and culture. Radiographs were taken in anteroposterior and lateral views on day 28 after the operation. For the X-rays, digital films and an X-ray unit were used to assess the development and progression of bone infection. Each radiograph was evaluated by two independent observers in a blinded manner to look for evidence of chronic osteomyelitis based on the presence of periosteal reaction, osteolysis, soft-tissue swelling, deformity, sequestrum formation and spontaneous fracture. The findings in this model of S. aureus chronic osteomyelitis are very similar to those observed in human patients.
Abstract
Background
Osteomyelitis caused by Staphylococcus aureus (S. aureus) is an important post-operation complication, especially after fracture internal fixation and artificial joint replacement. Animal models play an indispensable role in exploring the pathogenesis of osteomyelitis. Most models use internal fixation, bacterial suspension and vascular sclerosing agent to destroy blood vessels. Vascular sclerosing agents not only damage blood vessels but also lead to local inflammatory immune disorders, which is different from simple vascular disease and osteomyelitis caused by ischemia in clinical practice.
Methods
The experimental animals were randomly divided into three groups: femoral artery ligation group, vascular sclerosing agent group and non-infection aseptic operation group. In the femoral artery ligation group, the femoral artery was ligated to reduce the blood flow of the affected limb to simulate the clinical ischemic state and increase the susceptibility, then the Kirschner needle with S. aureus biofilm was implanted into the tibia of rats, and the bone defect was sealed with aseptic paraffin. The non-infection aseptic operation group and the infection model group caused by vascular sclerosing agent have been used as the blank and positive control group. After operation, survival rate, body temperature and incision healing were monitored. Four weeks later, radiological and pathological changes of all animals were evaluated, and the secretions from osteomyelitis experienced etiological separation and cultivation.
Results
The chronic osteomyelitis model was established successfully by ligating femoral artery and implanting Kirschner needle covered with S. aureus biofilm. Signs of chronic osteomyelitis were observed in all rats of femoral artery ligation infection group and positive control group. No signs of infection and chronic osteomyelitis were found in the non-infection aseptic operation control group.
Conclusion
The method of ligating the femoral artery and implanting Kirschner needle with S. aureus biofilm into the tibia of rats can effectively establish a stable and reproducible chronic osteomyelitis model which is closer to the clinical pathogenesis and natural route of infection. This model could be useful for the study of pathogenesis and therapeutics of chronic osteomyelitis with S. aureus.
Background
Chronic osteomyelitis is a kind of infection and destruction of bone, which can be caused by aerobic or anaerobic bacteria, mycobacteria and fungi. The most common pathogen of chronic osteomyelitis is S. aureus [1]. Chronic osteomyelitis most often occurs in the long bones. It is common in adults who develop post-traumatic fracture infection, in diabetic patients with poor blood supply in the feet, and after penetrating bone injury caused by trauma or operation.
Hematogenous osteomyelitis is uncommon in adults but often occurs in children. It is one of the most frequent invasive bacterial infections in the long bones with good blood supply, such as the tibia or the epiphysis of the femur [2,3]. In the clinic, it often recurs and does not heal for a long time, which seriously affects physical and mental health and labor ability [4]. Acute osteomyelitis begins with high fever and local pain, and when it turns into chronic osteomyelitis, rupture, pus, dead bone or cavity formation occur. Severely affected patients are often in danger of their lives and may require emergency amputation, resulting in lifelong physical disability [5]. Thorough debridement, open cancellous bone grafting and repeated irrigation are the most common methods of clinical treatment of osteomyelitis so far [6]. In addition, puncture and aspiration, windowing and drainage, extraction of dead bone, filling with a pedicled muscle flap, amputation and resection of massive diseased bone are also used, but the therapeutic effects are not ideal, multiple operations and prolonged antibiotic administration are often required, and the recurrence rate is still very high [7,8]. The treatment of osteomyelitis is still a difficult problem in orthopedic surgery at present. A good animal model must effectively and accurately reproduce clinical osteomyelitis, which is the basis and key to studying new treatments of osteomyelitis. Existing animal models of osteomyelitis are mainly established by injecting sclerosing agents to sclerose blood vessels and reduce local blood flow and by implanting internal fixation to simulate clinical osteomyelitis [9], which differs from the clinical causes of osteomyelitis. Clinically, most osteomyelitis is caused by post-traumatic internal fixation, which can bring local vascular diseases and result in local aseptic ischemia and osteomyelitis that forms after bacterial invasion [10]. Compared with previous animal models of osteomyelitis, using rabbits as experimental subjects leads to more expensive and inconvenient management; using mice as experimental subjects is inconvenient for users to operate because the bones of mice are too small; and the simulation of clinical ischemia through the injection of a vascular sclerosing agent is not close enough to the pathogenesis of clinical osteomyelitis [11]. In this paper, purchasing and housing costs, convenience of handling, size requirements and especially clinical comparability have been considered. The purpose of this study was to develop a rat model that mimics chronic osteomyelitis with S. aureus by ligating the femoral artery of rats to simulate ischemia and implanting a Kirschner needle with bacterial biofilm into the tibia of rats to simulate the infection after clinical internal fixation, which is more similar to the clinical pathogenic factors.
Eight-week-old female SD rats (SPF, weighing 250~270 g) were purchased from the Changsha Tianqin Biological Company. Rats were raised by the Animal Center of Zunyi Medical University (3 rats per cage), provided with standard food, and had free access to bottled drinking water. In addition, the temperature and humidity of the facility environment were controlled uniformly. Animals were acclimatized for a week prior to the initiation of this study. The weight and temperature of all rats were measured for 3 days before the experiment and 7 days after the operation.
After the operation, phenobarbital sodium (3%) was used as the postoperative analgesic. All rats were euthanized by cervical dislocation under phenobarbital anesthesia on the 28th day after the operation.
Experiment grouping
Eighteen rats were randomly divided into 3 groups. Group A: femoral artery ligation + Kirschner needle with S. aureus biofilm internal fixation; Group B (positive control, vascular sclerosing agent group [12]): vascular sclerosing agent + Kirschner needle with S. aureus biofilm internal fixation; Group C (blank control): noninfected aseptic operation group with sterile Kirschner needle internal fixation.
Preparation of bacterial strains and bacteria-carrying Kirschner needle
The standard S. aureus strain (ATCC25923) was inoculated on a blood agar plate and incubated at 37℃ for 24 hours. The pathogens were grown to an OD600 of 0.6 in MHB medium at 37℃ with shaking at 250 rpm, and diluted with medium to approximately 1 × 10^6 CFU/ml [13]. Then, sterile Kirschner needles with a diameter of 1 mm and a length of 5 mm were immersed in the bacterial suspension and incubated for 18 hours at 37℃ with agitation. Crystal violet staining confirmed the presence of biofilm on the Kirschner needle head.
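As a rough illustration of the dilution step above, the required dilution factor can be estimated from the measured optical density. The sketch below is only a back-of-the-envelope aid; the OD600-to-CFU conversion factor used in it is a hypothetical assumption that is not reported in this study and would have to be calibrated by plate counting.

# Illustrative sketch only: estimating the dilution needed to reach the target
# inoculum of about 1 x 10^6 CFU/ml from a culture at OD600 = 0.6.
ASSUMED_CFU_PER_ML_AT_OD1 = 8e8   # hypothetical calibration value (CFU/ml per 1.0 OD600 unit)

def dilution_factor(od600, target_cfu_per_ml):
    """Return the fold-dilution required to reach the target concentration."""
    estimated_cfu_per_ml = od600 * ASSUMED_CFU_PER_ML_AT_OD1
    return estimated_cfu_per_ml / target_cfu_per_ml

print(dilution_factor(od600=0.6, target_cfu_per_ml=1e6))   # about 480-fold under this assumption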
Establishment of osteomyelitis model
Rats were anesthetized by intraperitoneal injection of 3% pentobarbital sodium (30 mg/kg) before the operation. The limbs of the rats were fixed, the right lower abdomen and right lower extremity received preoperative skin preparation and sterilization with iodine, and the area was then covered with disposable aseptic treatment towels. In group A, a 2 cm longitudinal incision was made from the right groin down to the subcutaneous tissue, a hemostatic clamp was used to separate the subcutaneous soft tissue, the inguinal ligament was used as the landmark to find the femoral artery, the femoral artery was then separated and ligated, and the subcutaneous soft tissue and skin were sutured. After that, a 2 cm skin incision was made along the medial side of the anterior tibial crest below the knee joint and extended to the periosteum on the medial side of the anterior tibial muscle to expose the anterolateral crest of the upper tibia; a bone hole was drilled with a 1.5 mm hand drill, and the cancellous bone was explored with a small curette. In groups A and B, a Kirschner needle with S. aureus biofilm (a sterile Kirschner needle in the blank group) was implanted into the hole. In addition, the local blood flow was reduced by ligating the femoral artery of the right lower limb in group A and by injecting vascular sclerosing agent in group B, respectively. Finally, the bone window was sealed with bone wax and the wound was sutured layer by layer. Each rat was housed in a single cage and observed for 4 weeks after the operation.
Detection and evaluation
The physiological status and death of the animals, as well as induration, edema, purpura, or dehiscence of the model incision of the right lower limb and pus and sinus formation, were observed every day. Radiological examination was performed at the 4th week of modeling. Then all rats were killed by cervical dislocation after anesthesia. The secretions and tissue samples from the diseased area were collected under aseptic conditions and cultured to confirm the presence of S. aureus, and the bone tissue was taken for HE pathological examination to confirm the presence of chronic osteomyelitis.
Clinical signs of infection
Animals were examined daily for clinical signs of infection (swelling and reddening of the right hind leg, loss of passive motion in the knee and ankle joints). Moreover, the body temperature was measured with a digital thermometer on days -3 to 7, and the weight was also determined.
Radiographic evaluation
Radiographs were taken in posterior-anterior and lateral views on day 28 after the operation. For the X-rays, digital films and an X-ray unit were used to assess the development and progression of bone infection. Each radiograph was evaluated by two independent observers in a blinded manner to look for evidence of chronic osteomyelitis based on the presence of periosteal reaction, osteolysis, soft-tissue swelling, deformity, sequestrum formation, and spontaneous fracture.
Microbiological evaluation
The contents of any abscess cavities at the end point of this study were collected with a sterile inoculation loop and directly plated onto agar plates, which was used to confirm the presence of S. aureus. The plates were evaluated by two different microbiologists at 24 hours for colony growth.
Gross view of bone tissue
The soft tissues of the affected limbs of the rats were completely removed, and the tibia was separated for comparison. The healing of the bone tissue in the bone holes at the modeling site was observed based on the presence of sinus and pus, dead bone formation, deformity, and fracture. For histological examination, 4 μm sections were placed on glass slides. Slides underwent deparaffinization and staining with hematoxylin and eosin (H&E). For morphometric analyses, images of H&E-stained sections were observed and recorded under a light microscope.
Animals infected with S. aureus (ATCC 25923)
All rats survived the surgery and the establishment of chronic osteomyelitis. On the 5th to 7th day after the operation, the rats in groups A and B developed abscesses of the right lower extremity and purulent secretion in the incision. Fistulas extending deep to the marrow cavity were present in the incised limbs. No signs of suppurative infection were observed in control group C, and the incision healed completely around the 7th day after the operation.
Body temperature and weight
Temperature monitoring of rats inoculated with the bacterial biofilm Kirschner needle in groups A and B indicated that the temperature rose rapidly to more than 38℃ on the first day after the operation and remained elevated for a week. The body temperature in control group C also increased on the first day after the operation, with the maximum temperature reaching 38.8℃, and returned to the preoperative level on the second day after the operation (Figure 3). Compared with group C, the growth rate of body weight decreased in groups A and B.
Gait signs
Limping of the right lower extremity persisted for 4 weeks in groups A and B, which were implanted with S. aureus biofilm Kirschner needles (Figure 4). In group C, implanted with sterile Kirschner needles, gait gradually returned to normal by the 5th day after the operation and no claudication was observed.
Radiographic examination
Radiographic examination of the surgical area indicated that there were soft tissue swelling and periosteal reaction at the right tibial modeling site of rats in groups A and B, as well as nonunion of the bone cortex, enlargement of the local bone marrow cavity, and formation of dead bone and a new bone envelope. In group C, the X-ray results showed no swelling shadow in the soft tissue of the surgical area, no enlargement of the bone marrow cavity, and no dead bone or new bone envelope formation (Figure 5). The X-ray appearance in this rat model of S. aureus chronic osteomyelitis is very similar to that observed in human patients.
Bacterial culture and bone healing
After skin incision, the tibial bone holes of rats in groups A and B still contained a large amount of purulent secretion, whereas no purulent secretion was found in the tibial bone holes of group C (Figure 7). Classical S. aureus colonies were identified in all rats of groups A and B, while no bacteria were observed in group C (Figure 6). In contrast to group C (blank control), S. aureus was cultured from the bone secretions of groups A and B, which confirmed the diagnosis of chronic osteomyelitis.
Histological examination
The development of chronic osteomyelitis in the infected bones of this study was further confirmed by histological examination. Groups A and B were characterized by suppurative inflammation with foci of intense bacterial multiplication and necrosis, similar to the findings in human patients. A large number of inflammatory cells infiltrated the bone marrow cavity in groups A and B, whereas significantly fewer inflammatory cells were observed in group C than in groups A and B (Figure 8).
Discussion
Chronic osteomyelitis can occur in any bone of the human body. It is a recalcitrant condition in which symptoms have been present for longer than 3 months, and it is a source of disability in humans [14,15]. At present, it is still a difficult problem for orthopedic doctors, and no satisfactory treatment plan is available [16]. The incidence of osteomyelitis also increases with the increase in traffic accidents and in patients with traumatic fractures. Epidemiological investigation shows that the incidence of adult osteomyelitis in developing countries is 24.4/100,000, and the incidence in males, who are the main labor force, is higher than that in females [17]. Among these patients, osteomyelitis caused by S. aureus accounts for the majority [19]. The main cause of recurrent chronic osteomyelitis is the formation of bacterial biofilm. When the bone has been infected with bacteria for more than 24 hours, the bacteria begin to form biofilm locally [20]. Bacterial biofilm blocks drugs and routine surgical cleaning treatment, which effectively enhances bacterial survival in hostile environments and hinders the eradication of the infection [21,22]. As is well known, blood circulation not only plays a critical role in providing nutrition to tissues and organs, but also plays an important role in the transportation of anti-inflammatory factors. Turkey et al. found that there was a positive correlation between the incidence of osteomyelitis and arterial ischemic disease. The incidence of amputation among patients with diabetes who also have peripheral arterial disease and osteomyelitis is significantly increased [23,24]. In previous animal osteomyelitis models, local injection of a vascular sclerosing agent was used to simulate clinical ischemic symptoms. However, few patients develop osteomyelitis because of local injection of a vascular sclerosing agent into bone tissue [12]. Therefore, there is a gap between the osteomyelitis model made with a sclerosing agent and the actual causes of osteomyelitis, which may change the inflammatory pathological mechanism of the bone infection model and is not conducive to later studies of the treatment of osteomyelitis. In the clinic, most patients have arterial blood flow disorders and insufficient supply from branch vessels, which lead to deterioration of local immune function and bone tissue infection [26]. In previous osteomyelitis models with internal fixation, a certain amount of bacterial suspension was inoculated into a bone defect or injected directly into the bone tissue. Although this method improved the success rate of osteomyelitis modeling, it is still insufficient for simulating the clinical pathogenesis of osteomyelitis [27]. The Kirschner needle with biofilm is closer to the clinical pathogenic factors. In the selection of animal models of osteomyelitis, the rabbit was used as the experimental subject in early studies, but it is more difficult to raise and more expensive than the rat [28]. Mice have also been used as experimental subjects for osteomyelitis models. However, the small bones of mice are inconvenient for surgical operation, which increases the difficulty of internal fixation and limits the simulation of clinical osteomyelitis [29]. Therefore, in this study, we report a novel rat model of chronic osteomyelitis that has several advantages over other animal models. Firstly, the rat is large enough to reproduce the clinical, radiographic, and histologic characteristics observed in humans.
In contrast to most published models in which S. aureus is directly placed into the bone, in this model (model A) the Kirschner needle with S. aureus biofilm was implanted to facilitate S. aureus infection, which mimics the natural route of infection in hematogenous osteomyelitis and can facilitate the identification of bacterial factors involved in bone tropism. More importantly, as an additional advantage of our model, the local blood flow was reduced by ligating the femoral artery of the right lower limb, which is more similar to the clinical pathogenesis [30,31]. The model also has advantages in the balance of convenience and purchasing and raising costs.
The most important requirements of a reliable animal model of chronic osteomyelitis are a high infection rate, inability to heal, low mortality in the course of the experiment, and little difference in symptoms among infected animals in the same group. The rat model of chronic osteomyelitis in this experiment reliably mimics the natural route and clinical features of chronic osteomyelitis. The first phase of infection is highly symptomatic and characterized by significantly increased body temperature (Fig. 3) and a decreased growth rate of body weight. Typical inflammatory secretion, which yielded single bacterial colonies with a hemolytic ring on blood agar culture, was seen in groups A and B (Figs. 6 and 7). The changes in bone structure were examined by X-ray. The infected bones showed enlargement of the local bone marrow cavity, formation of new bone and a bone envelope, thickened and uplifted bone tissue, and swollen soft tissue (Fig. 4). The formation of sequestra was detected in the chronic osteomyelitis models, groups A and B (Fig. 5 and Fig. 7). Histopathological evaluation of groups A and B revealed a large quantity of inflammatory cells during the chronic phase of osteomyelitis (Fig. 8).
According to the above results, and in comparison with the positive control group B, which has been reported as a chronic osteomyelitis model, the rat model developed in this study can successfully serve as an animal model of chronic osteomyelitis caused by S. aureus. In addition, the most important advantage of this rat model is that the local blood flow can be reduced by ligating the femoral artery, which mimics the natural and clinical pathogenesis. Since chronic osteomyelitis is a disease with a high recurrence rate [32], a stable animal model that is closer to current clinical practice is needed to develop new therapeutic strategies for patients with chronic osteomyelitis.
Conclusion
The present model is a novel animal model of chronic osteomyelitis established by ligating the femoral artery and implanting a Kirschner needle with S. aureus biofilm, which is closer to the clinical pathogenesis and natural route of infection. This model has a high success rate and a good balance of survival rate, clinical characteristics, convenience, cost, and size requirements, providing an important platform for studying the pathogenic mechanism of chronic osteomyelitis and new therapeutic strategies against it.

Figure 2 The diagram of the bone hole and the materials of the Kirschner needle with S. aureus biofilm. Figure A2 shows a sterile implanted Kirschner needle with a length of 5 mm and a diameter of 1 mm. The culture of S. aureus is shown in figure B2. A bone hole, 2 mm in diameter, located on the right tibial spine, was made for implanting a Kirschner needle (figure C2).
Figure 3
The average rectal temperature of all rats from 1 day before surgery to 7 days after the operation. (A) Group A; (B) Group B; (C) Control group.
Figure 4
General observation of rats at 4 weeks after the operation in each group. Compared with control group C (C3), group A (A3) and group B (B3) were characterized by local swelling and purulent secretion.

Figure 5

Representative radiographs from a rat in each group at 4 weeks after surgery. Group A (A5) and group B (B5) were characterized by enlargement of the local bone marrow cavity, formation of new bone and a bone envelope, thickened and uplifted bone tissue, and swollen soft tissue.
Figure 6
The culture results of secretion collected from the disease area at 4 weeks after surgery. S. aureus contributed to the occurrence of chronic osteomyelitis. Obvious purulent secretion in the bone marrow cavity and the formation of new bone and dead bone were found in groups A and B. | 5,423.6 | 2020-05-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Evaluating Author Attribution on Emirati Tweets
Author Attribution (AA) is a critical stylometry problem that tries to deduce the identity of the authors of electronic texts (e-texts) by only examining the texts. AA is essential for enhancing various application domains, such as recommender systems and forensics. Nevertheless, existing techniques in AA have not been assessed with Emirati social media e-texts. The reason is that no suitable dataset exists for evaluating AA techniques in this context. This paper introduces the Khonji-Iraqi Emirati Tweets Author Identification (AID) dataset with 30 authors (KIT-30), and detailed evaluations. Compound grams, a new definition of grams, are introduced, which allows us to achieve higher classification accuracy. Also, when the number of suspect authors increases, the classification accuracy degradation is not as severe as previously reported, when using suitable data representation. Furthermore, in order to work towards addressing the lack of conveniently-available implementations of stylometry methods, we have developed an extensive e-text feature extraction library, namely Fextractor, with a highly intuitive API. The library generalizes all existing n-gram-based feature extraction methods under the at least l-frequent, dir-directed, k-skipped n-grams, and allows grams to be diversely defined, including definitions that are based on high-level grammatical aspects, such as Part of Speech (POS) tags, as well as lower-level ones, such as the distribution of function words and word shapes.
I. INTRODUCTION
E-text stylometry is concerned about analyzing the writing styles of input e-texts in order to extract information about their authors. Such inferred information could be the identity of the authors, their genders, age groups, personality types, or even the diagnosis of certain illnesses [1], [2]. Author Attribution (AA) is an important problem in e-text stylometry and is defined as follows: given a set of texts with known authors, find a classification model that predicts which of these known authors is also the author of the input test texts whose authors are not known. The target classification label, in this case, is the identity of the author [3]- [5]. This is a closed-set classification task, which means that the classification model is expecting the actual author of the input test text to be represented in the learning set.
While various stylometry problem solvers have been evaluated against texts of various domains, the accuracy of AA techniques on Emirati social media texts is unknown. This work aims to address the following two challenges that face e-text stylometry problems, namely: • The lack of evaluation datasets for stylometry problem solvers, when executed against e-texts that are written in Emirati Arabic, a dialect of the Arabic language that is natively spoken in the United Arab Emirates (UAE). This effectively casts uncertainty concerning the performance of all stylometry methods, when evaluated against electronic texts that are written in this dialect. As a result, the applicability of e-text stylometry methods against Emirati texts to enhance forensics, anti-forensics, or market analysis, is unknown.
• The lack of conveniently-available, and extensive, software that implement the many existing stylometry methods and feature extraction functions. There is often a tremendous need in re-developing the many proposed methods or functions, and because of the sheer amount of effort that is required to develop as such, it is common that most of the methods or functions are not adequately evaluated. As a result, the actual value of the numerous independent contributions, relative to each other, is often not adequately known.
Hence, this work has these main contributions: • The construction of an original AA assessment dataset made of Emirati tweets (the KIT-30 dataset).
• The new category of grams, namely compound grams, that allows a significant classification accuracy increase.
• The extensive assessment of AA classification techniques against the introduced dataset.
• The implementation of an extensive stylometry feature extraction library with easy-to-use interface in Python. While alternative feature extraction libraries exist [6], to the best of our knowledge, our library Fextractor is, by far, the most extensive library of its kind to date. Our library supports language-independent features, as well as language-dependent features for the following languages: Arabic, English, Chinese, French, German, and Spanish.
• The generalization of numerous feature extraction methods. This allows us to define novel variants of the existing feature extraction methods, in addition to simplifying the implementation.
• The release of the library under a permissible opensource library. We hope that this would enable other researchers to conveniently study the feature extraction methods, or evaluate their methods against the existing ones, without facing the time and effort barrier that is currently required to implement the many methods.
The results of the performance evaluation of more than 10,000 AA models show that the techniques using the introduced compound grams have significantly higher accuracy than those using other types of grams. The results also show that even when the number of suspect authors increases to 30, some AA models can achieve high accuracy in the context of Emirati tweets when using suitable text vectorization methods.
These results are remarkable as they also imply that the accuracy decrease when adding more authors, is not as severe as proclaimed before [7]. More specifically, while the most accurate Twitter AA models are less accurate than 0.8 even with two authors [7], our top performing technique achieves 0.98 accuracy with 30 authors.
The remainder of this paper is organized as follows. Section II discusses related works. Section III introduces the KIT-30 dataset. Section IV presents the technique used for solving the AA problems, while compound grams are introduced in Section V. Sections VI and VII show the evaluation approach and the results, respectively. Section VIII introduces Fextractor, our extensive feature extraction library, and the conclusion is given in Section IX.
II. RELATED WORKS
The most relevant investigation to the present work is the PAN'12 closed-set AA challenge [8], where a number of AA models are evaluated against several problems, including closed-set AA ones. Although the considered datasets were only in the English language, this investigation is important as it shows the performance of leading AA algorithms. Khonji et al. have shown in [9] that the Random Forests (RFs) classification algorithm can achieve an accuracy that is equivalent to that of the best AA models of the closed-set AA evaluation of PAN'12.
Other AID evaluation problems in the literature, including the following editions of the PAN competitions, expanded the set of considered languages within their datasets. E.g., the following languages were added in recent PAN evaluations: Dutch, Greek, and Spanish. However, evaluation of stylometry methods, such as AA solvers, against Emirati texts remained absent in the literature. 1 The most related evaluation dataset to our constructed KIT-30 is perhaps the Arabic Sentiment Tweets Dataset (ASTD) by Nabil et al. [11]. Still, while our techniques of collecting the tweets are influenced by their work, the ASTD dataset has the authors' identifiers removed (rightfully so due to the nature of Twitter terms of use, and the nature of the study targeted by the ASTD dataset). Therefore, ASTD is not suitable to evaluate AA techniques.
III. THE KHONJI-IRAQI EMIRATI TWEETS AID EVALUATION DATASET (KIT-30)
This section introduces the objective of our dataset, methods that were used in order to construct it, as well as its various statistics.
The goal of the KIT-30 dataset is to provide e-text fit for generating and answering Emirati AID questions to evaluate AID models. AID can be the AA problem addressed in this paper, or the Author Verification (AV) problem (verifying if the same author wrote a pair of texts), or the Author Diarization (AD) problem (grouping sections in a particular document according to their authors).
To accomplish these goals, we execute the tasks in Algorithm 1. This resulted in obtaining a total number of 30 Emirati Twitter accounts [10]. The first two steps of the algorithm were inspired by Nabil et al. [11].
Algorithm 1 Obtaining a Set of Twitter User Accounts
1) Detect the most active accounts in the UAE by using SocialBakers.
2) Detect more accounts by looking for specific tags that are unique to the UAE.
3) Manually examine the saved accounts to drop non-UAE or non-Arabic accounts.
Next, Algorithm 2 was used to save, discard, and preprocess the tweets as deemed appropriate for the objectives of the evaluation dataset at hand. This resulted in obtaining the finalized KIT-30 dataset, which is comprised of over 50,000 tweets in total.
Algorithm 2 Downloading and Preprocessing Tweets
1) Save the maximum number of tweets as allowed by the Twitter application programming interface. 2) Drop all reposted tweets, as the owner of the account did not write them. 3) Use placeholders to replace all tags, user names, and URLs. For example, all hashtags, such as , will be replaced by the single placeholder ''#TAG''. This is to ensure that the evaluated AID models remain unable to solve AID problems by simply memorizing specific tags, user names, or URLs that potentially happen to strongly correlate with their author identities. 4) Save author identifiers.
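The preprocessing of Algorithm 2 can be sketched in a few lines of Python. The snippet below is only a minimal illustration under assumed field names ("text", "author", "is_retweet"); the actual Twitter API payload and the authors' collection scripts may differ, and the "@USER" and "URL" placeholders are assumptions (only "#TAG" is stated above).

import re

TAG_RE = re.compile(r"#\S+")
USER_RE = re.compile(r"@\S+")
URL_RE = re.compile(r"https?://\S+")

def preprocess(tweets):
    """Drop reposted tweets and replace tags, user names, and URLs with placeholders."""
    cleaned = []
    for tweet in tweets:
        if tweet["is_retweet"]:                       # step 2: the account owner did not write it
            continue
        text = TAG_RE.sub("#TAG", tweet["text"])      # step 3: placeholder for hashtags
        text = USER_RE.sub("@USER", text)             # assumed placeholder for user names
        text = URL_RE.sub("URL", text)                # assumed placeholder for URLs
        cleaned.append({"text": text, "author": tweet["author"]})   # step 4: keep the author id
    return cleaned

example = [{"text": "see https://t.co/x #uae @friend", "author": "a1", "is_retweet": False}]
print(preprocess(example))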
To allow meaningful comparisons among the evaluation results against this Emirati tweets dataset and those of other languages, we have repeated the same process by adapting it for the Dutch, Greek, Spanish, and English languages. Table 1 shows the statistics of KIT-30, while Table 2 presents the per-author statistics of Emirati tweets subset.
IV. AUTHOR ATTRIBUTION MODEL
In the context of this work, we adopt RFs as the learning technique as it was shown in [9] that this algorithm achieves competitive high classification accuracy when solving AA problems.
RFs need the input samples to be described as vectors. We follow a vector representation approach similar to that of Khonji et al. [9]. Every text x is denoted by a vector x, where x[i] denotes the frequency of a unique k-skip n-gram [12] in the text x. The element i always refers to the frequency of the same unique k-skip n-gram pattern. For example, given a pair of vectors x 1 and x 2 that represent different texts x 1 and x 2 , respectively, x 1 [i] is the frequency of a unique pattern in text x 1 , while x 2 [i] is the frequency of the same pattern in text x 2 . This will allow a meaningful comparison of the pair of vectors.
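As a concrete, simplified illustration of this vector representation, the sketch below maps two texts to aligned frequency vectors; for brevity the patterns are plain word unigrams rather than k-skip n-grams, but the indexing idea is the same.

from collections import Counter

def build_vocabulary(texts):
    """Assign a fixed index i to every unique pattern observed in the corpus."""
    vocab = {}
    for text in texts:
        for pattern in text.split():
            vocab.setdefault(pattern, len(vocab))
    return vocab

def vectorize(text, vocab):
    """Return a vector whose i-th element is the frequency of pattern i in the text."""
    counts = Counter(text.split())
    vec = [0] * len(vocab)
    for pattern, freq in counts.items():
        if pattern in vocab:
            vec[vocab[pattern]] = freq
    return vec

texts = ["the quick fox", "the lazy dog the"]
vocab = build_vocabulary(texts)
x1, x2 = (vectorize(t, vocab) for t in texts)
print(x1, x2)   # x1[i] and x2[i] count the same unique pattern in both texts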
Then the learning set of texts represented as vectors and their author identities are used to train an RF model. The trained model is subsequently used for predicting the author of texts in the testing set. Before we define k-skip n-gram patterns, we define n-gram patterns, and then expand the definition of n-grams by the addition of k-skips.
An n-gram pattern is a series of n neighboring grams in a given text. A gram is a parameter that defines the most fundamental unit of the processed text. For example, if grams are words, then the most basic unit of any text is considered to be words. Figure 1 depicts all n-grams for the text ''The quick fox jumped over the lazy dog'' when grams are words, and n = 3. The list of common definitions of grams in the literature includes characters, letters, punctuation marks, words, word shapes, and POS tags.
The only novelty that k-skip n-grams bring relative to n-grams is that they expand each n-gram into multiple n-grams such that the grams adjacency constraint can be violated for up to k many skips [12].
For example, if k = 2, and the starting gram is ''The'', then we not only identify the 3-gram ''The quick fox'', but also all of its 3-gram variants as listed in Table 3.
TABLE 3. k-skip n-grams in the text ''The quick fox jumped over ...'' when k = 2, n = 3, grams are words, and the first gram is ''The''.
Similar inflation affects all other n-grams, except those near the end of the string by which only fewer skips become possible in order to avoid overrunning after the end of the string.
It can be seen that the concept of k-skip n-grams is a generalization of the concept of n-grams. This makes n-grams a special case of k-skip n-grams for when k = 0. I.e., 0-skip n-grams and n-grams are identical (for any value of n, and any definition of what constitutes as a gram).
Note that since k-skip n-grams inflate each n-gram into multiple variants with skips ranging from 0 up to k (inclusive of 0 and k), the total number of n-gram parameters increases combinatorially. Therefore large values of k are sometimes computationally infeasible.
Additionally, it is customary to disregard less-frequent kskip n-grams. This is to reduce dimensionality, as well as due to the fact that such measures are often found to be too noisy for the purpose of solving AA problems. A successful rule that has been used in the literature is to drop all k-skip n-grams that only occur for less than l many times in any single text in the dataset. In our preliminary evaluations of the proposed models, we found that l = 5 was optimal for the evaluation. I.e., if a pattern fails to appear five or more times in any text, it is ignored and therefore not used in subsequent analysis. Table 3 presents 2-skip 3-grams, when grams are defined to be words. Another definition of grams that is known in the literature of stylometry is defining them to be the POS tags, or dependency tags. For example, Table 4 presents an example of the case when grams are defined to be POS tags. Note that each word is substituted by its corresponding POS tag, as defined by the Penn Treebank project. 2 The same could be trivially extended to dependency tags, word lengths, and word shapes.
V. COMPOUND GRAMS
Compound grams essentially aim to aggregate multiple definitions of grams that refer to the same text segment. Table 5 presents examples of some compound grams, when aggregating the definitions ''word'' and ''POS tag'' into one gram.
TABLE 5. k-skip n-grams in the text ''The quick fox jumped over ...'' when k = 2, n = 3, grams are word-POS tag tuples, and the first gram is the tuple that corresponds to ''The''.
Compound grams allow for capturing additional information than the classical ones. For example, measuring the frequencies of grams, as shown in Tables 3 and 4, allows for identifying the tendency of words or POS tags to independently occur in a given text. On the other hand, as shown in Table 5, measuring the frequency of compound grams allows for identifying the tendency of certain words to jointly take certain POS tags in a given text. This can be valuable information for identifying authors, as authors can be made unique not only by the independent frequency of certain grams (words or POS tags), but rather by their tendency of choosing certain words in certain positions of their sentences. For example, the word ''saw'' can be used as both, a verb, and a noun as in the sentence ''I saw the saw''.
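A compound gram can be represented simply as a tuple over the aggregated definitions. The sketch below illustrates word-POS compound grams; the POS tags are hard-coded for the example sentence and would normally come from a tagger.

tokens = [("The", "DT"), ("quick", "JJ"), ("fox", "NN"), ("jumped", "VBD"), ("over", "IN")]

def compound_ngrams(token_tuples, n):
    """Classical n-grams in which every gram is a (word, POS tag) compound tuple."""
    return [tuple(token_tuples[i:i + n]) for i in range(len(token_tuples) - n + 1)]

for gram in compound_ngrams(tokens, n=3):
    print(gram)
# Each pattern records which POS tag each word takes in context, rather than the
# independent frequencies of words or tags alone.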
VI. EVALUATION METHODOLOGY
Once AA models are trained as described in Section IV, we evaluate them by using 10-fold cross-validation. However, in order to ensure that each fold is comprised of realistic learning and testing samples, we add the constraint that limits the free mixing of tweets that were written at different times. Specifically, the tweets per author are chronologically grouped into 10 chunks such that their tweets do not exist in two adjacent time intervals.
This constraint increases the difficulty of the AA problems, as it substantially minimizes the possibility of test tweets being chronologically too close from their learning counterparts.
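A minimal sketch of this chronological grouping is given below; field names such as "created_at" are assumptions for illustration.

def chronological_chunks(tweets, n_chunks=10):
    """Split one author's tweets into n_chunks contiguous chunks ordered by time."""
    ordered = sorted(tweets, key=lambda t: t["created_at"])
    size = max(1, len(ordered) // n_chunks)
    chunks = [ordered[i * size:(i + 1) * size] for i in range(n_chunks - 1)]
    chunks.append(ordered[(n_chunks - 1) * size:])   # the remainder goes into the last chunk
    return chunks

tweets = [{"created_at": i, "text": "tweet %d" % i} for i in range(25)]
print([len(c) for c in chronological_chunks(tweets)])   # e.g. [2, 2, 2, 2, 2, 2, 2, 2, 2, 7]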
The statistics of the evaluation dataset, after grouping the tweets into 10 chronological chunks on a per-author basis, are presented in Table 6. Recall from earlier sections that the at least l-frequent k-skip n-grams have the following parameters: the minimum frequency l, the number of tolerated skips k, the gram count n, and the definition of what constitutes a gram. For completeness, we repeat the evaluation many times, each with a distinct AA RF model, such that each makes use of a unique data representation function. Specifically, we exhaustively implement all possible definitions of the at least l-frequent k-skip n-grams for the following sets of parameter values: l ∈ {1, 2, . . . , 9}, k ∈ {0, 1}, n ∈ {1, 2, 3}, and gram ∈ {word, word length, POS tag, word-POS tag tuple, dependency tag, word-dependency tag tuple, POS-dependency tags tuple}. The tuple grams essentially represent a special case of our proposed compound grams with two components. This process results in 9 × 2 × 3 × 7 = 378 unique text vectorization methods, each of which is used by an RF model that is evaluated by 10-fold cross-validation.
The only exception to this is the Dutch and Greek datasets, for which a gram can only be a word or a word length. This is because the POS tagger that we use (http://stanfordnlp.github.io/CoreNLP/#human-languages-supported) does not support these languages. Additionally, since the accuracy of AA problem solvers is sensitive to the number of considered authors, we repeat the entire evaluation 29 times, each time evaluating against a unique size of the suspects space. I.e., we evaluate for all suspect space sizes in {2, 3, . . . , 30}. Therefore, the total number of evaluations is 378 × 29 = 10,962 10-fold cross-validations.
Subsequently, to investigate the statistical significance of the different performance results, Approximate Randomization (AR) [13] is applied to compute the p values. The labels of various significance levels are shown in Table 7. For example, if 0.01 < p ≤ 0.05, we view the variation between considered classification accuracy as statistically significant, as is the case in [14], and indicate it by one asterisk ''*''.
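The Approximate Randomization test can be sketched as follows; this is a generic illustration of the procedure (per-sample correctness labels are randomly swapped between the two systems), not the exact implementation used for the reported p values.

import random

def approximate_randomization(correct_a, correct_b, n_iter=10000, seed=0):
    """Two-sided AR test on per-sample 0/1 correctness of two classifiers."""
    rng = random.Random(seed)
    observed = abs(sum(correct_a) - sum(correct_b))
    hits = 0
    for _ in range(n_iter):
        swapped_a, swapped_b = [], []
        for a, b in zip(correct_a, correct_b):
            if rng.random() < 0.5:       # randomly exchange the two systems' outcomes
                a, b = b, a
            swapped_a.append(a)
            swapped_b.append(b)
        if abs(sum(swapped_a) - sum(swapped_b)) >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)     # smoothed p value

p = approximate_randomization([1, 1, 1, 0, 1, 1], [1, 0, 0, 0, 1, 0])   # toy data
print(p)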
A. ACCURACY OF AUTHOR ATTRIBUTION MODELS AS A FUNCTION OF SUSPECTS SPACE SIZE
Recall from earlier that, in total, 10,962 10-fold cross-validations are performed in order to evaluate RF AA models exhaustively with various parameter values of l, k, and n. Figure 2 depicts the empirical cumulative distribution function (ECDF) of all of the 10,962 classification accuracies found by 10-fold cross-validation using the Emirati tweets in KIT-30, such that, for any line i ∈ {2, 3, . . . , 30} (each line i is denoted by a unique color), i represents the ECDF of the classification accuracy of all models that are assessed against problems with a suspects space of i many authors. The results in Figure 2 show that the larger the number of suspect authors, the more text representation techniques achieve lower RF AA accuracy. Nevertheless, even with 30 authors, there are specific text representation techniques that allow the RF AA models to achieve an accuracy very close to 1. More details on such successful configurations will be outlined in the next subsection of this evaluation. Figure 3 presents the classification accuracy versus the number of authors. This accuracy is measured by considering the performance of all of the feature extraction functions. It can be seen that the performance of solving AA problems against Emirati tweets is superior to those of the Dutch and Greek datasets, and inferior to those of Spanish and US English. However, this is not necessarily an indication that solving AA problems is more difficult under Emirati tweets than Spanish or US English. This is due to the fact that some poorly performing features could degrade the overall classification accuracy, and mask the effect of the well-performing features.
To demonstrate this, Figures 4, 5, 6 and 7 present the same results as those in Figure 3, except for choosing specific feature extraction functions that tend to perform well under specific datasets. It can be seen that the performance of solving AA problems with Emirati tweets can be highly similar to those of the Spanish and US tweets datasets when certain feature extraction methods are chosen, namely, when defining grams as the tuple of word-POS tags. However, the performance on Emirati tweets degrades significantly when grams are words, as shown in Figures 5 and 6.
This suggests that, while the current methods of stylometry analysis were never previously assessed with Emirati social media texts (and rarely against Arabic texts in general), accurately solving AA problems with Emirati tweets is nonetheless possible by using compound grams that are formed by combining successful feature extraction methods as found based on the literature of stylometry for other languages.
B. ACCURACY OF AUTHOR ATTRIBUTION MODELS AS A FUNCTION OF TEXT VECTORIZATION METHODS
Since this section discusses the effect of the various parameters of the feature extraction functions in greater detail, the discussion is focused on Emirati tweets and a suspects space of 30 authors for brevity. Figure 8 depicts the ECDFs of the evaluated RF AA classification models with varying values of l when tested against a set of 30 suspect authors. The ECDFs generally indicate that the most accurate classification models can be identified when l ∈ {3, 6}. Interestingly, this is close to the value l = 5 that was found by Khonji et al. [15] for the other languages (i.e., Dutch, English, Greek, and Spanish).
However, the most accurate AA classification models under each value of l ∈ {1, 2, . . . , 9}, are not statistically significantly different than those found with different values of l. Table 8 presents the pair-wise statistical significance results against the most accurate models that are found under each value of l. Figure 9 depicts the ECDFs of the evaluated RF AA classification models with varying values of k. The ECDFs indicate that when k = 0 more accurate classification models can be identified than when k = 1, which suggests that the tolerated violations of the grams adjacency assumption are unhealthy for identifying authors of Emirati social media texts. However, Table 9 indicates that the difference between the best performing classifiers under each value of k is not statistically significant. Figure 10 depicts the ECDFs of the evaluated RF AA classification models with varying values of n. The ECDFs indicate that when n = 1 more accurate classification models can be identified than when n > 1, which suggests that observing the distribution of grams in relation to their adjacent ones is unhealthy for identifying authors of Emirati social media texts. However, Table 10 indicates that the difference in accuracy between the most accurate models under each value of n is not statistically significant. The only exception is between the cases when n = 1 and n = 3 by which the difference in accuracy is statistically significant. Figure 11 depicts the ECDFs of the evaluated RF AA classification models with varying definitions of grams. The ECDFs indicate that compound grams allow for the identification of more accurate classification models than otherwise. Table 11 indicates that the increase in accuracy with the most accurate models using compound grams is always statistically significant, except for the gram pos, where the difference is not statistically significant (p = 0.2348), which may be due to the size of the dataset. Table 12 presents a classification accuracy ranked list, and parameters of the text vectorization methods, of the best performing classifiers with a small enough difference between classification accuracy not to be statistically significant. It can be seen that the top 10 best performing classifiers exclusively make use of compound grams. This suggests that our novel definition of grams is successful in allowing for the achievement of higher classification accuracy under the Emirati tweets domain than when grams are defined classically.
It is important to note that an accurately performing AA classifier is not necessarily an indication of the model's ability in identifying the writing styles of authors. For example, if the dataset contains a significant author-topic bias, then a model that is originally intended to be an AID model, can be partly both, an AID model, as well as a topic identification model. Therefore care must be taken to ensure that the used features do not contain too much topic information, as such topic information could confuse the learning algorithm and transform it into a topic classifier up to a larger degree than otherwise. This is specifically a concern when features that contain words are used, as such words could be content words (as opposed to function words).
If compound grams contain excessive topic information, this may lead the model to become a topic classifier instead of an author classifier. Therefore, to ensure that this is not the case for our best performing compound gram (word-POS or word-dep as shown in Table 12), Figure 12 lists the 20 most important features. The features were aggregated from each of the 10 evaluation folds as used in our RF models (duplicate entries are removed).
In this case, none of the recorded compound grams contains content or topic words. Interestingly, the identified Arabic words in the list above are also Arabic function words. The only arguable word is '' '', which translates to ''God''. However, since the word '' '' is often used in various expressions that are independent of the topic, we believe that it is fair to consider it a word that does not contain significant topic information. On the other hand, the least performing features (i.e., features that contribute least to AA model's decision in solving AA problems) contain a significant amount of content or topic words. A list of such features is presented in Figure 13.
This supports the claims that the KIT-30 dataset does not include meaningful author-topic bias, and that the suggested compound grams are reasonably assisting the learning algorithm to find AID models, as opposed to topic classification models.
VIII. FEXTRACTOR: EXTENSIVE STYLOMETRY FEATURE EXTRACTION LIBRARY
One of the critical issues that face today's research on stylometry is the fact that implementations of most of the stylometry-related proposed methods are not released publicly. As a result, re-evaluating, or comparing newer methods against the previous ones is often extremely difficult due to the need for re-implementing those methods again (which requires a tremendous amount of time and effort).
A notable aspect of the research in e-text stylometry, is the enhancement of feature extraction methods. Currently, such methods are highly diverse, and range from simple letter counts, up to more sophisticated ones that use independent statistical models, such as POS taggers. For example, it is quite common in the literature that a good portion of the considered feature extraction methods are evaluated in isolation, without adequate comparison against existing methods to truly justify their relative effectiveness. Another issue is the lack of adequate generalizations of the proposed methods, which leaves some of the novel variants unstudied.
A. SUPPORTED FEATURE EXTRACTION METHODS
The following feature extraction methods are supported:
• n-grams (classical n-grams), with parameters:
-Normalize (Boolean): If set to True, the library will normalize the number of occurrences of each pattern.
-Gram: Table 13 presents a list of supported grams.
-Cache (path): If set to None, then caching is disabled. If set to a path, then caching is enabled. This can be useful for expensive features, such as those that require making use of POS taggers (the cache will save time by avoiding parsing the same sentences twice).
• k-skip n-grams, with parameters:
-k: the total number of tolerated adjacency violations in an n-gram, in the unit of grams. E.g., k = 2 will tolerate up to 2 adjacency violations, while k = 0 will not tolerate any and cause it to be identical to classical n-grams.
• Rewrite-rules. Unlike other rewrite-rules implementations, ours has the novelty that it allows us to substitute the terminal words by their alternative forms (e.g., word shape). For consistency, we refer to this as a ''gram''. Additionally, compound grams are also made available to the rewrite-rules feature extraction function. The parameters are:
-Normalize (Boolean).
-l.
-Gram. Table 13 lists all supported gram definitions.
B. GENERALIZATION OF n-GRAM METHODS
This section presents the mechanism by which our library implements n-grams, k-skip n-grams, and syntactic n-grams.
In order to simplify the implementation, enhance the ability to introduce more novel variants, as well as extend the coverage of the library, we have generalized all of the n-gram-based methods as the at least l-frequent dir-directed k-skip n-grams. Then, we implemented this generalization instead. As a result of this, we get a more extensive library that is also simpler and allows superior code re-use. Further details are presented below. Consider the text example that is presented in Figure 14, and, for simplicity, suppose that grams are defined to be words. Then, if the parameter dir = spatial, the example text in Figure 14 is represented in the following row matrix in (1).
The quick fox jumped over the lazy dog (1) Then, the sliding window, as depicted in Figure 1, will operate on the matrix in (1) on a row-by-row basis. Since there is a single (but long) row, the sliding window will move along that one row, depending on the chosen value of parameter n.
On the other hand, if the parameter dir = deptree, the example text in Figure 14 is represented in the following row matrix in (2). Note that each row represents a path from the root node towards the numerous leaf nodes as we walk down the dependency tree that is depicted in the figure. Then, similar to the dir = spatial case, the sliding window, as depicted in Figure 1, will operate on the matrix in (2) on a row-by-row basis. Since there are 5 rows, the sliding window will move along each row independently. It can be seen that, because of this design, we are able to re-use our sliding window code for both dir = spatial (classical n-grams) and dir = deptree (syntactic n-grams with dependency trees).
Alternatively, one may decide to construct a matrix similar to the one in (1), but using a different type of tree, or methods that are not necessarily based on a linguistic basis. Extending this library is as simple as introducing code that defines a matrix out of sentences.
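For instance, a tree-directed matrix can be produced by enumerating root-to-leaf paths, as in the sketch below; the dependency tree is hard-coded here purely for illustration, whereas the library obtains it from a parser.

def root_to_leaf_paths(children, node, prefix=()):
    """Enumerate all root-to-leaf paths of a tree given as a child adjacency map."""
    path = prefix + (node,)
    kids = children.get(node, [])
    if not kids:
        return [path]
    paths = []
    for kid in kids:
        paths.extend(root_to_leaf_paths(children, kid, path))
    return paths

# hypothetical dependency tree for the example sentence
children = {"jumped": ["fox", "over"], "fox": ["The", "quick"],
            "over": ["dog"], "dog": ["the", "lazy"]}
for row in root_to_leaf_paths(children, "jumped"):
    print(row)    # each row is then fed to the same n-gram sliding-window code independently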
As for the parameter k that specifies the total number of permissible gram skips, it is implemented in the sliding window code and is therefore fully re-used elsewhere, independent of the direction. Therefore, the rest of the code is re-used, independent of how the matrices are defined.

# count raw k-skip n-gram patterns with raw frequencies
p = fextractor.getcount_ksngrams(m, k=2, n=2, normalize=False)
# print the score

Additionally, if using a vector representation is required, the represented texts can be trivially converted into vectors. Below is an example where two distinct texts, text1 and text2, are transformed into a vector space. Such vectors could then be used by other classifiers as required.
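Since the code listing referenced above did not survive extraction intact, the following sketch only indicates what such usage might look like; the counting function is a stand-alone re-implementation, and any resemblance to the actual Fextractor API (beyond the getcount_ksngrams fragment shown above) is an assumption.

from collections import Counter
from itertools import combinations

def count_ksngrams(words, k, n):
    """Stand-in counter of k-skip n-grams over a single spatial row of word grams."""
    counts = Counter()
    for start in range(len(words) - n + 1):
        window = range(start + 1, min(start + n + k, len(words)))
        for rest in combinations(window, n - 1):
            counts[tuple(words[i] for i in (start, *rest))] += 1
    return counts

text1 = "the quick fox jumped over the lazy dog"
text2 = "the lazy dog slept under the quick fox"
c1, c2 = (count_ksngrams(t.split(), k=0, n=2) for t in (text1, text2))

# align both texts on the union of observed patterns to obtain comparable vectors
patterns = sorted(set(c1) | set(c2))
v1 = [c1.get(p, 0) for p in patterns]
v2 = [c2.get(p, 0) for p in patterns]
print(v1)
print(v2)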
IX. CONCLUSION
A key contribution of this paper relates to the uncertainty that is associated with the applicability of stylometry problem solvers against the domain of Emirati Arabic texts. To work towards addressing this issue, we have constructed the KIT-30 dataset, which is the first Emirati social media author-identification evaluation dataset. Interestingly, our studies found that the scalability issues of AA problem solvers with respect to the size of the suspect author space, as generally reported in the literature, are noticeably more severe than what our findings indicate. For example, we were able to achieve a classification accuracy of over 0.98 when solving AA problems that were constructed based on chunks of Emirati tweets, with a set of 30 suspect authors. This accuracy is notably higher than the accuracies reported in the literature for similar suspect author space sizes [7], [17], especially when knowing that our chunks of tweets, per author, remained relatively small (only a few hundred tweets per chunk).
Additionally, in order to work towards addressing the lack of conveniently-available implementations of stylometry methods, we have developed an extensive e-text feature extraction library, with a highly intuitive API. This library offers, by far, the most extensive set of e-text stylometry feature extraction methods to date, which is partly thanks to our generalization of n-gram-based feature extraction methods. The library also contains a number of novelties, such as novel definitions of grams (e.g., compound grams) for both n-gram-based methods and CFG rewrite-rules. Interestingly, when using our feature extraction library, our evaluation of efficient AA solvers against Emirati tweet AA problems indicates that the use of compound grams allows for the identification of more accurate AA models. • The KIT-30 dataset is available at https://gitlab.com/mmaakh/kit-30.git.
"Computer Science"
] |
Development and performance assessment of an advanced Lucas‐Kanade algorithm for dose mapping of cervical cancer external radiotherapy and brachytherapy plans
Abstract Purpose The aim of this study was to verify the possibility of summing the dose distributions of combined radiotherapeutic treatment of cervical cancer using the extended Lucas‐Kanade algorithm for deformable image registration. Materials and methods First, a deformable registration of planning computed tomography images for the external radiotherapy and brachytherapy treatment of 10 patients with different parameter settings of the Lucas‐Kanade algorithm was performed. By evaluating the registered data using landmarks distance, root mean square error of Hounsfield units and 2D gamma analysis, the optimal parameter values were found. Next, with another group of 10 patients, the accuracy of the dose mapping of the optimized Lucas‐Kanade algorithm was assessed and compared with Horn‐Schunck and modified Demons algorithms using dose differences at landmarks. Results The best results of the Lucas‐Kanade deformable registration were achieved for two pyramid levels in combination with a window size of 3 voxels. With this registration setting, the average landmarks distance was 2.35 mm, the RMSE was the smallest and the average gamma score reached a value of 86.7%. The mean dose difference at the landmarks after mapping the external radiotherapy and brachytherapy dose distributions was 1.33 Gy. A statistically significant difference was observed on comparing the Lucas‐Kanade method with the Horn‐Schunck and Demons algorithms, where after the deformable registration, the average difference in dose was 1.60 Gy (P‐value: 0.0055) and 1.69 Gy (P‐value: 0.0012), respectively. Conclusion Lucas‐Kanade deformable registration can lead to a more accurate model of dose accumulation and provide a more realistic idea of the dose distribution.
| INTRODUCTION
Locally advanced cervical cancer is standardly treated using a combination of concomitant chemotherapy, external radiotherapy (EBRT), and brachytherapy (BRT) boost to the cervical region. 1 The current standard for dose accumulation of the combined radiotherapeutic treatment according to the International Commission on Radiation Units and Measurements (ICRU) report 89 is based on the simple DVH parameter addition without employing an adequate registration model. 2 Doses for absolute organs at risk (OAR) volumes are estimated by adding dose-volume histogram (DVH) parameters from each fraction, assuming that the location of a given hotspot volume (e.g., 2 cm 3 ) is identical in each fraction. 3 So the worst alternative should always be considered. For critical organs, this indicates that the maximum dose calculated in each fraction is always realized in the same volume of tissue. On the contrary, in the case of tumors, the minimum doses always meet in the same volume.
Changes in the patient's irradiation position between EBRT and BRT and the introduction of applicators close to the tumor before each fraction of BRT causes organ movement and soft tissue deformation. As a result, the specific tissue structures generally occupy a completely new position compared to previously applied radiation fields. The tissue structures bring previously accumulated doses to these new positions, which are often significantly different from those that would correspond to their current position. To correctly express the accumulated dose, the absorbed doses must be summed up not in the same spatial coordinates, but always in the same anatomical volume of tissue. 4 Therefore, a prerequisite for proper adaptive dose accumulation planning is the registration of shifts that occur within the tissues between individual fractions. These displacements are formally described by the deformation field. Appropriate anatomy imaging is the starting point for determining shifts between fractions. The general registration scheme involves the selection of a reference fraction, to which so-called floating images, corresponding to the remaining fractions of treatment, are subsequently related. Using a reference and a floating image, an appropriate registration algorithm can then be employed to determine the deformation fields. Knowledge of the deformation field enables reconstruction of the actual position of the tissues with respect to the applied radiation fields, thus, determining the appropriate contribution to the accumulated dose in each fraction. This procedure is referred to as dose mapping.
The use of the concept of dose mapping by image registration thus, enables the addition of doses in corresponding anatomical areas, even when applied in different fractions of combined radiotherapy.
The concept described above was used in this study to develop a software (SW) tool designed to optimize dose prescription in combined cervical radiotherapy. The application of such a tool to real tissue changes is merely an approximation of reality. Therefore, an integral part of the design of a model tool for dose mapping using image registration is careful testing of its statements.
Thus, this work aimed to create an SW tool containing an algorithm for deformable image registration (DIR) and to verify its possibilities for adaptive planning of combined EBRT and BRT treatment of patients with cervical cancer.
| MATERIALS AND METHODS
The study was divided into two parts. The setting of the parameters of the algorithm for the registration of computed tomography (CT) images and dose mapping of the combined EBRT and BRT was specified in retrospective study #1. The optimization phase of study #1, in which the optimal parameters of the registration algorithm were assessed, was performed on a group of 10 patients with cervical cancer. The second part of this study (study #2) assessed the performance of the algorithm by evaluating the accuracy of dose mapping on 10 other patients and compared the results with the current standard of dose accumulation in combined radiotherapy.
The fractionation parameters applied in the treatment of patients whose CTs were used in this study were EBRT 45 Gy in 25 fractions on planning target volume including nodes and then boost to the cervical region 6 Gy in three fractions using the volumetric modulated arc therapy (VMAT). After completing an external radiotherapy course, patients were treated with 28 Gy intracavitary BRT on highrisk clinical target volume in four fractions using a Fletcher Utrecht CT compatible applicator.
Compared to conventional radiotherapeutic procedures, combined radiotherapy is characterized by significantly different fractionation parameters during external and brachytherapeutic parts. For this case, ICRU report 89 recommends recalculating the absorbed dose distribution to the equieffective dose. The equieffective dose related to normofractionation is denoted by the symbol EQD2. 2,5 The following relationship was used to calculate EQD2 based on the applied doses: in which the ratio α=β was set to 10 Gy for tumor response and 3 Gy for late effect in critical organs.
2.A | Image preprocessing
The presence of an applicator in the reference image represents a major problem when registering floating images, acquired as part of EBRT planning, to a reference image corresponding to the BRT. In general, it is not possible to register data that are present in only one of the images. Following the procedure suggested by Moulton et al., 6 the following preparatory steps were applied to each BRT study before the deformable registration: 1. The area occupied by the applicator in the image was replaced with substitute image data. 2. The replaced area was blurred using a Gaussian filter.
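A minimal sketch of such preprocessing is given below, assuming the applicator has already been delineated as a binary mask; the substitute HU value and the Gaussian width are illustrative placeholders, not the settings used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_applicator(ct_hu, applicator_mask, fill_hu=0.0, sigma_vox=2.0):
    """Replace applicator voxels in a BRT planning CT and smooth the edited region.

    ct_hu           : 3D array of Hounsfield units.
    applicator_mask : boolean array, True where the applicator was delineated.
    fill_hu         : substitute HU value (water-equivalent here, illustrative).
    sigma_vox       : Gaussian smoothing width in voxels (illustrative).
    """
    edited = ct_hu.astype(np.float32).copy()
    edited[applicator_mask] = fill_hu            # step 1: replace the applicator area
    blurred = gaussian_filter(edited, sigma=sigma_vox)
    # step 2: blur only the replaced area, leaving the remaining anatomy untouched
    edited[applicator_mask] = blurred[applicator_mask]
    return edited
```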
2.B | Rigid and affine preregistration
Application of the algorithm for deformable image registration to the significantly different CT studies of the individual fractions of combined radiotherapy demonstrated that, for its effective functioning, it is advantageous to first preregister all CT studies manually using a rigid transformation to match the corresponding bone structures.
Image data matching was further improved by a relatively fast automatic affine preregistration of the floating images to the reference image. These steps were then followed by deformable registration to fine-tune the image similarity.
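The study does not state which toolkit performed the automatic affine step; the following sketch shows one possible implementation using SimpleITK, with illustrative metric and optimizer settings rather than the parameters actually used.

```python
import SimpleITK as sitk

def affine_preregister(reference_path, floating_path):
    """Automatic affine preregistration of a floating CT to the reference CT."""
    fixed = sitk.ReadImage(reference_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(floating_path, sitk.sitkFloat32)

    # Initialize the affine transform on the image geometry (after manual rigid
    # alignment of the bone anatomy, as described above).
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    # Resample the floating CT onto the reference grid before deformable registration.
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear,
                         -1000.0, moving.GetPixelID())
```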
2.C | Lucas-Kanade deformable registration
Considering the results of studies 7-10 comparing the accuracy of different approaches to deformable CT data registration, the Lucas-Kanade (LK) algorithm was used. This method treats the deformation as a continuous optical flow between CTs. The results of the above studies indicate that, when modeling "physiologi-" …
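To make the optical-flow idea concrete, the sketch below shows the classic single-scale, two-dimensional Lucas-Kanade least-squares estimate of a local displacement. The tool described in this work extends this basic scheme with weighted windows and a pyramidal image representation, neither of which is shown here.

```python
import numpy as np

def lucas_kanade_flow(ref, flo, y, x, radius=3):
    """Estimate the local displacement (dy, dx) at pixel (y, x) that maps the
    reference image onto the floating image, using the classic Lucas-Kanade
    least-squares solution over a (2*radius+1)^2 neighborhood (interior pixels)."""
    # Spatial gradients of the reference and the temporal difference image.
    gy, gx = np.gradient(ref.astype(np.float64))
    gt = flo.astype(np.float64) - ref.astype(np.float64)

    sl = (slice(y - radius, y + radius + 1), slice(x - radius, x + radius + 1))
    A = np.stack([gx[sl].ravel(), gy[sl].ravel()], axis=1)   # N x 2 gradient matrix
    b = -gt[sl].ravel()                                      # temporal term

    # Least-squares solution of A v = b for the flow vector v = (dx, dy).
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v[1], v[0]   # return (dy, dx)
```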
2.D | Patient study #1: registration parameter setting
The function of the selected registration algorithm can be further optimized for a specific application. The accuracy of image data registration is essential for the subsequent mapping of dose distributions of individual patients, as the same deformation vector field is used.
In study #1, the values and combinations of the number of image pyramid levels and size of the neighborhood around a given point were optimized.
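A possible skeleton of this parameter optimization is sketched below; the search ranges and the placeholder scoring function are assumptions, and the actual registration and evaluation pipeline (using the three criteria described below) would have to be plugged in.

```python
from itertools import product

def run_registration_and_score(levels, radius):
    """Placeholder for running the DIR on the study #1 patients with the given
    pyramid depth and neighborhood radius, returning the three evaluation
    criteria described below (mean landmark distance, RMSE, mean gamma score)."""
    return float("inf"), float("inf"), 0.0

# Illustrative search ranges; the study reports its optimum at two pyramid
# levels and a neighborhood radius of 3 voxels.
candidates = product([1, 2, 3], [2, 3, 4, 5])
results = {(lv, r): run_registration_and_score(lv, r) for lv, r in candidates}

# Prefer a small landmark distance and RMSE, and a large gamma score.
best = min(results, key=lambda k: (results[k][0], results[k][1], -results[k][2]))
```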
Various metrics have been proposed to calculate the similarity of images and to assess registration accuracy. However, when evaluating registration errors under real conditions, it appears that none of the available criteria can be applied universally, and the application of a single criterion often leads to incorrect conclusions. An example is the failure of image-similarity evaluation procedures based on image intensity in areas of homogeneous intensity. 15 Following the recommendations of publications 6 and 16, a combination of the following approaches was used to assess the accuracy of registration when optimizing the parameters.
2.D.1 | Landmarks distance
In the landmark-tracking technique, the degree of similarity is determined by the distance between the corresponding points marked in both the floating and reference images. When testing the algorithm, at least 100 well-recognizable matching points were manually identified in both CT studies by an experienced physician. After registration, the average 3D distance for each patient was calculated.
Since the area to which the BRT boost dose distribution was to be delivered was smaller than the area of interest for the EBRT, the selection of landmarks was not uniform across the CT study. A denser network of points (a minimum of five points in a given CT slice) was marked in slices affected by the BRT dose distribution. In other parts of the CT covered only by the EBRT dose, a smaller number of points were marked (always a minimum of two points in the slice).
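As an illustration, the mean 3D landmark distance could be computed as in the following sketch, assuming the corresponding points are available in voxel coordinates together with the voxel spacing; the function name and interface are hypothetical.

```python
import numpy as np

def mean_landmark_distance(ref_points, warped_points, spacing_mm=(1.0, 1.0, 1.0)):
    """Mean 3D distance (mm) between reference landmarks and the corresponding
    floating-image landmarks after they have been mapped through the
    registration result.

    ref_points, warped_points : arrays of shape (N, 3) in voxel coordinates.
    spacing_mm                : voxel size along each axis.
    """
    diff_mm = (np.asarray(ref_points) - np.asarray(warped_points)) * np.asarray(spacing_mm)
    return float(np.mean(np.linalg.norm(diff_mm, axis=1)))
```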
2.D.2 | Mean squared error (MSE)
This approach is based on a comparison of the distribution of HU values in the corresponding images using a mathematical or statistical criterion. As noted above, intensity-based criteria provide little information in areas of homogeneous intensity; therefore, proper deformation in these areas is not guaranteed. Thus, a more robust method of comparing two intensity distributions was also used, which, to some extent, combines both approaches and considers not only differences in intensity (or HU) at a given point but also spatial differences: gamma analysis. 17,18 Gamma analysis was performed only within the patient's contour. For practical reasons, Verisoft v.7.1 software (PTW, Freiburg, Germany) was used to evaluate the agreement between individual CT slices by two-dimensional gamma analysis. The evaluation criteria were set to 2%/2 mm with reference to the local "dose." An average gamma score, specifying the percentage of image pixels that fulfilled the selected criteria, was determined for each set of registration parameters.

2.E.1 | Comparison with other registration algorithms

Based on the results of published studies, 6,7,9,10 indicating that both reference algorithms (HS and modified Demons) were sufficiently accurate in comparison with other freely or commercially available algorithms, their results were used as a reference to evaluate the accuracy of the tested registration model.
2.E.2 | Comparison with the current standard of dose accumulation
After mapping the dose distributions, the results can be presented as a summary DVH of EBRT and BRT treatment expressed in EQD2.
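A minimal sketch of this summation is given below, assuming the EBRT and BRT dose grids have already been mapped onto the reference anatomy; the per-voxel EQD2 conversion using equal fraction sizes and the simple DVH binning are illustrative simplifications, not the study's implementation.

```python
import numpy as np

def voxel_eqd2(dose_gy, n_fractions, alpha_beta_gy):
    """Per-voxel EQD2 for a distribution delivered in n equal fractions."""
    d = dose_gy / n_fractions                      # voxel dose per fraction
    return dose_gy * (d + alpha_beta_gy) / (2.0 + alpha_beta_gy)

def cumulative_dvh(total_eqd2, structure_mask, bin_gy=0.1):
    """Cumulative DVH: dose bins (Gy) and volume fraction receiving >= that dose."""
    voxels = total_eqd2[structure_mask]
    bins = np.arange(0.0, voxels.max() + bin_gy, bin_gy)
    volume_fraction = np.array([(voxels >= b).mean() for b in bins])
    return bins, volume_fraction

# Example (illustrative): bladder DVH of the accumulated treatment, alpha/beta = 3 Gy.
# total = voxel_eqd2(ebrt_mapped, 25, 3.0) + voxel_eqd2(brt_mapped, 4, 3.0)
# bins, vf = cumulative_dvh(total, bladder_mask)
```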
2.E.3 | Statistical methods
Welch's two-sample t-test was used to compare the average dose differences at landmarks between the LK algorithm and the HS and modified Demons algorithms.
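For illustration, such a comparison could be carried out with SciPy's unequal-variance t-test as sketched below; the numbers are placeholder values, not data from the study.

```python
from scipy import stats

# Hypothetical per-landmark dose differences (Gy) for two registration algorithms.
dose_diff_lk = [0.4, 0.7, 0.3, 0.5, 0.6]
dose_diff_hs = [0.9, 1.1, 0.8, 1.2, 1.0]

# Welch's two-sample t-test (unequal variances).
t_stat, p_value = stats.ttest_ind(dose_diff_lk, dose_diff_hs, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```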
3.A.3 | Gamma analysis
The evaluation of the image similarity by gamma analysis is presented in Table 3 and confirms the results of the previous methods.
With the gamma analysis criteria set to 2%/2 mm, the highest mean gamma score of the registered CT studies over all patients was 86.7%.
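For readers unfamiliar with the criterion, the following brute-force sketch illustrates a local 2%/2 mm two-dimensional gamma comparison of two images in NumPy; it is a conceptual illustration only, not a reimplementation of the Verisoft evaluation used in the study.

```python
import numpy as np

def gamma_pass_rate(ref, eva, spacing_mm, percent=2.0, dta_mm=2.0):
    """Brute-force 2D gamma analysis with a local percent/distance criterion.
    Returns the fraction of non-zero reference pixels with gamma <= 1."""
    ref = ref.astype(np.float64)
    eva = eva.astype(np.float64)
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny) * spacing_mm[0],
                         np.arange(nx) * spacing_mm[1], indexing="ij")
    search = int(np.ceil(dta_mm / min(spacing_mm))) + 1   # limit the search window
    passed, evaluated = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] == 0:
                continue                                   # skip empty background
            i0, i1 = max(0, i - search), min(ny, i + search + 1)
            j0, j1 = max(0, j - search), min(nx, j + search + 1)
            dist2 = ((yy[i0:i1, j0:j1] - yy[i, j]) ** 2 +
                     (xx[i0:i1, j0:j1] - xx[i, j]) ** 2) / dta_mm ** 2
            diff2 = ((eva[i0:i1, j0:j1] - ref[i, j]) /
                     (percent / 100.0 * ref[i, j])) ** 2    # local criterion
            gamma = np.sqrt(np.min(dist2 + diff2))
            evaluated += 1
            passed += gamma <= 1.0
    return passed / max(evaluated, 1)
```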
3.A.4 | Evaluation summary
The results of the evaluation using all three selected criteria correspond to each other. Based on the average landmark distance, RMSE, and gamma scores, the best agreement was achieved for two pyramid levels and a neighborhood radius of 3 voxels. The parameters optimized in this manner were introduced into the LK algorithm and further used for deformable registration in patient study #2.
Regarding patient no. 5 (in Tables 1-3), the average distance of the corresponding points was roughly double that of the other patients, and the average gamma score after affine preregistration was <50%. The presented SW model failed, most likely because of the significantly different distribution of the patient's tissues during the acquisition of the planning CTs for BRT and EBRT. Moreover, its application generated visually unrealistic deformations (Fig. 4). It should also be noted that no form of post-optimization regularization was used in the method. The statistical analysis revealed a significant difference in the average dose differences at landmarks between the extended LK algorithm and both the HS (P = 0.0055) and modified Demons (P = 0.0012) algorithms. The accuracy of the LK method therefore appears to be slightly higher than that of the others.
4 | DISCUSSION
The presented results indicate that the proposed SW tool, using the LK algorithm extended by the implementation of weighted windows and a pyramidal representation of the image, can be used to estimate the cumulative dose of combined EBRT and BRT with higher accuracy than similar approaches. 7

Fig. 2. Difference between CT studies of patient no. 1 before registration (left) and after affine registration (right).
Fig. 3. Difference between CT studies of patient no. 1 after deformable registration by the optimized pyramid LK method with weighted window (pyramid level 2, neighborhood radius of 3 voxels).

In patient study #2, the accuracy of dose deformation and mapping of dose distributions using the LK algorithm was evaluated.
According to the statistical analysis, the results summarized in … In addition to the possible advantages of the presented algorithm, the results of this work also showed the limits of its use.
Although the results indicate that the method can potentially generate an accumulated dose estimate with the stated accuracy, its practical use is significantly limited by the need to manually remove the applicator introduced into the patient's body in BRT applications.
This procedure could also introduce an additional error into the evaluation process.
Furthermore, it appears that the algorithm fails in cases of extreme changes in the position of the tissues between the floating and reference CT images. Unfortunately, such cases are not exceptional when EBRT is combined with BRT (see the results for patient no. 5 in Tables 1-3). In this context, it should be noted that the higher robustness of the presented tool against extreme deformations is provided mainly by the extension of the LK algorithm with a pyramidal representation of the CT image. However, this robustness is not attributable to the pyramid approach alone.
Thus, the use of manual alignment of bone structures and affine preregistration of CT images is of similar importance to the inclusion of the pyramid method. Without using these auxiliary steps, the results for the entire patient data tested were erroneous, as shown in Fig. 4.
Registration inaccuracies were more pronounced, particularly in the case of soft tissues. Bone structures were usually paired correctly owing to their distinct densities. In Fig. 6, which shows the results of the comparison by gamma analysis, it can be observed that in the case of soft tissues in areas with homogeneous image intensity, the detected registration errors were higher. This is a relatively serious finding, as the registration errors are subsequently transmitted to the deformation of the contours of critical structures, such as the bladder and rectum (Fig. 7).

Fig. 5. EQD2 differences between bladder DVH parameters obtained from simple DVH parameter addition and DIR-based dose accumulation.

Fig. 6. Comparison of the agreement of an individual CT slice of patient no. 1 using gamma analysis after affine registration (left) and after deformable registration by LK with optimized parameters (right) (pyramid level 2, neighborhood radius of 3 voxels).

Fig. 7. Rectum (green) and bladder (blue) matching differences for patient no. 1 of study #1.
5 | CONCLUSION
This study shows that the extended LK method of DIR could provide competitive accuracy of dose accumulation. The properly deformed and summed dose distribution after each BRT fraction could then be used for adaptive planning of the following fraction.
Based on the comparison with the current standard of dose accumulation, it can be concluded that the use of deformable registration should allow radiation oncologists to gain a more realistic view of the dose distribution within the patient.
The outcomes of this study also indicate some possibilities for further development of procedures for the preliminary detection of CT studies unsuitable for this type of registration; that is, finding a criterion for the magnitude of the anatomical change between fractions in which DIR fails.
CONFLICT OF INTEREST
No conflict of interest.
DATA AVAILABILITY STATEMENT
Data available from the corresponding author upon request.